Governments vs ChatGPT: Investigations around the world
ChatGPT, the AI-powered tool that lets you chat and get answers to almost any question, has taken the world by storm. Now, governments around the world are starting to take notice, and several have launched investigations into OpenAI’s ChatGPT.
Here are the countries probing OpenAI, listed in alphabetical order. Dates of official announcements and media reports are included throughout, for readers who prefer to follow the chronology.
Canada’s Office of the Privacy Commissioner opened an investigation on 4 April following a complaint alleging that OpenAI collected, used, and disclosed personal information without consent.
Privacy commissioner Philippe Dufresne said that the impact of AI on privacy is a crucial concern and that his office must keep pace with rapidly evolving technology. His office has not disclosed further details, as the investigation is ongoing.
Prompted by Italy’s ban and Spain’s request to look into privacy concerns surrounding ChatGPT (see further down), on 13 April, the European Data Protection Board (EDPB) agreed to launch a task force to coordinate the work of European data protection authorities.
So far, there’s little information about the new task force, other than a decision to tackle ChatGPT-related action during the EDPB’s next plenary, scheduled for 26 April. The minutes of the EDPB’s 13 April plenary session are not yet available.
As reported in the media on 14 April, France’s data protection regulator (CNIL) opened a formal investigation into ChatGPT after receiving five complaints, three of which have been made public. The CNIL has not made any official announcement.
The second complaint to the CNIL, filed on 4 April, came from developer David Libeau, who argued in a blog post that OpenAI lacks transparency and fairness, and fails to safeguard people’s right to data protection.
The third CNIL complaint was filed on 12 April by member of parliament Éric Bothorel, after he noticed that ChatGPT often gives erroneous information. To test the tool, he asked ChatGPT for information about himself; the results, Bothorel says, were largely inaccurate (including his date of birth!).
Bothorel also took the initiative to organise a seminar on ChatGPT for French members of parliament. The event will take place at the National Assembly on 9 May.
Meanwhile, the French city of Montpellier has banned its officials from using ChatGPT. The decision was taken after deputy mayor Manu Reynaud recommended the ban as a precaution.
Germany’s data protection conference (DSK), the body of Germany’s independent federal and state data protection supervisory authorities, opened an investigation into ChatGPT (most likely on 10 April; the announcement is undated).
The announcement was made by the North Rhine-Westphalia watchdog; a similar announcement was made by the Commissioner for Data Protection and Freedom of Information of Hesse. Details are otherwise scarce, as the DSK itself has been mum about it.
Ireland’s data protection commissioner is in touch with Italy’s regulator over ChatGPT’s temporary ban in Italy, according to a media report. The commissioner said: ‘We will coordinate with all EU data protection authorities in relation to this matter.’ No other details have been made available so far.
Italy’s Data Protection Authority (the Garante, or GPDP) made Italy the first Western country to impose a temporary limit on OpenAI’s ChatGPT, on 31 March, citing four reasons: a data breach reported on 20 March, unlawful data collection, inaccurate results, and the lack of an age verification system to keep children safe. Read more: Italy’s rage against the machine
In compliance, OpenAI geo-blocked access to ChatGPT for anyone residing in Italy. But its API (the interface that allows other applications to interact with it) and Microsoft Bing (which also uses ChatGPT) remained accessible.
The Garante has now provided the company with a list of demands it must meet by 30 April before the authority will consider lifting its temporary ban. Among them, the Italian authority wants OpenAI to let people know how personal data will be used to train the tool and to request consent from users before processing their personal data.
But a more challenging request is for the company to introduce measures for identifying accounts used by children by 30 September and to implement an age-gating system for underage users. The age-verification request coincides with efforts by the EU to improve how platforms confirm their users’ age. The EU’s new eID proposal, for instance, will introduce a much-needed framework of certification and interoperability for age-verification measures. The way OpenAI tackles this issue will be a testbed for new measures.
Spain’s Data Protection Agency (AEPD) announced an independent investigation on 13 April to examine OpenAI’s practices for possible breaches of data protection law.
The AEPD also said that the week before, it had requested the EU’s data protection watchdog to include ChatGPT on the agenda of its next plenary meeting (see the EDPB task force above).
On 4 April, the Swiss Federal Data Protection and Information Commissioner (FDPIC) said it was in communication with the Italian Garante to obtain more information about its ban on ChatGPT.
The FDPIC hasn’t started a formal investigation; for the time being, it is advising users to understand how the company processes their data before entering queries or uploading images. The same goes for companies using other AI tools: They must ensure that they inform their users about how, and for which purposes, their data is processed.
The UK has not initiated any investigation either, but on 3 April, the Information Commissioner’s Office reminded organisations using generative AI software that there are no exceptions to the rules governing personal data.
On 30 March, the Center for AI and Digital Policy (CAIDP) filed a complaint with the US Federal Trade Commission (FTC), requesting it to open an investigation into OpenAI’s practices and to stop the company from issuing new commercial releases of GPT-4.
The concerns are broad: In its 47-page complaint, CAIDP argues that OpenAI’s practices are unfair and deceptive, and pose numerous privacy risks; that the company doesn’t provide evidence of safety checks to keep children safe from harmful content; and that, overall, the company’s practices violate emerging legal norms on AI governance.
CAIDP’s complaint has been a long time coming. In March, the organisation’s president, Marc Rotenberg, and its chair and research director, Merve Hickok, appealed to US policymakers to introduce guardrails ensuring algorithmic transparency, fairness, accountability, and traceability across the entire AI lifecycle. A fortnight later, they hinted they would file a complaint with the FTC.
Read next: Governments vs ChatGPT: Regulation around the world