Italy has become the first Western country to block the artificial intelligence (AI) chatbot ChatGPT, citing insufficient regulatory oversight and suspected data-collection breaches. The Italian Data Protection Authority ordered OpenAI to temporarily stop processing Italian users’ data while it investigates the chatbot’s compliance with Europe’s privacy regulations. The Authority also accused OpenAI of failing to verify the age of users, allegedly allowing data to be collected from children under 18. It has given OpenAI 20 days to address the data concerns or risk a fine of up to €20 million or 4% of its annual turnover.
The ban has sparked concerns over the future of AI regulation worldwide. Other countries in the European Union are reportedly interested in following suit over data-privacy concerns, and the bloc has already announced plans for landmark legislation. The proposed “European AI Act” is intended to complement the General Data Protection Regulation and impose heavy restrictions on AI use in critical infrastructure, law enforcement, education, and the judicial system. At the forefront of the proposed legislation is general-purpose AI, such as ChatGPT, owing to its high-risk applications. Additionally, Sweden, Belgium, France, Germany, and Ireland are conducting their own investigations into how best to regulate AI use.
It is hard to argue against international AI regulation, and this episode may mark the beginning of tight regulatory scrutiny of AI systems.