OpenAI has temporarily taken down its chatbot, ChatGPT, in Italy after the country’s Data Protection Authority, known as the Garante, accused the Microsoft-backed company of breaching privacy rules. The probe was launched in part over OpenAI’s alleged failure to verify the age of ChatGPT’s users, who are required to be 13 or older. The Garante also cited “an absence of any legal basis that justifies the massive collection and storage of personal data” used to train the chatbot, and gave OpenAI 20 days to respond with remedies or risk a fine of up to €20m ($21.68m) or 4% of its annual worldwide turnover.
In response, OpenAI has disabled ChatGPT for users in Italy, and the website can no longer be accessed in the country. The chatbot was already unavailable in mainland China, Hong Kong, Iran, Russia, and parts of Africa, where residents cannot create OpenAI accounts. Since its launch last year, ChatGPT is estimated to have gained over 100 million monthly active users, making it the fastest-growing consumer application in history, according to a UBS study published in March.
The rapid development of AI technology has drawn attention from lawmakers in several countries, with many experts advocating for new regulations to govern AI due to its potential impact on national security, jobs, and education. The European Commission, which is currently debating the EU AI Act, has called for all companies active in the EU to respect EU data protection rules. European Commission Executive Vice President Margrethe Vestager has also tweeted that the Commission is inclined not to ban AI itself but to regulate its uses.
In an open letter published on Wednesday, Tesla CEO Elon Musk and a group of AI experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, citing potential risks to society. Meanwhile, some AI researchers, such as Johanna Björklund, an associate professor at Umeå University in Sweden, have criticized OpenAI for its lack of transparency regarding how it trains its AI models.
OpenAI stated that it actively works to reduce the amount of personal data used in training its AI systems, so that its models learn about the world rather than about private individuals.