The Centre for Artificial Intelligence and Digital Policy (CAIDP), a tech ethics group, has filed a complaint with the US Federal Trade Commission (FTC), requesting a halt to the commercial release of OpenAI’s GPT-4. This latest iteration of OpenAI’s Generative Pre-trained Transformer program has impressed some users with its human-like responses to queries, but CAIDP contends that it poses risks to privacy and public safety and is biased and deceptive. OpenAI is based in California and is backed by Microsoft.
CAIDP’s complaint follows an open letter that called for a six-month pause in the development of AI systems more powerful than GPT-4, citing potential risks to society. The group argues that GPT-4 fails to meet the FTC’s standards of transparency, fairness, explainability, and empirically sound practices that foster accountability.
CAIDP further alleges that OpenAI exposed private chat histories to other users, and that an AI researcher was able to take over someone’s account, view their chat history, and access their billing information without their knowledge. The group’s president, Marc Rotenberg, a veteran privacy advocate, has expressed concern that commercial pressures may be driving OpenAI to release a product that is not yet ready.
Rotenberg and his group believe that OpenAI is not complying with FTC guidelines and that the product is unfair and deceptive. He was one of more than 1,000 signatories to the letter urging a pause in AI experiments. CAIDP is urging the FTC to investigate OpenAI, enjoin further commercial releases of GPT-4, and establish the guardrails needed to protect consumers, businesses, and the commercial marketplace.
The complaint is likely to sharpen the debate over how AI systems are regulated, developed, and deployed. As these systems grow more capable, ensuring that they are transparent, explainable, and fair becomes increasingly important, and the risks posed by biased or deceptive systems are significant. The complaint may well prompt further scrutiny of GPT-4 and of other AI systems.