Federal Trade Commission Launches Sweeping Investigation into OpenAI Amidst Data Leak Concerns and ChatGPT’s Accuracy Issues
The Federal Trade Commission (FTC) has initiated a comprehensive investigation into OpenAI, raising questions about potential violations of consumer protection laws that may have put personal data and reputations at risk. The agency recently delivered a 20-page demand to the San Francisco-based company, seeking records that shed light on how OpenAI addresses risks associated with its AI models, according to a document obtained by The Washington Post. This regulatory inquiry represents the most significant threat OpenAI has faced from U.S. regulators thus far, coinciding with the company’s global campaign to shape the future of artificial intelligence (AI) policy.
OpenAI’s ChatGPT has been hailed as the fastest-growing consumer application in history, sparking fierce competition among Silicon Valley firms to develop competing chatbots. Sam Altman, the company’s CEO, has emerged as an influential figure in the AI regulation debate, testifying on Capitol Hill, engaging with lawmakers, and meeting with President Biden and Vice President Harris.
However, OpenAI now faces a substantial challenge in Washington, as the FTC has repeatedly warned that existing consumer protection laws are applicable to AI, even as policymakers struggle to establish new regulations. Senate Majority Leader Charles E. Schumer (D-N.Y.) has indicated that it may take months to pass new AI legislation. The FTC’s demand for information from OpenAI signals its intent to enforce these warnings. If the agency finds evidence of consumer protection law violations, it has the authority to impose fines or place restrictions on the company’s data handling practices. The FTC has become the leading enforcer of consumer protection laws against major tech companies, having levied significant fines against Meta (formerly Facebook), Amazon, and Twitter for alleged violations.
The FTC’s request to OpenAI encompasses detailed descriptions of any complaints received regarding ChatGPT’s dissemination of “false, misleading, disparaging, or harmful” statements about individuals. The agency is investigating whether OpenAI engaged in unfair or deceptive practices that resulted in reputational harm to consumers. Additionally, the FTC is seeking records related to a security incident that OpenAI disclosed in March, involving a bug in its systems that exposed payment-related information and chat history data of certain users. The agency is examining whether OpenAI’s data security practices contravene consumer protection laws. OpenAI had stated in a blog post that the number of users affected by the incident was exceptionally low.
While the FTC declined to comment on the investigation, OpenAI CEO Sam Altman said in a tweet that the company would cooperate with the agency, emphasizing its commitment to keeping its technology safe and pro-consumer. Altman also said it was disappointing that news of the FTC’s request surfaced through a leak, which he argued does not help build trust. Still, he stressed OpenAI’s confidence that it complies with the law and puts consumers’ interests first.
During a hearing, Rep. Dan Bishop (R-N.C.) questioned FTC Chair Lina Khan about the legal authority empowering the agency to make demands of a company like OpenAI, reflecting concerns about potential overreach by the FTC. Bishop pointed out that libel and defamation cases are typically addressed under state laws, alluding to the FTC’s inquiries about disparagement. Khan responded by stating that libel and defamation are not the agency’s primary focus, but misuse of private information in AI training could constitute fraud or deception under the FTC Act. Khan emphasized the FTC’s focus on assessing substantial harm to individuals, which can manifest in various forms.
The FTC has consistently signaled its impending actions on AI through speeches, blog posts, op-eds, and news conferences. In an April speech at Harvard Law School, Samuel Levine, the director of the FTC’s Bureau of Consumer Protection, asserted the agency’s readiness to swiftly address emerging threats in the AI landscape while acknowledging the importance of innovation. Levine stressed that being innovative does not grant a license for recklessness, and the FTC is prepared to employ all available tools, including enforcement, to combat harmful practices.
The FTC’s approach to regulating AI has been communicated through vivid blog posts, sometimes invoking popular science fiction films to caution the industry about violating the law. The agency has warned against AI scams, deceptive use of generative AI to manipulate potential customers, and exaggerated claims about AI product capabilities. In April, Khan participated in a news conference alongside Biden administration officials, highlighting the risk of AI discrimination.
The FTC’s actions have received swift pushback from the tech industry. Adam Kovacevich, CEO of the Chamber of Progress, an industry coalition, acknowledged the FTC’s authority over data security and misrepresentation concerns. However, he questioned whether the agency has the jurisdiction to regulate defamation or the specific contents generated by ChatGPT. Kovacevich expressed concern that the FTC may prioritize flashy cases over achieving substantial results in securing AI technology.
Among the extensive information requested from OpenAI, the FTC seeks research, testing, and surveys that evaluate consumers’ understanding of the accuracy and reliability of ChatGPT’s outputs. The agency has made specific inquiries about the chatbot’s potential to produce disparaging statements, requesting records of complaints about false statements it has made. The FTC’s focus on fabrications stems from several high-profile incidents in which ChatGPT produced incorrect information that could damage individuals’ reputations. For instance, radio talk show host Mark Walters filed a defamation lawsuit against OpenAI, alleging that ChatGPT fabricated legal claims against him: the chatbot falsely asserted that Walters, host of “Armed American Radio,” had been accused of defrauding and embezzling funds from the Second Amendment Foundation in a lawsuit to which he was not actually a party.
The FTC’s request also delves into OpenAI’s products and how they are advertised, as well as the company’s policies and procedures for releasing new products, including any instances where it withheld a large language model because of safety concerns. The agency seeks detailed descriptions of the data used to train OpenAI’s products, much of which is text scraped from sources such as Wikipedia and Scribd. It is also interested in how OpenAI refines its models to address “hallucination,” in which a model confidently generates an answer even when it lacks the necessary information.
Furthermore, OpenAI must provide information regarding the scale of the March security incident and the steps taken to respond to the breach. The FTC’s Civil Investigative Demand also extends to how OpenAI licenses its models to other companies.
While the United States has lagged behind other governments in AI legislation and privacy regulation, there is now a flurry of activity in Washington to catch up. Senate Majority Leader Schumer recently hosted a briefing with officials from the Pentagon and intelligence community, discussing national security risks associated with AI. Schumer, along with a bipartisan group of senators, is actively working on new AI legislation, aiming to strike a balance between fostering innovation and ensuring appropriate safeguards. Vice President Harris also held a discussion at the White House on the safety and security risks of AI, involving consumer protection advocates and civil liberties leaders.
As the FTC’s investigation unfolds, the tension among innovation, consumer protection, and regulatory oversight in the AI space continues to escalate. OpenAI’s cooperation with the agency will be closely watched, and the outcome of the investigation may have far-reaching implications for the broader AI industry and its future regulation.