The U.S. Federal Trade Commission has opened an investigation into OpenAI, the maker of ChatGPT, over claims that it violated consumer protection laws by putting personal reputations and data at risk, according to an FTC request for information sent to the company.
The move poses the biggest regulatory threat yet to the Microsoft-backed startup that sparked the generative AI craze, captivating consumers and businesses while raising concerns about potential risks.
This week, the FTC sent the company a 20-page request for records on how OpenAI handles the risks related to its AI models.
The agency is investigating whether the company engaged in unfair or deceptive practices that caused “reputational harm” to consumers.
One of the questions concerns the steps OpenAI has taken to address the potential of its products to “generate statements about real people that are false, misleading or disparaging.”
![The FTC is investigating whether OpenAI has engaged in unfair or deceptive practices.](https://nypost.com/wp-content/uploads/sites/2/2023/07/NYPICHPDPICT000010835008.jpg?w=1024)
The Washington Post was the first to report on the probe.
The FTC declined to comment.
OpenAI didn’t immediately respond to a request for comment.
As the race to develop more powerful AI services accelerates, so does regulatory scrutiny of a technology that has the potential to upend the way societies and businesses operate.
Global regulators are pushing to apply existing laws covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, Reuters reported in May.
![Italian regulators temporarily blocked the AI-powered ChatGPT chatbot in March.](https://nypost.com/wp-content/uploads/sites/2/2023/07/NYPICHPDPICT000013979994.jpg?w=1024)
In the US, Senate Majority Leader Chuck Schumer has called for “comprehensive legislation” to advance and ensure safeguards on AI, and will hold a series of forums later this year.
OpenAI also ran into trouble in Italy in March, when the regulator shut down ChatGPT amid accusations that OpenAI had violated the European Union’s GDPR – a sweeping privacy regime introduced in 2018.
ChatGPT was later reinstated after the US company agreed to install age verification features and to let European users block their information from being used to train the AI model.