OpenAI Faces Class-Action Lawsuit: ChatGPT Allegedly ‘Sold’ User Data to Advertising Giants Meta and Google

OpenAI, a leader in generative artificial intelligence, has found itself at the epicenter of a legal earthquake. According to recent reports, the company faces a class-action lawsuit in California alleging that ChatGPT secretly shared sensitive user information with advertising giants Meta and Google. The accusations concern the use of tracking tools such as Facebook Pixel and Google Analytics, which allegedly transformed private conversations into monetizable data for targeted advertising.

In recent months, the world of artificial intelligence has focused primarily on the performance of new models, but while we watch benchmarks, a fundamental battle over privacy is playing out behind the scenes. According to Cybersecurity News, a lawsuit has been filed in the District of California accusing OpenAI of violating user privacy. The key problem is not the training of the models themselves, but the way the ChatGPT web interface interacts with third parties.

How does the "hidden" tracking in ChatGPT work?

According to the lawsuit filed by Amargo Couture, OpenAI integrated tools into its web interface that are standard for online marketing, but in the context of an AI chatbot represent a fundamental ethical and legal problem. Specifically, these are Facebook Pixel and Google Analytics.

Facebook Pixel is a code snippet that lets Meta track user behavior on websites. In the case of ChatGPT, the lawsuit alleges these tracking requests fire in real time: every time a user interacts with the chatbot, the Pixel can send data to Meta's servers. That data may include not only technical parameters but also contextual content, such as the browser tab title, which may reflect the topic of the user's current question.
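To illustrate the mechanism the lawsuit describes: a browser-side tracking pixel typically fires an image request whose query string carries page context along with it. The sketch below is a simplified reconstruction in Python; the endpoint and parameter names are illustrative assumptions for this example, not Meta's actual specification.

```python
from urllib.parse import urlencode

def build_pixel_request(pixel_id: str, page_url: str, page_title: str) -> str:
    """Sketch of the kind of GET request a tracking pixel fires.

    The endpoint and parameter names are hypothetical; the point is that
    page context (URL, tab title) rides along as query-string metadata
    on every page view, without the site's text itself being exported.
    """
    params = {
        "id": pixel_id,     # site owner's pixel ID (hypothetical name)
        "ev": "PageView",   # event type
        "dl": page_url,     # document location: the current URL
        "dt": page_title,   # document title -- may mirror the chat topic
    }
    return "https://tracker.example.com/tr?" + urlencode(params)

# A browser tab title that reflects the user's question travels verbatim:
url = build_pixel_request("123456", "https://chat.example.com/c/abc",
                          "Symptoms of diabetes")
```

Note that nothing in this flow requires the conversation text itself: the tab title and URL alone can reveal what the user is asking about.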

On Google's side, the lawsuit focuses on Google Analytics and related tags for Google Ads. These tools allegedly capture hashed email addresses used to log into ChatGPT, device identifiers, and cookies that allow linking activity in ChatGPT directly to existing Google profiles. This data is subsequently used in systems such as "Core Audiences" or "Custom Audiences" for extremely precise ad targeting on Facebook and Instagram.
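The "hashed email" mechanism mentioned above is standard ad-tech identity matching: the platform receives a SHA-256 hash of the normalized address and matches it against hashes of its own users' emails, so identities can be linked without exchanging plaintext. A minimal sketch of how that matching key is produced:

```python
import hashlib

def hash_email_for_ad_matching(email: str) -> str:
    """Normalize an email address (trim whitespace, lowercase) and
    hash it with SHA-256, as ad platforms typically require for
    customer-list matching.

    The same address always yields the same hash, so two parties
    holding hashes of their users' emails can link identities
    without ever exchanging the plaintext addresses.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Different formatting of the same address collapses to one identifier:
a = hash_email_for_ad_matching("user@example.com")
b = hash_email_for_ad_matching("  User@Example.COM ")
```

This is why hashing alone is not anonymization: anyone who already knows the address can compute the same hash and recognize the user.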

What could all be at risk?

For the average user, the most important question is: What exactly is being transmitted? The lawsuit states that users have reasonable expectations that their conversations will remain between them and OpenAI. However, ChatGPT is often used for discussions about:

  • Financial situations (paycheck analysis, investment questions),
  • Health problems (symptom analysis, interpretation of medical reports),
  • Legal matters (contract preparation, legal questions).

If these topics are "pumped" into advertising systems, the user becomes the product: their most intimate questions end up letting Meta or Google serve them targeted ads for medications, banking products, or legal services.

Comparison with the competition: How do others respond to privacy?

This incident puts OpenAI in an uncomfortable position, especially when comparing their approach with other major players in the market. Different strategies are forming in the area of privacy:

Model / Company    | Availability in the Czech Republic | Privacy focus                             | Price (basic tier)
ChatGPT (OpenAI)   | Yes                                | Controversial (including tracking pixels) | Free / $20 per month (Plus)
Claude (Anthropic) | Yes                                | High (focus on "Constitutional AI")       | Free / ~20 EUR per month (Pro)
Gemini (Google)    | Yes                                | Integrated (within the Google ecosystem)  | Free / ~20 EUR per month (Advanced)

While Anthropic positions itself as the "safe and ethical" alternative, with privacy as a key selling point, OpenAI pursues massive scaling, often at the cost of integration with advertising technologies. Google's Gemini collects similar data, but it is part of an ecosystem built on data collection from the start, so the gap between expectation and reality is not as wide as with OpenAI.

Impact on users and companies in the EU and the Czech Republic

For Czech and European users, this situation is extremely sensitive due to strict GDPR regulation and the newly implemented EU AI Act. While the lawsuit in the USA relies on California laws (CIPA) and the federal Electronic Communications Privacy Act (ECPA), in Europe, similar conduct by OpenAI could lead to enormous fines from national data protection authorities (e.g., ÚOOÚ in the Czech Republic).

What does this mean for you?

  1. Regular user: If you use ChatGPT for sensitive queries, you must account for the possibility that your metadata (topic, device, ID) may be used for advertising. We recommend using "Temporary Chat" mode or turning off data training in settings.
  2. Companies in the Czech Republic: For companies that use ChatGPT to analyze internal documents, this situation is a warning. Using the free version of ChatGPT for corporate data is highly risky in this context. For secure deployment, the ChatGPT Enterprise version is necessary, which offers guaranteed privacy and does not provide data to third parties.

Within the EU, it is worth watching whether OpenAI will have to implement a stricter separation between the chat interface and advertising trackers for the European market in order to meet the transparency requirements of the AI Act.

Does this mean that Meta and Google read the text of my messages in ChatGPT?

The lawsuit claims that this is not a direct export of the entire conversation text, but a transfer of metadata and identifiers (such as cookies, email addresses, and contextual information from the browser), which allow advertising systems to know what the user is talking about and subsequently target ads at them.

How can I best protect myself in ChatGPT?

The best way is to turn off the "Chat History & Training" option in ChatGPT settings (Settings -> Data Controls). For maximum security within companies, it is recommended to use the API or the Enterprise version, which have different data processing conditions.

Is ChatGPT legal in the Czech Republic even with these risks?

Yes, the tool is fully available in the Czech Republic. However, if OpenAI were to violate GDPR rules, European regulatory bodies could enforce changes in how the service is provided in the EU, which could also affect the Czech market.
