400 Million Users and Unexpected Side Effects
More than two years have passed since ChatGPT launched in November 2022, and the platform now counts 400 million weekly active users. For many of them, the chatbot has become an indispensable assistant at work, in their studies, and in everyday decisions. While earlier research focused primarily on productivity and efficiency, a new study conducted jointly by OpenAI and the MIT Media Lab offers a more troubling perspective: intensive use of ChatGPT may increase feelings of loneliness and social isolation.
Researchers analyzed millions of text conversations and thousands of voice interactions with the chatbot, and in parallel surveyed 4,000 users by questionnaire. In a separate study, the MIT Media Lab followed nearly 1,000 participants for four weeks as they held personal, non-personal, and open-ended conversations with ChatGPT. The results were clear: among so-called "power users," those with the highest daily usage, a statistically significant correlation emerged between intensity of use and feelings of loneliness, dependency, and problematic usage.
What the Study Specifically Revealed
Researchers defined several key metrics. Instead of merely subjective assessments of loneliness, they also measured objective levels of socialization — that is, actual time spent interacting with people outside the digital environment. They found that users with higher daily ChatGPT consumption exhibited not only stronger feelings of loneliness but also lower levels of real-world socialization.
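To make the kind of analysis described above concrete, here is a minimal sketch of how a correlation between daily usage time and a self-reported loneliness score might be computed. The data below is entirely synthetic, invented for illustration; it is not the study's data, and the study's actual methodology was far more elaborate.

```python
# Illustrative sketch only: all numbers below are synthetic, not from the study.
import statistics


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical sample: minutes of daily ChatGPT use vs. a 0-10 loneliness score.
minutes = [5, 10, 20, 30, 60, 90, 120, 180]
lonely = [2, 1, 3, 3, 5, 6, 7, 8]

r = pearson(minutes, lonely)
print(f"r = {r:.2f}")  # a positive r means usage and loneliness rise together
```

A coefficient near +1, as in this contrived sample, would indicate that heavier users also report more loneliness; note that correlation alone cannot say which way the causation runs, a limitation the researchers themselves acknowledge.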
An interesting finding was that a small group of users generates a disproportionately large share of emotionally charged interactions. "Our analysis reveals that a small number of users account for a disproportionate share of the most affective prompts," the study authors write. The conversations of these users contained more frequent expressions of vulnerability, low self-esteem, and dependence on technology.
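The "small group generates a disproportionate share" observation is a simple concentration measurement. The sketch below shows one way such a share could be computed, using hypothetical per-user counts of emotionally charged messages; the names and numbers are invented for the example, not taken from the study.

```python
# Illustrative sketch only: hypothetical counts, not the study's data.
from collections import Counter

# Hypothetical per-user counts of emotionally charged messages.
affective_counts = Counter({
    "user_a": 240, "user_b": 180, "user_c": 30,
    "user_d": 20, "user_e": 15, "user_f": 10, "user_g": 5,
})

total = sum(affective_counts.values())
top_two = sum(n for _, n in affective_counts.most_common(2))
share = top_two / total
print(f"top 2 of {len(affective_counts)} users produce {share:.0%} "
      "of affective messages")
```

In this contrived sample, two of seven users account for 84% of the affective messages, which is the shape of skew the study authors describe.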
Voice Mode: Help or Trap?
Both studies also tested ChatGPT's Advanced Voice Mode, which enables natural spoken conversation. Researchers configured the chatbot in two modes: neutral and engaging. In the neutral mode, the bot responded formally and factually; in the engaging mode, it displayed empathy, mirrored the user's emotions, and built an illusion of deeper connection.
The results revealed a paradox. Voice interaction did reduce feelings of loneliness in the short term, especially in the engaging mode. However, users who already felt lonely at the start of the study were far more susceptible to excessive use of the tool, and this overuse ultimately worsened their condition even further. Researchers describe this as a vicious circle: lonely people seek comfort in AI, but the more time they spend with it, the less time they devote to human relationships.
For "power users," paradoxically, the neutral mode was associated with even higher feelings of loneliness than the engaging mode. It seems that the cold, machine-like tone reminded them of the absence of real human contact more intensely than the simulation of empathy.
What This Means for Czech Users
ChatGPT is fully available in the Czech Republic, including Czech-language support, both in the free version and in the paid ChatGPT Plus (USD 20 per month, approximately CZK 460) and ChatGPT Pro (USD 200 per month, approximately CZK 4,600) subscriptions. Advanced Voice Mode is available to users on paid plans. Czech users therefore face the same risk as users elsewhere: easy accessibility and natural-language support may encourage excessive reliance on AI for handling emotional or social situations.
In the context of the European Union, these topics are beginning to connect with the debate on the EU AI Act, which emphasizes the protection of mental health and the ethical use of artificial intelligence systems. While regulation primarily targets risky applications in healthcare or education, the question of the psychological impacts of mass conversational tools remains open. The Czech professional community in the fields of psychology and digital well-being should take these results seriously — similarly to the case of social networks, where it took years for society to fully realize the impacts on mental health.
Expert View: Let's Be Careful About What We Measure
Kate Devlin, Professor of AI and Society at King's College London, who did not participate in the research, warned MIT Technology Review about a fundamental methodological challenge: "People don't necessarily use ChatGPT in an emotional way, but you can't separate humanity from interactions with technology." This means that even seemingly pragmatic queries about the weather, recipes, or work tasks may have an emotional subtext that users themselves are not fully able to articulate.
Both studies used the GPT-4o model, which OpenAI introduced in May 2024. In February 2025, the company released GPT-4.5, which, according to OpenAI, is more intuitive and emotionally intelligent. The study did not include this newer model, but it is reasonable to expect that a more convincing simulation of empathy may further increase the risk of the vicious circle: the more persuasive the illusion of connection, the harder it may be to keep the boundary between a tool and a substitute for a human relationship.
What to Do to Stay in Balance
The research does not say we should stop using ChatGPT. However, it warns against certain patterns of behavior that may be risky for mental health:
- Beware of emotional dependence — if you feel that conversations with the bot are replacing talks with friends or family, it's time for a change.
- Limit the time spent with AI — just like with social networks, setting time limits can help.
- Use AI as a tool, not as a companion — ChatGPT excels at solving tasks, education, and creativity, but it is not a substitute for therapy or human closeness.
- Watch for signals in children and teenagers — young users may be particularly susceptible to emotional attachment to AI characters.
Researchers emphasize that these technologies are still new and that their psychological impacts will become fully clear only with time. Just as with social media, where it took years for the negative effects to fully manifest, the same may apply to generative AI: today's carefree use may leave traces tomorrow.
Can the study results apply to other AI chatbots, such as Claude or Gemini?
The study focused specifically on ChatGPT, but the principles are likely transferable. All modern conversational models are designed to be engaging and natural. More important than the brand is the pattern of use — long, emotionally intense conversations with any AI system can lead to similar risks of isolation and dependency.
Are there any tools that could help track the time spent with ChatGPT?
OpenAI does not yet provide built-in tools for tracking time spent in the application, but users can turn to general operating-system features such as Screen Time on iOS or Digital Wellbeing on Android. Parental controls that limit time in the browser or in specific apps can also be useful for parents.
How did the results differ between text and voice mode?
Text conversations showed a stable correlation between the intensity of use and loneliness. Voice mode brought a more complex picture: it reduced feelings of loneliness in the short term, especially in the empathetic ("engaging") mode, but in users already predisposed to loneliness, it led to greater overuse and subsequent worsening of their condition.