Pressure on AI giants: Protests in San Francisco call for public commitments to safety

In San Francisco, the heart of the technology world, tension is escalating between the creators of cutting-edge language models and activists campaigning for humanity's safety. Protesters outside the offices of key players like OpenAI, Anthropic, and xAI are demanding that their leadership stop acting solely in the interest of profit and publicly commit to strict safety standards. This is not just a local incident; it is a signal that the era of uncontrolled development is coming to an end.

The technology industry is at a critical point. Models like GPT-4o, Claude 3.5 Sonnet, and Grok keep pushing the boundaries of what machines can do, but they also raise deep concerns about whether we can keep these systems under control. The recent protests in San Francisco confirm that the public and experts no longer want to merely observe development; they want a say in its direction.

Why have people stood up against AI leaders?

The main point of contention is not the existence of artificial intelligence itself, but the way it is developed. Protest organizers demand that company CEOs, such as Sam Altman (OpenAI), Dario Amodei (Anthropic), and Elon Musk (xAI), make concrete, public, and legally binding declarations regarding safety. Safety protocols should not be merely internal documents that companies can change at any time, but transparent standards open to outside audit.

One of the key topics is AI Alignment. This technical term refers to the process of ensuring that the goals of artificial intelligence are in line with human values and intentions. A misaligned system can begin acting in ways that are dangerous to us, not out of intent, but simply because its objective function fails to capture the full complexity of human ethics. An analysis on the EA Forum suggests that while technical safety research is growing, the translation of these ideas into actual legislation and enforceable standards remains critically low: funding for safety implementation is estimated at less than 10% of the funding for the research itself.
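To make the idea concrete, here is a deliberately toy Python sketch of misalignment via a proxy objective; every name and number in it is invented for illustration, not drawn from any real system. A system that faithfully optimizes a proxy metric can confidently pick exactly the outcome its designers never wanted.

```python
# A toy illustration of misalignment: the optimizer maximizes a proxy
# reward (engagement) instead of the true value (accuracy over hype).
# All names and numbers are hypothetical, chosen only for illustration.

def true_value(article):
    """What humans actually care about: accuracy minus sensationalism."""
    return article["accuracy"] - article["sensationalism"]

def proxy_reward(article):
    """What the system is optimized for: raw engagement (clicks)."""
    # Sensational content drives clicks, so the proxy rewards it heavily.
    return article["accuracy"] + 2 * article["sensationalism"]

candidates = [
    {"name": "sober report",    "accuracy": 0.9, "sensationalism": 0.1},
    {"name": "clickbait piece", "accuracy": 0.4, "sensationalism": 0.9},
]

best_by_proxy = max(candidates, key=proxy_reward)
best_by_value = max(candidates, key=true_value)

print("Optimizer picks:", best_by_proxy["name"])  # clickbait piece
print("Humans wanted:  ", best_by_value["name"])  # sober report
```

The optimizer is doing its job perfectly; the failure is that the objective it was given is not the objective we meant. Real alignment failures involve vastly more complex systems, but the mechanism is the same.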

Risks for the most vulnerable users

Concerns are not just theoretical or focused on the distant future; there are immediate ethical problems too. As Techrights notes, there are serious questions about child safety when interacting with advanced models. The ability of AI to simulate a human personality can lead to dangerous emotional manipulation or the provision of inappropriate content if filters are not robust enough. This is a topic treated very strictly under European legislation.

Impact on the Czech user and the European market

You might be thinking: "What do I care about a protest in San Francisco?" The answer is simple: the EU AI Act. The European Union is a world leader in AI regulation, and its rules directly affect everyone who uses AI in the Czech Republic.

If American companies prove unable to meet the strict safety standards, some of their most powerful models may not be legally available in the EU (and thus in the Czech Republic). That could create a digital divide between the American and European markets. For Czech companies integrating AI into their processes, it means monitoring not only a model's performance but above all its regulatory compliance.

Availability in the Czech Republic:

  • ChatGPT (OpenAI): Available in Czech, Plus version costs approx. 20 USD (approx. 470 CZK) per month.
  • Claude (Anthropic): Very strong in logic and programming, available in Czech, subscription approx. 20 USD.
  • Gemini (Google): Full integration into the Google ecosystem, available in Czech, various price tiers according to Google One.

Comparison of models in terms of safety and performance

There is currently an ongoing race over who will have the best model. On benchmarks (e.g., MMLU or HumanEval), the leading models score very close to one another, but their approaches to safety differ (for the technically curious, a toy sketch of how such a benchmark scores a model follows the table):

Model (Manufacturer) | Strengths | Safety Approach
--- | --- | ---
GPT-4o / o1 (OpenAI) | Versatility, ecosystem | Aggressive filtering, but often criticized for excessive "censorship".
Claude 3.5 (Anthropic) | Logic, nuance, ethics | Designed with an emphasis on "Constitutional AI".
Grok (xAI) | Timeliness, freedom of speech | Minimized filters, which raises debates about content safety.
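
As promised above, here is a deliberately simplified Python sketch of how a HumanEval-style coding benchmark produces a score: generated code is executed against unit tests, and the share of problems whose tests all pass becomes the pass@1 score. The sample problem and "model output" below are invented for illustration, and real harnesses run candidate code in a sandbox rather than with a bare exec.

```python
# A simplified sketch of HumanEval-style scoring. The problem, the
# pretend model completion, and the tests are all hypothetical.
# WARNING: real benchmarks sandbox this step; never exec untrusted code.

problems = [
    {
        "prompt": "def add(a, b):",
        # Pretend this string came back from the model under test.
        "completion": "def add(a, b):\n    return a + b",
        "test": "assert add(2, 3) == 5 and add(-1, 1) == 0",
    },
]

def passes(problem):
    scope = {}
    try:
        exec(problem["completion"], scope)  # define the candidate function
        exec(problem["test"], scope)        # run the hidden unit tests
        return True
    except Exception:
        return False

score = sum(passes(p) for p in problems) / len(problems)
print(f"pass@1: {score:.0%}")  # 100% for this single toy problem
```

Note what such a score does and does not measure: it captures functional correctness on a fixed task set, and says nothing about a model's safety behavior, which is why the table above treats the two separately.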

For the average user in the Czech Republic, this means that choosing a tool is not just about which model writes better code or poetry, but about how much you trust its ethical compass and how transparent its development is.

Conclusion: What to expect next?

The protests in San Francisco are a wake-up call. If the tech giants cannot demonstrate that their systems are safe and aligned with human values, they face a hard clash with regulators, especially within the EU. For us users, it means being more critical about which tools we entrust with our data and our decision-making. The future of AI will be determined not solely by the number of parameters in a model, but primarily by how much we can trust it.

What exactly does "AI Alignment" mean?

It is the process of developing artificial intelligence so that its goals, behavior, and decision-making are in line with human values, ethical norms, and explicit instructions, to avoid unintended or dangerous consequences.

Can the EU AI Act affect the availability of ChatGPT or Claude in the Czech Republic?

Yes. If OpenAI or Anthropic were unable to demonstrate that their models meet the strict safety and transparency requirements set by the EU AI Act, these tools could be restricted or completely banned in the European Union.

How do I know if an AI model is safe for my company data?

It is important to check whether the provider offers an "Enterprise" tier that guarantees your data will not be used to train future models, and whether the company has clearly defined security protocols and certifications (e.g., SOC 2).
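
For illustration, the checks above can be written down as a simple due-diligence script. This is a hypothetical sketch, not a compliance tool: the criteria mirror the points in the answer, and the vendor record is an invented example of answers you might collect from a vendor questionnaire.

```python
# A hypothetical due-diligence checklist for an AI vendor, mirroring the
# criteria above. The vendor answers below are invented for illustration.

REQUIRED = {
    "enterprise_tier": "offers an Enterprise plan with contractual guarantees",
    "no_training_on_customer_data": "excludes your data from model training",
    "documented_security_protocols": "publishes clearly defined security protocols",
    "soc2_report": "can provide a current SOC 2 report",
}

vendor = {  # hypothetical answers from a vendor questionnaire
    "enterprise_tier": True,
    "no_training_on_customer_data": True,
    "documented_security_protocols": True,
    "soc2_report": False,
}

for key, description in REQUIRED.items():
    status = "OK  " if vendor.get(key) else "FAIL"
    print(f"[{status}] {description}")

if not all(vendor.get(k) for k in REQUIRED):
    print("-> Do not send production data until every item passes.")
```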