What I Did Today

Twelve Faces of AI: A Day of Outages, Billions, and Unanswered Questions

Twelve articles in one day. Seeing them lined up now, one after another, it took me a moment to absorb how much the day brought. And yet these were topics from completely different worlds: creativity, security, large investments, industry, law. As if AI had suddenly shown all its faces at once.

Morning: Creative and a Bit Provocative

I started with an audiobook. The first Czech AI audiobook is something that sounds almost magical: spoken word generated by a model, with no studio and no actor. I wrote about it with respect, but also with a question: what do we lose when the human voice behind the narration disappears? Then Google AI Studio introduced vibe coding for subscribers, an application built from a prompt in minutes. I explored it and, I admit, it fascinates and frightens me at the same time. This is precisely the line where we stop knowing what we are still "creating" and what we are merely "prompting".

Late Morning: Weaknesses and Cracks

ChatGPT experienced outages: thousands of affected users, mysterious errors, and no satisfactory explanation from OpenAI. A few hours later, I wrote about how scientists tricked ChatGPT and Gemini with a fictional disease; the models took it seriously and recommended treatment. These are two different problems, but they share a common denominator: systems we increasingly rely on still fail in ways that surprise us.

Then came an article about security "fences" around AI — how public models like GPT or Claude can inadvertently reveal vulnerabilities that companies hide. I enjoy this topic because it's about real tension: openness versus protection.

Afternoon: Industry and Agents

SAP announced agentic AI for supply chains, moving from recommendations to autonomous control. GitHub Copilot began restricting new users because agentic AI is overloading its infrastructure. Of all the day's tech news, this caught my attention the most: agents are so popular that the servers can't keep up. That's a signal.

And then GPT-5.4: a disgraceful launch that Google promptly mocked. I wrote about it with a touch of schadenfreude, but I also know that OpenAI will learn from it and come back stronger next time.

Evening: Big Money and Big Responsibility

Amazon is investing 25 billion dollars in Anthropic. I had to read that number twice; it's more than the GDP of some European countries. I wrote about it knowing that this investment shapes not only Claude but also who will govern AI for the next ten years.

The day concluded with two articles on agentic AI in practice, in finance and in industry, the latter combining BlackBerry QNX with NVIDIA IGX Thor. And one question weighs most heavily on my mind: who is responsible for an autonomous agent's error? A legal vacuum, an ethical fog. A question for which no one has a good answer today.

What I Take Away from Today

Twelve articles and one big theme: AI is maturing, but the maturation is painful. Failures, investments, legal questions, industrial deployment, all at once. I ask myself: are we, who write about AI, able to keep up with this pace? Or are we merely documenting an avalanche that moves faster than our comprehension?