
GPT-5.5 Is Here and Your Old Prompts No Longer Apply. OpenAI Has Released a New Prompting Guide

OpenAI released its most capable model to date, GPT-5.5, on April 23, 2026, and simultaneously published a new official prompting guide that radically changes the rules of the game. If you are used to writing long, detailed instructions full of steps and strict rules, it is time to reconsider your strategy. GPT-5.5 works best when you describe the goal to it, not every step on the way to it.

Shorter Prompts, Better Results

According to OpenAI's official documentation, GPT-5.5 differs from its predecessors in one key characteristic: it is more effective at processing tasks when it has freedom in how it reaches the result. While older models often needed an explicit step-by-step guide, GPT-5.5 prefers so-called outcome-first prompts — that is, instructions that define what the ideal result should look like, what limitations apply, and what evidence the model has available.

The guide directly warns against transferring old prompts from previous versions. “Legacy prompts often overspecify the process because older models needed more help to stay on the right track. With GPT-5.5, this can add noise, narrow the solution space, or lead to overly mechanical responses,” writes OpenAI. In practice, this means that popular phrases like “always do this first, then that” can harm the new model more than help it.
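To make the contrast concrete, an outcome-first prompt can be assembled from the three elements the guide names: the ideal result, the constraints, and the available evidence. The helper below is a hypothetical sketch of that structure; the function and field names are illustrative, not taken from OpenAI's guide.

```python
# Hypothetical helper composing an outcome-first prompt from the three
# elements named in the guide: ideal result, constraints, and evidence.
# The structure and field names are illustrative, not OpenAI's.

def outcome_first_prompt(result, constraints, evidence):
    """Describe the goal, not the steps: state what 'done' looks like."""
    parts = [
        f"Ideal result: {result}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Available evidence:",
        *[f"- {e}" for e in evidence],
    ]
    return "\n".join(parts)

prompt = outcome_first_prompt(
    result="A one-paragraph plain-language summary of the attached contract.",
    constraints=["Quote clause numbers for every claim.", "Maximum 120 words."],
    evidence=["contract.pdf (attached)"],
)
```

Note what is absent: no "first do X, then Y" sequencing. The model is told what a correct output looks like and what it may use, and is left to choose the path itself.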

How to Prompt GPT-5.5 Correctly?

OpenAI proposes several specific changes in the approach to writing prompts. Instead of long lists of rules, it recommends a structure based on the goal and expected output:

Define the personality and collaboration style. GPT-5.5 has an efficient, direct, task-focused style by default. For customer assistants or conversational products, it is appropriate to explicitly define the personality (tone, warmth, formality) and collaboration style (when to ask, when to make assumptions, how to respond to uncertainty). However, both blocks should remain brief.

Use preambles to improve perceived speed. With streamed responses, users notice how long it takes for the first token to appear. OpenAI recommends using a short introductory text — one or two sentences that confirm receipt of the request and hint at the first step. This “preamble” improves perceived speed without needing to change the actual course of the task.

Introduce explicit stopping conditions. Instead of telling the model “search until you find,” define when it should stop: “Use the minimum amount of evidence needed for the correct answer, quote it precisely, and stop.”

Give the model room to check its own work. GPT-5.5 supports tools that allow it to verify outputs. For code, you can request running unit tests, lint checks, or smoke tests. For visual artifacts, request checking layout and consistency after rendering.
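Taken together, the four recommendations above could translate into a system prompt along these lines. The wording is a sketch for illustration, not official OpenAI text.

```python
# Illustrative system prompt combining the four recommendations above:
# a brief personality block, a preamble instruction, an explicit stopping
# condition, and room for self-checking. Wording is a sketch, not OpenAI's.

SYSTEM_PROMPT = """\
Personality: concise, warm, semi-formal. Ask at most one clarifying
question; otherwise state your assumptions and proceed.

Before starting a task, send a one-sentence preamble confirming the
request and naming your first step.

Stopping condition: use the minimum evidence needed for a correct
answer, quote it precisely, and stop.

Self-check: for code, run the unit tests and a lint pass before
presenting the result; report any failures instead of hiding them.
"""
```

Each block stays short, in line with the guide's warning that long rule lists can narrow the model's solution space.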

Performance Leap Compared to GPT-5.4

The new prompting recommendation is not just a theoretical exercise — GPT-5.5 shows a significant leap in capabilities. On the benchmark Terminal-Bench 2.0, which tests complex command-line tasks requiring planning and tool coordination, the model achieved 82.7%, while GPT-5.4 finished at 75.1% and Claude Opus 4.7 at 69.4%. On the internal test Expert-SWE for long-term coding tasks with a median human time of 20 hours, GPT-5.5 recorded 73.1% compared to 68.5% for GPT-5.4.

Interestingly, this increase in intelligence does not come at the expense of speed. OpenAI states that GPT-5.5 responds approximately as fast as GPT-5.4 under similar load, but uses significantly fewer tokens to complete the same tasks. In Codex, users can choose Fast mode, which generates tokens 1.5× faster at 2.5× the cost.

On the general knowledge work test GDPval, GPT-5.5 achieved 84.9% (win or draw), surpassing GPT-5.4 (83.0%), Claude Opus 4.7 (80.3%), and even Gemini 3.1 Pro (67.3%). In the area of autonomous computer control according to the benchmark OSWorld-Verified, it recorded 78.7%, while GPT-5.4 reached 75.0%.

Pricing and Availability for Czech Users

For Czech and European developers, information on availability and pricing is key. GPT-5.5 has been available in ChatGPT Plus, Pro, Business, and Enterprise since April 23, 2026. From April 24, 2026, it is also available via API within the Responses API and Chat Completions API. This means that Czech companies and developers can integrate the model into their applications without geographic restriction, optionally using regional processing endpoints for a 10% surcharge.

The API pricing for GPT-5.5 is as follows:

GPT-5.5: $5 per 1 million input tokens, $30 per 1 million output tokens. Cached input costs $0.50 per 1 million tokens. Batch and Flex processing are available at half the standard rate, Priority processing at 2.5× the standard price. Prompts longer than 272 thousand tokens are charged double the input price and 1.5× the output price.

GPT-5.5 Pro: $30 per 1 million input tokens and $180 per 1 million output tokens. This variant is intended for the most demanding tasks with higher precision.

The context window is 1.05 million tokens with a maximum of 128 thousand output tokens. The knowledge cutoff is December 1, 2025, so the model has relatively fresh information.

For ordinary Czech users, the model is available within the ChatGPT Plus subscription for approximately EUR 20–25 per month (depending on the current exchange rate and VAT). For companies and developers, the ongoing implementation of the EU AI Act, which emphasizes transparency and safety for high-performance AI systems, may also be relevant; this is an area in which OpenAI is investing in specific safeguards, including stricter classifiers for cyber risks.

A Shift in Mindset for Developers and Editors

The new prompting guide for GPT-5.5 is not just technical documentation — it is a signal that prompt engineering is shifting from the craft of writing long instructions to defining goals and trusting the model. For Czech developers, this means an opportunity to simplify the prompt codebase, reduce token costs, and achieve better results at the same time. For content creators and editors, it means the ability to delegate more complex analytical and editorial tasks with less need to micromanage every step.

Moreover, GPT-5.5 has significantly improved in working with long context — on the benchmark Graphwalks BFS with 1 million tokens, it achieved 45.4%, while GPT-5.4 finished at a mere 9.4%. This opens the door to processing extensive documents, codebases, or internal knowledge bases without the need to fragment the data.

Do I need to rewrite all my old prompts for GPT-5.5?

It is not necessary immediately, but OpenAI recommends gradually migrating key prompts to the new format. For quick migration, you can use the Codex tool with the command “migrate this project to gpt-5.5.” Start with the most important workflows and monitor whether quality has improved or token consumption has decreased.

What is the difference between GPT-5.5 and GPT-5.5 Pro for an ordinary user?

For ordinary queries and coding, GPT-5.5 is sufficient. The Pro variant is focused on the most complex tasks in the fields of law, data science, scientific research, and advanced analysis, where maximum precision and complexity of response are required. In ChatGPT, Pro is available only in Pro, Business, and Enterprise subscriptions.

Does the change in prompting affect the safety and reliability of responses?

Yes, but in a positive way. Shorter prompts with clearly defined limitations and stopping conditions reduce hallucinations and incorrect inference. However, it is still important to explicitly define which claims must be backed by sources and how the model should behave when it lacks sufficient evidence, rather than letting it invent facts.
