AI Doesn't Care About Your 'Please' And 'Thank You'
Research confirms that adding polite phrases to AI prompts delivers zero performance benefit while burning real money in computational costs. As European enterprises scale up AI deployments, understanding what actually works in prompt engineering has never been more commercially important.
The idea that polite language somehow optimises AI chatbot performance has reached epidemic proportions across European offices, classrooms, and households. From users addressing ChatGPT as though it were a sentient colleague to elaborate flattery aimed at coaxing better responses, the internet is saturated with theatrical prompt engineering advice that simply does not hold up under scrutiny.
Recent research has systematically debunked the notion that positive framing enhances AI accuracy. Experimenters labelled AI models as "smart," urged careful consideration, and even concluded prompts with cheerful phrases such as "This will be fun!" None of these tactics consistently improved performance. They wasted computational resources instead.
One curious exception did emerge: instructing an AI to pretend it was commanding the Starship Enterprise actually boosted its basic mathematical abilities. While clearly an anomaly, it highlights the unpredictable and distinctly non-human logic governing AI responses, and why anecdotal prompt folklore should be treated with scepticism.
The Hidden Cost of Digital Courtesy
In early 2025, a user on X posed a pointed question: how much money has OpenAI lost in electricity costs from people saying "please" and "thank you" to its models? Sam Altman, OpenAI's CEO, offered a cryptic response: "Tens of millions of dollars well spent. You never know."
Whether that figure is precise or anecdotal, it underscores a genuine concern. Large language models function by dissecting input into tokens that are statistically analysed to generate responses. Every word, from pleasantries to punctuation, influences computational load and ultimately translates into real-world costs for providers and, indirectly, for the European businesses paying for API access.
For enterprises running thousands of daily queries through models from OpenAI, Google, or Anthropic, even marginal token inflation across a workforce compounds into meaningful expenditure. This is not a trivial point for procurement teams evaluating AI budgets.
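To see how marginal token inflation compounds, here is a minimal back-of-the-envelope sketch. The tokens-per-word ratio and the per-token price below are illustrative assumptions, not any provider's actual rates.

```python
# Rough illustration: estimated annual cost of polite filler words in prompts.
# Assumptions (not real provider figures): ~1.3 tokens per English word,
# and a hypothetical input price of $0.000005 per token.

TOKENS_PER_WORD = 1.3       # assumed average for English text
PRICE_PER_TOKEN = 0.000005  # hypothetical input-token price in USD

def filler_cost(filler_words: int, queries_per_day: int, days: int = 365) -> float:
    """Estimated cost of extra words (e.g. 'please', 'thank you') per query,
    scaled across a workforce's daily query volume for a given period."""
    extra_tokens = filler_words * TOKENS_PER_WORD
    return extra_tokens * PRICE_PER_TOKEN * queries_per_day * days

# A workforce sending 50,000 queries a day, each padded with 5 polite words:
annual = filler_cost(filler_words=5, queries_per_day=50_000)
print(f"Estimated annual filler cost: ${annual:,.2f}")
```

Under these assumed figures the waste is modest for one organisation, but the same arithmetic applied across millions of users explains how a provider's aggregate bill can reach the scale Altman alluded to.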
What European Researchers and Regulators Are Saying
The debate over prompt behaviour is not merely academic. Researchers at ETH Zurich, one of Europe's leading AI research institutions, have been examining how users interact with language models and what habits actually drive output quality. Their broader findings align with the emerging consensus: structural clarity in a prompt consistently outperforms emotional or social cues.
Meanwhile, the EU AI Office, established under the AI Act and based in Brussels, has flagged user literacy as a core concern in its guidance on high-risk AI deployments. Ensuring that users understand what AI systems actually respond to, rather than perpetuating folk wisdom about machine psychology, is increasingly viewed as part of responsible deployment practice under the Act's transparency obligations.
The politeness phenomenon also varies across languages and cultures. A 2024 study suggested that while English-language models showed minimal response to polite versus blunt commands, Japanese-speaking chatbots actually performed worse when users were overly courteous. For European enterprises operating in multilingual environments, across French, German, Polish, Spanish, and dozens of other languages, these inconsistencies matter when standardising internal AI usage policies.
Evidence-Based Strategies for Better AI Communication
Sophisticated AI models are considerably less susceptible to superficial social cues than early folklore suggested. The core insight is straightforward: AI tools are mimics, not sentient beings. They simulate human behaviour without genuine emotions or understanding. Treating them otherwise is a category error with a measurable cost.
Experts now broadly agree on the following strategies for more effective AI interactions:
Ask for multiple options rather than singular answers to encourage critical evaluation of outputs.
Provide concrete examples instead of generic instructions when seeking specific formats or styles.
Use iterative, interview-style prompts where the AI asks clarifying questions before generating a final response.
Maintain neutral framing to avoid biasing responses with leading or emotionally charged language.
Focus on clear task definition rather than social or emotional manipulation of the model.
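The strategies above can be applied by templating prompts around explicit structure rather than conversational padding. The sketch below is illustrative only; the section labels and field names are assumptions, not any vendor's required schema.

```python
# Minimal sketch of a structured prompt builder: the task, constraints, and
# examples are stated explicitly instead of relying on social framing.

def build_prompt(task: str, constraints: list[str], examples: list[str]) -> str:
    parts = [f"Task: {task}"]
    if constraints:
        parts.append("Constraints:")
        parts.extend(f"- {c}" for c in constraints)
    if examples:
        parts.append("Examples of the desired output:")
        parts.extend(f"- {e}" for e in examples)
    # Ask for multiple options, per the guidance above, to aid evaluation.
    parts.append("Provide three distinct options.")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarise the attached quarterly report for a procurement audience.",
    constraints=["Maximum 150 words", "Neutral tone, no leading language"],
    examples=["Q3 spend rose 4% on cloud services; licence costs were flat."],
)
print(prompt)
```

Every token in the resulting prompt carries task-relevant information, which is precisely the property the research suggests drives output quality.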
These principles are reinforced by Mistral AI, the Paris-based large language model developer and one of Europe's most prominent AI labs. Mistral's own documentation and developer guidance consistently emphasise structured, explicit prompting over conversational niceties, a stance that reflects both performance data and the company's engineering-first culture.
Prompt Technique Comparison
Polite requests: No measurable improvement; increased token usage; acceptable only as personal preference.
Multiple examples: Significant improvement; higher initial cost; best for complex creative tasks.
Iterative questioning: Highly effective; requires multiple exchanges; ideal for planning and analysis.
Neutral framing: Reduces bias; minimal overhead; recommended for decision-making tasks.
The Psychology Behind Persistent Politeness
Despite the evidence, human courtesy towards artificial intelligence endures. Research indicates that roughly 70 per cent of users maintain politeness because they consider it proper behaviour, while others view it as practice for human interactions or as a precautionary measure should AI systems become more autonomous in future.
This behaviour reflects deeper psychological factors rather than technical reasoning. We are, in effect, training ourselves to be polite to machines, which may well have downstream implications for how we interact with other people as AI mediates more of daily professional and social life.
For European policymakers and AI ethics boards, this is not a trivial concern. The AI Act's provisions on human oversight and transparency are partly motivated by exactly this kind of creeping anthropomorphisation: the risk that users begin to attribute agency, intent, or moral status to systems that have none, and adjust their behaviour accordingly in ways that undermine critical judgement.
AI Terms in This Article
tokens: Small chunks of text (words or word fragments) that AI models process.
prompt engineering: Crafting effective instructions to get better results from AI tools.
API: Application Programming Interface, a way for software to talk to other software.
bias: When an AI system produces unfair or skewed results, often reflecting prejudices in training data.