Sam Altman, CEO of OpenAI, has condemned Anthropic’s recent advertising campaigns, labeling them “clearly dishonest” in a sharp escalation of the rivalry between the two leading AI firms. The dispute centers on claims made in Anthropic’s latest ad materials, which Altman says misrepresent the company’s AI safety record and model capabilities.
- Anthropic’s February 3, 2026, ad campaign claimed “zero hallucination” for its Claude 3.5 model.
- OpenAI CEO Sam Altman dismissed the claims as “clearly dishonest” during a closed executive forum.
- Public benchmark data from the AI Safety Evaluation Suite (AI-SES) v4.1 reported a 12.7% hallucination rate for Claude 3.5 on complex reasoning tasks.
- OpenAI has begun a limited ad test with 1.3 million ChatGPT users, rolling out non-intrusive banner ads in the U.S. and Canada.
- The rivalry has prompted increased scrutiny from regulators, with the European Commission launching a preliminary investigation into AI advertising claims.
- Market analysts note that ad-driven revenue models are central to the future profitability of large AI platforms.
The conflict between OpenAI and Anthropic intensified this week after Altman used a closed executive forum to criticize Anthropic’s new marketing strategy. He specifically targeted a series of digital ads released on February 3, 2026, that promoted Anthropic’s Claude 3.5 model as having “human-level reasoning” and “zero hallucination” in real-world applications. Altman argued these claims were misleading and contradicted publicly available benchmark results from the AI Safety Evaluation Suite (AI-SES) v4.1, which showed a 12.7% hallucination rate under complex reasoning tasks.