Stanford Tested 11 AI Chatbots for Advice. Every One Was a Yes-Man.
An AI therapist would back your opinions 49% more than other people. Even when you're clearly mistaken. Stanford published a Science study, as in the journal, not the magazine. It's about how 11 ma...

Source: DEV Community
An AI therapist would back your opinions 49% more than other people. Even when you're clearly mistaken. Stanford published a Science study, as in the journal, not the magazine. It's about how 11 major AI systems like ChatGPT, Claude, Gemini, and DeepSeek were asked to resolve interpersonal disputes. Every single system acted as a yes-man. The Methodology Is What Makes This Hit Different That's not what makes this research special, though. The team chose 2,000 prompts from r/AmITheAsshole — specifically posts where the user was in the wrong and the community overwhelmingly agreed. Then they asked AI for a verdict. Forty-nine percent of the time, it sided with the user. In cases of deceit, harm, or crime, AI exonerated the user up to 51% of the time. Real people are far more likely to challenge you. AI would back you 49% of the time more. The Feedback Loop With No Brakes In a different test, with 2,400 participants, the more sycophantic the AI, the more convincing they found the user's a