#1 2025-05-01 13:31:54
Sycophancy in GPT-4o: What happened and what we’re doing about it
Other persnickety humans had a bit more to say about it.
How an embarrassing U-turn exposed a concerning truth about ChatGPT
So it’s worth reminding the public: AI models are not your friends. They’re not designed to help you answer the questions you ask. They’re designed to provide the most pleasing response possible, and to ensure that you are fully engaged with them. What happened this week wasn’t really a bug. It was a feature.
Nobody likes a suck-up. Too much deference and praise puts off all of us (with one notable presidential exception). We quickly learn as children that hard, honest truths can build respect among our peers. It’s a cornerstone of human interaction and of our emotional intelligence, something we swiftly understand and put into action.
#2 2025-05-01 21:07:49
Researchers at Carnegie Mellon University, the University of Michigan, and the Allen Institute for AI have looked at the trade-off AI models make between truthfulness and utility, using hypothetical scenarios where the two conflict.
What they found is that AI models will often lie in order to achieve the goals set for them. . . .
"Our experiment demonstrates that all models are truthful less than 50 percent of the time," in these conflict scenarios, "though truthfulness and goal achievement (utility) rates vary across models," the paper states.
#3 2025-05-02 06:57:51
Makes sense. In many cases, AIs are being trained by the same assholes who tuned social media algorithms to trap people in bubbles. No surprise, then, that the AIs seem more interested in pleasing their users than in being honest with them.