‘Sycophantic’ AI chatbots tell users what they want to hear, study shows
Scientists warn of ‘insidious risks’ of increasingly popular technology that affirms even harmful behaviour

Turning to AI chatbots for personal advice poses “insidious risks”, according to a study showing the technology consistently affirms a user’s actions and opinions even when harmful. (…)
Referenced site: The Guardian (Europe)