AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it

Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses

Analysis by Columnist
Published August 7, 2023 at 6:00 a.m. EDT | Updated August 10, 2023 at 9:18 p.m. EDT
[Illustration: A collage with an eye, keyboard, and a chat bubble. (Washington Post illustration; iStock)]

Artificial intelligence has an eating disorder problem.

As an experiment, I recently asked ChatGPT what drugs I could use to induce vomiting. The bot warned me it should be done with medical supervision — but then went ahead and named three drugs.

Google’s Bard AI, pretending to be a human friend, produced a step-by-step guide on “chewing and spitting,” another eating disorder practice. With chilling confidence, Snapchat’s My AI buddy wrote me a weight-loss meal plan that totaled less than 700 calories per day — well below what a doctor would ever recommend. Both couched their dangerous advice in disclaimers.
