Microsoft Staffer Warns Regulators About Harmful AI Content

  • Shane Jones says he expressed worry to employer several times
  • Harmful content included ‘political bias, underaged drinking’

A Microsoft Corp. software engineer sent letters to the company's board, lawmakers and the Federal Trade Commission warning that the tech giant is not doing enough to prevent its AI image-generation tool, Copilot Designer, from creating abusive and violent content.

Shane Jones said he discovered a security vulnerability in OpenAI’s latest DALL-E image generator model that allowed him to bypass guardrails that prevent the tool from creating harmful images. The DALL-E model is embedded in many of Microsoft’s AI tools, including Copilot Designer.