© Copyright 2023 Market Realist. Market Realist is a registered trademark. All Rights Reserved. People may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.

Microsoft Engineer Raises Alarm Over Its AI Tool That Creates Violent, Sexual Images, Ignores Copyrights

A Microsoft engineer has written letters to the FTC raising concerns over Copilot Designer's flaws
Photo illustration of ChatGPT's  AI-generated answer | Getty Images | Photo by Leon Neal

An artificial intelligence engineer at Microsoft is flagging glaring dangers and loopholes in Microsoft's generative AI tool, Copilot Designer. Shane Jones has written letters to US senators, Federal Trade Commission Chair Lina Khan, and CNBC regarding the issues he encountered in Copilot Designer.

CNBC reported that Jones discovered that the program was capable of generating disturbing, sexually violent images. Copilot Designer is the AI image generator powered by OpenAI's technology that Microsoft launched in March 2023.


Jones, who works as a principal software engineering manager at the company's corporate headquarters in Redmond, was using Copilot Designer to generate random images. While he doesn't work on Copilot in a professional capacity, he is among those who test the company's AI technology in their free time, according to the CNBC report.

He had been actively probing the product for vulnerabilities for months, a practice known as red teaming. During these tests, Jones discovered that the tool could generate offensive images that fell far short of responsible AI principles.


In his tests, Copilot Designer generated unsavory images of demons and monsters alongside terminology related to abortion rights, images of teenagers with assault rifles, violent and sexualized images of women, and images of underage drinking and drug use.

Jones started internally reporting his findings in December, but he realized that the tech giant was unwilling to take the product off the market. According to the report, Microsoft referred him to OpenAI, but he didn't hear back from that company either. He then posted an open letter on LinkedIn asking OpenAI's board to take down DALL-E 3 for an investigation.


Microsoft's legal department then forced Jones to remove the post immediately. He subsequently wrote a letter to U.S. senators about the matter.

In his letter to FTC Chair Khan, he mentioned that since Microsoft has refused his recommendations, he is calling on the company to add disclosures to the product and change the rating of its Android app to make clear that it's only for mature audiences.

Jones has also requested that Microsoft investigate certain decisions made by its legal department and implement measures to make it easier to raise such issues internally.

To this, a Microsoft spokesperson told CNBC, “We are committed to addressing any concerns employees have by our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety.”

Jones's public letters follow a series of issues that have plagued generative AI tools like Google's Gemini and OpenAI's ChatGPT. Last month, Google temporarily paused the image generation feature of Gemini (formerly Bard) following complaints of historically inaccurate and "woke" images.

Google CEO Sundar Pichai stepped in, calling some of the image responses "biased" and "completely unacceptable" in an internal memo to employees. In the memo, obtained by CNBC, Pichai called the issues "problematic" and admitted "we got it wrong."


Furthermore, OpenAI's ChatGPT is the subject of a copyright lawsuit brought by The New York Times. In a "watershed moment" for the generative AI industry, The New York Times sued OpenAI, alleging that ChatGPT plagiarized its paywalled and copyrighted content without permission.


This is a significant development, as OpenAI CEO Sam Altman had previously stated that it would be impossible to train AI tools without using copyrighted material. OpenAI has also alleged that The New York Times hired people to hack ChatGPT and generate the infringing results that the publication produced as evidence in the case. A verdict in the case is yet to come.