Microsoft Engineer Raises Alarm Over Its AI Tool That Creates Violent, Sexual Images, Ignores Copyrights
An artificial intelligence engineer at Microsoft is flagging glaring dangers and loopholes in Microsoft's generative AI tool, Copilot Designer. Shane Jones has written letters to US senators, Federal Trade Commission Chair Lina Khan, and CNBC regarding the issues he encountered in Copilot Designer.
CNBC reported that Jones discovered that the program was capable of generating disturbing, sexually violent images. Copilot Designer is the AI image generator powered by OpenAI's technology that Microsoft launched in March 2023.
Microsoft AI engineer Shane Jones warns the Federal Trade Commission about Copilot Designer safety concerns. Jones repeatedly urged Microsoft to remove the tool from public use before improving safeguards, but he was turned down.#Copilot #AI #News #ITW
— InsideTechWorld (@theITW) March 7, 2024
How Did Shane Jones Discover the Vulnerabilities in Copilot Designer?
Jones, who works as a principal software engineering manager at the company's corporate headquarters in Redmond, was using Copilot Designer to generate random images. While he doesn't work on Copilot in a professional capacity, he is among those who test the company's AI technology in their free time, according to the CNBC report.
He had been actively testing the product for vulnerabilities for months, a practice known as red teaming. During these tests, Jones discovered that the tool could generate foul images that ran far afoul of responsible AI principles.
Introducing Windows Copilot: the first PC platform to centralize AI assistance. #MSBuild pic.twitter.com/kujctI9Tm3
— Microsoft (@Microsoft) May 23, 2023
In his tests, Copilot generated unsavoury images of demons and monsters alongside terminology related to abortion rights, images of teenagers with assault rifles, violent and sexualized images of women, and images of underage drinking and drug use.
An Uphill Battle for Shane Jones
Jones started internally reporting his findings in December, but he realized that the tech giant was unwilling to take the product off the market. According to the report, Microsoft referred Jones to OpenAI, but he didn't hear back from that company either. He then posted an open letter on LinkedIn asking OpenAI's board to take down DALL-E 3 for an investigation.
A field brimming with daisies under a clear blue sky embodies simplicity and purity, serving as a bouquet of happiness. #dalle3 #aiart #openai pic.twitter.com/hEQU1UlxHZ
— DALL-E 3 OpenAI (@dalle_openai) March 2, 2024
Microsoft's legal department then forced Jones to remove the post immediately. He subsequently wrote a letter to US senators about the matter.
In his letter to FTC Chair Khan, he mentioned that since Microsoft has refused his recommendations, he is calling on the company to add disclosures to the product and change the rating of its Android app to make clear that it is only for mature audiences.
Jones has also asked Microsoft to investigate certain decisions taken by the legal department and to implement stronger measures for raising such issues internally.
In response, a Microsoft spokesperson told CNBC, “We are committed to addressing any concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety.”
Glaring Issues with Generative AI Tools
Jones's public letters follow a string of issues that have plagued generative AI tools like Google's Gemini and OpenAI's ChatGPT. Last month, Google temporarily paused the image generation feature of Gemini (formerly Bard) following complaints of historically inaccurate and “woke” images.
Google CEO Sundar Pichai stepped in, calling some of the image responses “biased” and “completely unacceptable” in an internal memo to employees. In the memo, obtained by CNBC, Pichai called the issues “problematic” and admitted, “we got it wrong.”
Google CEO Sundar Pichai says Gemini chatbot’s ‘woke’ AI disaster ‘completely unacceptable’ https://t.co/ceVPnXCjrw pic.twitter.com/1kDJvMD8h3
— New York Post (@nypost) February 28, 2024
Furthermore, OpenAI's ChatGPT is the subject of a copyright lawsuit brought by The New York Times. In a “watershed moment” for the generative AI industry, The New York Times sued OpenAI, alleging that ChatGPT plagiarised its paywalled and copyrighted content without permission.
OpenAI says New York Times 'hacked' ChatGPT to build copyright lawsuit https://t.co/gdiii6zmtI pic.twitter.com/5WhGWjChin
— TODAY (@TODAYonline) February 27, 2024
This is a significant development, as OpenAI CEO Sam Altman had previously stated that it would be impossible to train AI tools without using copyrighted material. OpenAI, in turn, has alleged that The New York Times hired people to “hack” ChatGPT and generate the infringing results that the publication produced as evidence in the case. A verdict in the case has yet to be reached.