North Korean Hackers Use ChatGPT to Scam Social Media Users; Crypto Clients Are the Biggest Target
A recent report has made a troubling discovery about a tactic employed by Pyongyang-linked hackers. The group is using ChatGPT to defraud users of LinkedIn and other social networks into sharing sensitive information. OpenAI disclosed the findings through its partnership with Microsoft, one of its investors.
Identification and termination of accounts
Last week, OpenAI and Microsoft stopped five government-affiliated groups that intended to exploit artificial intelligence for cybercrime. Microsoft's follow-up investigation helped identify and close down accounts linked to two Chinese groups (Charcoal Typhoon and Salmon Typhoon), one Iranian group (Crimson Sandstorm), one North Korean group (Emerald Sleet, also known as Kimsuky), and one Russian group (Forest Blizzard). These groups were using the AI technology for malicious purposes. In particular, Emerald Sleet (Kimsuky) posed as a legitimate organization to coax unsuspecting individuals into sharing insights on North Korean foreign policy.
Concerning trends in cybersecurity
Although there was no clear evidence of any major cyberattacks by these groups, the research points to a concerning tendency among hackers to explore the potential of AI tools. OpenAI also disclosed that the North Korean group Emerald Sleet was using ChatGPT to identify defense professionals and companies in the Asia-Pacific region, gather information on security vulnerabilities, and generate content for its phishing operations. According to Yonhap, South Korea's state intelligence agency has found signs that North Korea is using generative AI in its hacking and other illicit cyber operations.
According to a senior National Intelligence Service (NIS) official, North Korean hackers are now using generative AI to identify potential targets and obtain the tools they need for their illegal operations. The NIS reported that hacking attempts against South Korea's public sector rose 36% last year, to roughly 1.62 million attempts per day. The agency also believes that North Korea is placing its foreign-stationed IT professionals in jobs at IT companies, where they allegedly insert malicious code into the organizations' software with the aim of stealing cryptocurrency.
Deceptive recruitment profiles on social media
Erin Plante, vice-president of investigations at cybersecurity startup Chainalysis, which specializes in cryptocurrency, told the Financial Times that North Korean hacking gangs have been creating realistic-looking recruitment profiles on social media sites such as LinkedIn. She added that generative AI assists with these schemes from start to finish, including image creation, identity fabrication, and communications, helping hackers build rapport with their targets.
Limited scope of GPT-4 in malicious cybersecurity tasks
OpenAI noted that its findings are consistent with outside evaluations showing that GPT-4 offers only limited capabilities for "malicious cybersecurity tasks" beyond what can already be achieved with publicly available tools that don't rely on AI. Cryptocurrency clients appear to be the group's biggest target: last year, North Korea-affiliated hackers also compromised the networks of American enterprise software provider JumpCloud in order to target its cryptocurrency customers.