Microsoft says Russian military intelligence hackers are using ChatGPT to improve cyberattacks

by alex

According to a joint investigation by Microsoft and OpenAI, hackers are using large language models (LLMs) to conduct research, write attack scripts, and craft phishing emails.

“Cybercriminal groups are researching and testing various artificial intelligence technologies as they emerge, trying to understand the potential value for their operations and the security controls they may have to bypass,” Microsoft said in a blog post.

The report names cyber groups backed by Russia, North Korea, Iran, and China.

Hackers from Strontium, a group associated with Russian military intelligence, have now been identified as using large language models “to understand satellite communications protocols, radar imaging technologies, and specific technical parameters.”

The group, also known as APT28 or Fancy Bear, has been active during the Russo-Ukrainian War and was also involved in attacks on Hillary Clinton's 2016 presidential campaign. According to Microsoft, the hackers also use LLMs to help with “core scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing to potentially automate or optimize technical operations.”
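
To illustrate what such “core scripting tasks” look like in practice, here is a minimal, benign sketch in Python that combines the categories quoted above (file manipulation, data selection, regular expressions, multiprocessing). The folder, file extension, and search pattern are hypothetical, and the snippet is not attributed to any threat group.

```python
# Illustrative only: a benign example of generic "core scripting tasks"
# (file manipulation, data selection, regular expressions, multiprocessing).
# It counts lines matching a pattern across text files in parallel.
import re
import sys
from multiprocessing import Pool
from pathlib import Path

# Hypothetical selection rule: flag lines mentioning errors or timeouts.
PATTERN = re.compile(r"error|timeout", re.IGNORECASE)

def count_matches(path: Path) -> tuple[str, int]:
    """Read one file and count the lines that match the pattern."""
    with path.open(encoding="utf-8", errors="ignore") as fh:
        return path.name, sum(1 for line in fh if PATTERN.search(line))

if __name__ == "__main__":
    # File manipulation / data selection: pick the .txt files in a folder.
    folder = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    files = sorted(folder.glob("*.txt"))
    # Multiprocessing: scan the files in parallel worker processes.
    with Pool() as pool:
        for name, hits in pool.map(count_matches, files):
            print(f"{name}: {hits} matching lines")
```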

The North Korean hacking group known as Thallium uses LLMs to research publicly disclosed vulnerabilities and target organizations, to assist with basic scripting tasks, and to draft content for phishing campaigns.

An Iranian group known as Curium also uses large language models to create phishing emails and even to write code that evades detection by antivirus software. Chinese government hackers likewise use LLMs for research, scripting, translation, and improving their existing tools.

Although the use of artificial intelligence in cyberattacks appears limited for now, Microsoft warns of its potential use for voice impersonation.

“Voice synthesis is an example of how anyone's voice can be reproduced from a three-second sample,” Microsoft says. “Even something as innocuous as your voicemail greeting can be used.”

Microsoft is currently building Security Copilot, a new artificial intelligence assistant for cybersecurity professionals, designed to help identify breaches and make sense of the vast volume of signals and data that cybersecurity tools generate every day.
