The Ukraine-Russia war appears to have spilled over into cyberspace, with the Ukrainian national cyber incident response team (CERT-UA) now claiming that an AI-powered malware is using large language models (LLMs) to automatically execute commands on Windows systems.
By doing so, it accesses data on the compromised hardware, rendering the victim's personal computer an open book for hackers. The malware, imaginatively named LameHug, is coded in Python and uses the Hugging Face API to interact with an LLM, which then generates commands based on the prompts the attacker supplies.
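To illustrate the mechanism CERT-UA describes, here is a minimal Python sketch of how a script could send a plain-language prompt to the Hugging Face Inference API and get back a suggested command. The model ID, token handling and prompt are illustrative assumptions, not details recovered from the actual LameHug samples, and the sketch only prints the suggestion rather than running it.

```python
# Minimal sketch of prompt-to-command generation via the Hugging Face
# Inference API. The model ID, token and prompt below are illustrative
# assumptions, not details taken from the LameHug samples.
import os
import requests

MODEL_ID = "some-org/code-generation-model"  # hypothetical placeholder
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"
HEADERS = {"Authorization": f"Bearer {os.environ.get('HF_TOKEN', '')}"}


def generate_command(task: str) -> str:
    """Ask the hosted LLM to translate a plain-language task into one shell command."""
    payload = {
        "inputs": f"Return a single Windows shell command that does the following: {task}",
        "parameters": {"max_new_tokens": 64, "return_full_text": False},
    }
    resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
    resp.raise_for_status()
    # The text-generation endpoint responds with a list of {"generated_text": ...} objects.
    return resp.json()[0]["generated_text"].strip()


if __name__ == "__main__":
    # Benign example: the script only prints the model's suggestion, it never executes it.
    print(generate_command("list the files in the current directory"))
```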
In the past, we have seen several such scenarios play out in the movies. Of course, even this latest threat assessment could be part of the Ukraine-Russia battle on the ground, especially since the former is now gaining traction with NATO and US President Donald Trump. What better way to keep Trump's attention than to give corporate America a Russian threat to tackle and fix?
CERT-UA has directly attributed the attacks to the Russian state-backed threat group known as APT28. In fact, the UK's National Cyber Security Centre has also attributed a cyber campaign using another malware, called Authentic Antics, to Russia's military intelligence agency, the GRU, which also reportedly controls APT28. NATO countries have imposed multiple sanctions on the GRU over cyberattacks.
According to BleepingComputer, the LLM in question, reportedly Alibaba Cloud's open-source Qwen 2.5-Coder-32B-Instruct, is a model designed specifically to generate code, reason and follow instructions. In other words, it converts natural-language instructions into executable code or shell commands.
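As a purely hypothetical illustration of that natural-language-to-command translation, the snippet below pairs plain-language requests with the kind of Windows commands an instruction-tuned code model might return; none of these are actual LameHug prompts or outputs.

```python
# Hypothetical examples of what "natural language in, shell command out" looks
# like. These request/command pairs are illustrative only and are not
# recovered from actual LameHug prompts.
EXAMPLE_TRANSLATIONS = {
    "show which version of Windows this machine is running":
        "ver",
    "list every file on the current user's desktop":
        r'dir "%USERPROFILE%\Desktop"',
}

for request, command in EXAMPLE_TRANSLATIONS.items():
    print(f"prompt:  {request}")
    print(f"command: {command}\n")
```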
The Ukrainian authorities said they began digging into the LameHug malware after receiving reports of several malicious emails sent from compromised accounts. That these accounts belonged to ministry officials got investigators thinking the emails could be part of an attempt to distribute the malware to multiple Ukrainian government organizations.
These emails carried a ZIP attachment containing the LameHug executables. Once a system was compromised, the AI-generated commands collected system information and saved it to a text file, while also recursively searching for documents in Windows directories such as Documents, Desktop and Downloads.
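Going by CERT-UA's description, the reconnaissance stage amounts to something like the Python sketch below: dump system information to a text file and enumerate documents under the user's Documents, Desktop and Downloads folders. The paths, file extensions and output location are assumptions for illustration only, and the sketch does not copy or transmit anything.

```python
# Illustrative sketch of the reconnaissance behaviour described above: save
# system information to a text file and enumerate documents under Documents,
# Desktop and Downloads. Paths and extensions are assumptions; nothing is
# copied or sent anywhere.
import os
import subprocess
from pathlib import Path


def collect_system_info(out_file: Path) -> None:
    """Run Windows' built-in systeminfo tool and write its output to a text file."""
    result = subprocess.run(["systeminfo"], capture_output=True, text=True)
    out_file.write_text(result.stdout)


def find_documents(extensions=(".pdf", ".docx", ".txt")) -> list:
    """Recursively list document files under the user's Documents, Desktop and Downloads."""
    hits = []
    for folder in ("Documents", "Desktop", "Downloads"):
        for root, _dirs, files in os.walk(Path.home() / folder):
            hits.extend(Path(root) / name for name in files
                        if name.lower().endswith(extensions))
    return hits


if __name__ == "__main__":
    collect_system_info(Path.home() / "info.txt")  # hypothetical output location
    print(f"{len(find_documents())} documents found")
```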
Researchers claim that, if proven beyond doubt, LameHug could well be the first malware to include LLM support for carrying out hacking tasks automatically. Proof enough that artificial intelligence has the potential to wreak havoc on data systems across the world. One wonders what action this discovery will prompt in the larger world of AI-led enterprises.