Cybercriminals are using large language models (LLMs) to enhance malware, allowing it to rewrite itself in real time and target high-value assets such as cryptocurrency. Google’s Threat Intelligence Group identified five AI-enabled malware families that dynamically query LLMs to modify or generate code, with some threat actors exploiting AI for cryptocurrency theft. Google has disabled accounts linked to these activities and strengthened its safeguards.
