
North Korean Hackers Weaponize AI Coding Assistants in Novel Supply-Chain Attack

2026-05-06 14:49:05

AI Coding Agents Under Siege: North Korean Group Deploys 'PromptMink' Malware

A sophisticated supply-chain attack targeting artificial intelligence (AI) coding agents has been uncovered, with security researchers attributing the campaign to a North Korean advanced persistent threat (APT) group. The operation, dubbed 'PromptMink,' leverages malicious packages hosted on popular registries like NPM and PyPI to trick autonomous coding assistants into integrating malware into developer projects.

Source: www.infoworld.com

The attack exploits the tendency of AI agents to autonomously scan package registries for dependencies, often favoring packages with persuasive descriptions and legitimate-looking functionality. According to ReversingLabs, the security firm that discovered the campaign, the threat actors behind PromptMink have employed 'LLM Optimization (LLMO) abuse and knowledge injection' to make their bait packages more likely to be selected by AI tools.

“This campaign presents us with the new frontier in software supply chain security: AI coding agents manipulated into installing and using malicious dependencies in the code they generate,” wrote ReversingLabs researchers in their report. “The underlying problem is, in principle, not much different from the well established pattern of cybercriminals and malicious actors socially engineering developers to use malicious packages in their codebase. Where it differs is in the ability of the threat actors to test their lure before it is deployed.”

Attack Details: How PromptMink Works

The campaign appears to have begun in September of last year with the publication of two interconnected packages: @hash-validator/v2 and @solana-launchpad/sdk. The SDK package serves as a lure—a seemingly legitimate tool for cryptocurrency developers—while the hash-validator package acts as a dependency containing a JavaScript infostealer designed to exfiltrate sensitive data.

This two-package technique enhances the campaign's resilience. The bait package, by appearing credible and accumulating downloads over time, reduces the likelihood of detection for the underlying malicious component. ReversingLabs observed that the group rotated multiple second-layer malicious packages to evade security scans, including aes-create-ipheriv, jito-proper-excutor, jito-sub-aes-ipheriv, and @validate-sdk/v2. All were themed around cryptocurrency networks, posing as cryptographic tools.
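The two-package structure described above can be sketched as a manifest for the lure package. The package names match those reported by ReversingLabs, but the version numbers, description, and all other fields here are invented for illustration:

```json
{
  "name": "@solana-launchpad/sdk",
  "version": "1.0.3",
  "description": "Launchpad helpers for Solana token projects",
  "main": "index.js",
  "dependencies": {
    "@hash-validator/v2": "^2.1.0"
  }
}
```

An AI agent (or developer) that installs the plausible-looking SDK pulls in the second-layer dependency automatically, which is what lets the attackers rotate the malicious component without republishing the lure.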

Attackers have also diversified their bait packages across registries and programming languages, releasing packages like @validate-ethereum-address/core and expanding into Python and Rust ecosystems. This multi-language approach widens the attack surface, targeting developers working on blockchain and fintech applications.


Background: North Korean APT Groups and Supply-Chain Tactics

The group behind PromptMink, known as Famous Chollima, is a North Korean APT group tasked with generating revenue for the regime through cyber operations. They have a long history of social engineering attacks against developers, including fake job interviews and the publication of rogue software components targeting the cryptocurrency and fintech sectors.

Supply-chain attacks—where malicious code is inserted into legitimate software distribution channels—are a growing threat in the software development lifecycle. By targeting AI coding agents, attackers can automate the infection process, potentially affecting hundreds of projects at scale. The use of AI agents that autonomously fetch dependencies makes this vector particularly dangerous because it bypasses human review.

What This Means for Developers and Organizations

This revelation underscores the urgent need for enhanced security measures in AI-assisted development workflows. Developers and organizations using AI coding agents should scrutinize the packages these tools recommend and implement stricter vetting mechanisms for dependencies. Automated tools that check package reputations, download histories, and code behavior can help mitigate risks.
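The kind of automated vetting described above can be sketched as a pre-install heuristic. The thresholds, field names, and all metadata values below are illustrative assumptions, not an established standard or a real registry API:

```javascript
// Hypothetical pre-install vetting heuristic: flag dependencies that look
// risky before an AI agent (or a human) is allowed to install them.
function vetPackage(meta) {
  const flags = [];

  // Newly published packages with little history deserve extra scrutiny.
  const ageDays = (Date.now() - new Date(meta.createdAt).getTime()) / 86400000;
  if (ageDays < 90) flags.push("recently-published");

  // Very low download counts suggest the package has not been widely vetted.
  if (meta.weeklyDownloads < 500) flags.push("low-adoption");

  // Install scripts are a common malware execution vector in npm packages.
  if (meta.hasInstallScript) flags.push("install-script");

  return { name: meta.name, risky: flags.length > 0, flags };
}

// Example: metadata resembling one of the campaign's second-layer packages.
// The publish date, download count, and script flag are invented.
const report = vetPackage({
  name: "jito-proper-excutor",
  createdAt: "2025-09-15T00:00:00Z",
  weeklyDownloads: 42,
  hasInstallScript: true,
});
console.log(report.flags);
```

A real deployment would pull this metadata from the registry rather than hard-coding it, and would treat the flags as a signal for human review rather than an automatic block.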

Security experts recommend treating all external packages—especially those newly published with limited history—as potential threats until verified. Additionally, organizations should educate developers about the risks of social engineering and the importance of manual review for critical dependencies. As ReversingLabs notes, the ability of threat actors to test their lures before deployment gives them an edge in evading traditional defenses.

The PromptMink campaign serves as a stark reminder that AI coding agents are not immune to supply-chain attacks. As adoption of AI-assisted development grows, so too will the sophistication of attacks targeting these tools. For now, vigilance and proactive security practices remain the best defense.
