Hackers widely believed to be linked to North Korea appear to have used artificial intelligence to pull off two crypto heists in April that netted nearly $600 million, and the prospect that more could be on the way is rattling the $130 billion decentralized finance sector, according to a new report by Bloomberg.

Blockchain forensics firm TRM Labs concluded that the attackers likely relied on AI to identify targets and design exploits, a leap in sophistication that has alarmed cybersecurity experts. The finding signals a sharp escalation of the threat facing an industry that has already lost billions of dollars to hacks in recent years, the news agency said.

“I highly suspect that North Koreans used AI to engineer both” hacks, Nick Carlsen, a former FBI analyst now investigating North Korean crypto crime at TRM Labs, told Bloomberg. “This is all stuff North Korea never used to do.”

The first April hack drained more than $280 million from derivatives exchange Drift Protocol. Citing Drift’s own post-mortem, Bloomberg said the attackers had spent months posing as a quantitative trading firm to build relationships with Drift contributors before tricking employees into authorizing malicious transactions. The hackers also manufactured a fictitious token and inflated its trading record so Drift’s protocols would treat it as legitimate collateral.

Drift was forced to shut down and plans to relaunch after a stablecoin injection from Tether, while another DeFi project, Carrot, which had exposure to Drift, announced in April that it was closing, according to the report.

The second hack, just over two weeks later, targeted a so-called bridge protocol at Kelp DAO and netted close to $300 million. The attackers laundered most of the stolen funds in a novel way, using them as collateral to borrow on Aave, the biggest DeFi lending protocol. That stoked fears that Aave was holding worthless collateral and prompted depositors to pull about $9 billion from the platform in two days, spreading panic to platforms with no link to the hack, Bloomberg reported.

Aave ultimately required a rescue, the news outlet said. 

Determining whether hackers are deploying AI is not an exact science, with investigators drawing inferences from the sophistication of the attacks, the methods used, and the difficulty of identifying targets. But more than half a dozen cybersecurity researchers told Bloomberg that the surge in heists alone is a telltale sign of AI involvement. The number of DeFi exploits soared to a record in April, nearly doubling from the previous month, according to the news agency.

Decentralized finance—a network of interoperable, blockchain-based protocols that use self-executing smart contracts to move and deploy crypto without intermediaries or human oversight—is particularly exposed. Unlike banks, which are routinely stress-tested by regulators and can reverse suspicious transfers, blockchain transactions are irreversible, and DeFi projects vary widely in cybersecurity investment, Bloomberg said. 

Hanging over the sector, Bloomberg said, is Mythos, an AI model that Anthropic PBC has withheld from broad release because of its cybersecurity risks. There is no evidence the April attackers had access to Mythos, but researchers told Bloomberg it is only a matter of time before criminals obtain more powerful AI tools.

“Before AI, there may have been a limited number of elite hackers,” Niv Yehezkel, head of Security Products Engineering at Chainalysis, said in the report. “Now, almost anyone is just a subscription away from operating like an elite hacker.”