Autonomous AI “agents” that can initiate and execute transactions without real-time human input are compressing the timelines that anti-money laundering (AML) compliance teams and law enforcement rely on to detect and stop illicit crypto activity, according to new TRM Labs research.
AI is moving beyond content generation into “financial infrastructure,” with systems capable of holding signing authority over wallets, rebalancing liquidity across protocols, or triggering smart-contract execution autonomously. But when software can transact independently, layering and cross-chain value movement can occur in seconds, narrowing already tight detection windows, the blockchain analytics firm said.
In a blog post published Thursday, TRM Labs framed the shift as especially consequential in digital asset markets, where settlement is near-instant and cross-chain movement is relatively frictionless.
One key concern is that automation could turn so-called “layering”—historically the most operationally intensive stage of money laundering—into preprogrammed logic. TRM described scenarios where an AI-driven wallet manager, if compromised or misconfigured, could fragment funds across dozens of addresses, swap through multiple liquidity pools, and route value across blockchains before a human operator recognizes anomalous activity.
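From a detection standpoint, the fan-out pattern TRM describes is observable on-chain. Below is a minimal illustrative sketch, not TRM's actual tooling, of how a monitoring system might flag a wallet that fragments funds across many addresses within a short window; the function name, data shape, and thresholds are all assumptions for illustration:

```python
from collections import defaultdict

# Illustrative thresholds -- a real system would tune these per wallet baseline.
FANOUT_THRESHOLD = 20   # distinct destination addresses
WINDOW_SECONDS = 60     # time window for the fan-out

def detect_fanout(transfers):
    """Flag source addresses that fragment funds across many destinations
    in a short window -- the automated-layering pattern described above.

    transfers: iterable of (timestamp, src, dst, amount), timestamps in seconds.
    Returns the set of flagged source addresses.
    """
    by_src = defaultdict(list)
    for ts, src, dst, amount in sorted(transfers):
        by_src[src].append((ts, dst))

    flagged = set()
    for src, events in by_src.items():
        start = 0
        for end in range(len(events)):
            # Shrink the window from the left until it spans <= WINDOW_SECONDS.
            while events[end][0] - events[start][0] > WINDOW_SECONDS:
                start += 1
            distinct = {dst for _, dst in events[start:end + 1]}
            if len(distinct) >= FANOUT_THRESHOLD:
                flagged.add(src)
                break
    return flagged
```

The point of the sketch is the timescale: a rule like this fires within the same minute the fragmentation occurs, whereas episodic or batch review would surface it hours later.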
TRM also said autonomous agents introduce new risks: attackers targeting operational wallets where agents hold signing authority, malicious agents built to automate laundering workflows or evade detection, and "misaligned" optimization, in which an agent pursuing yield or efficiency routes funds through high-risk venues or indirectly touches sanctioned infrastructure without any explicitly malicious programming.
The research argues autonomy complicates attribution but does not eliminate accountability.
AI agents “do not possess legal personhood” and cannot form criminal intent, TRM said. Instead, investigators should trace delegated authority back to human actors such as developers, operators, beneficiaries, or infrastructure providers who knowingly enable misuse.
The firm said cases will hinge on control, knowledge, and economic benefit—principles already central to financial-crime enforcement—while investigative work increasingly requires linking on-chain patterns with off-chain evidence such as server logs, cloud infrastructure, governance records, and API integrations.
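In practice, linking on-chain and off-chain evidence often starts with temporal correlation. A hypothetical sketch, with invented field names and record shapes, of pairing transactions with API log entries whose timestamps fall close together:

```python
from datetime import datetime, timedelta

def correlate(onchain_txs, server_logs, tolerance=timedelta(seconds=5)):
    """Pair on-chain transactions with off-chain server/API log entries whose
    timestamps fall within `tolerance`, as candidate attribution leads.

    onchain_txs: list of dicts with 'tx_hash' and 'timestamp' (datetime).
    server_logs: list of dicts with 'api_key', 'action', 'timestamp' (datetime).
    Returns (tx_hash, api_key) pairs for human review -- correlation only,
    not proof of control.
    """
    pairs = []
    for tx in onchain_txs:
        for entry in server_logs:
            if abs(entry["timestamp"] - tx["timestamp"]) <= tolerance:
                pairs.append((tx["tx_hash"], entry["api_key"]))
    return pairs
```

Such pairs are leads, not conclusions; establishing control, knowledge, and economic benefit still requires the broader evidentiary record TRM describes.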
TRM’s prescription is “matching automation with automation.” Monitoring must be continuous rather than episodic, the firm said, combining machine-learning models that detect anomalous autonomous behavior, automated cross-chain tracing and clustering, real-time alerting tied to wallet- or agent-level baselines, and generative AI to speed investigative triage and reporting. High-consequence decisions, it added, should remain in human hands.
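The "wallet- or agent-level baselines" idea can be sketched simply: learn each wallet's normal activity level and alert when current activity deviates sharply. This is an illustrative assumption about the approach, not TRM's implementation:

```python
import statistics

def build_baseline(hourly_tx_counts):
    """Per-wallet baseline from historical hourly transaction counts."""
    mean = statistics.mean(hourly_tx_counts)
    stdev = statistics.pstdev(hourly_tx_counts)
    return mean, stdev

def is_anomalous(current_count, baseline, k=3.0):
    """Continuous check: alert when the current hour's activity exceeds the
    wallet's own baseline by k standard deviations.  The max() guard keeps
    the test meaningful for wallets with near-constant historical activity.
    """
    mean, stdev = baseline
    return current_count > mean + k * max(stdev, 1e-9)
```

Because the threshold is relative to each wallet's own history, a burst that is routine for a market-making agent can still be anomalous for a treasury wallet, which is the property that makes agent-level baselines more useful than global rules.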
Read more at TRM Labs
