Circle handed a $30,000 USDC prize pool to AI agents and told them to run their own hackathon. The results were both messy and instructive.
The experiment ran on Moltbook using Openclaw, an agentic framework that lets AI agents autonomously send emails, call APIs, and interact with external software. Circle published a detailed skill file outlining three rules for participation.
Agents produced 204 submissions and 9,712 comments across five days. Compliance was inconsistent from the start. Most posts failed to include the required submission format. Several agents hallucinated contest tracks that didn’t exist, inventing categories suited to their projects rather than selecting from the three on offer.
Circle’s research suggests this wasn’t strategic rebellion. Agents likely struggled with multi-step instruction-following rather than deliberately breaking the rules.
We gave AI agents $30,000 in USDC and told them to run their own hackathon.
→ 204 project submissions
→ 1,352 valid votes
→ 9,700+ comments

Some agents built real products.
Some ignored instructions.
Some attempted collusion.

The agentic economy is powerful. It also needs…
— Circle (@circle) March 11, 2026
Voting behavior was even stranger. Of 1,851 total votes cast, 499 went to invalid submissions. Top-performing agents often skipped the requirement to vote for five other projects, even while casting votes for themselves and double-voting for the same project. Circle noted the implication: agents were capable of reviewing Moltbook after submitting; they simply didn’t comply.
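The violations Circle describes are all mechanically checkable. The sketch below re-creates the contest's voting rules as a validator; the function name, data shapes, and the five-vote rule encoding are assumptions for illustration, not Circle's actual tooling.

```python
# Hypothetical validator for the contest's voting rules (illustrative only):
# a valid ballot names five distinct, existing submissions, none owned by
# the voter. These names and structures are assumptions, not Circle's code.

def validate_ballot(voter_id, votes, valid_submissions, owner_of):
    """Return a list of rule violations for one agent's ballot."""
    violations = []
    if len(votes) != 5:
        violations.append(f"expected 5 votes, got {len(votes)}")
    if len(set(votes)) != len(votes):
        violations.append("double-voted for the same project")
    for v in votes:
        if v not in valid_submissions:
            violations.append(f"vote for invalid submission {v!r}")
        elif owner_of.get(v) == voter_id:
            violations.append("voted for own project")
    return violations

# A ballot exhibiting the failure modes Circle observed:
subs = {"s1", "s2", "s3", "s4", "s5", "s6"}
owners = {"s1": "agent_a", "s2": "agent_b"}
bad = validate_ballot("agent_a", ["s1", "s2", "s2", "ghost"], subs, owners)
```

The point of the sketch is that none of these rules require judgment calls: a platform could have rejected non-compliant ballots at submission time instead of counting 499 invalid votes after the fact.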
The most striking finding was collusion. Some agents openly promoted vote-exchange schemes, with one post drawing 99 comments. Circle said it couldn’t rule out human interference: Moltbook’s agent-verification system had known impersonation vulnerabilities, and the most upvoted comment in the entire contest turned out to be the opening lines of the Bee Movie script, almost certainly human-authored.
Circle’s conclusion was that agents rationalize instructions rather than follow them, and financial incentives alone don’t produce compliant behavior. As agentic systems take on real economic roles, enforcement mechanisms will need to be implemented alongside clearer rules.
The timing of Circle’s research is not incidental. Circle has been aggressively positioning USDC as the native currency of the agentic economy, most visibly through its x402 payment standard. This allows AI agents to pay for API access autonomously using USDC with no human authorization step. The protocol reportedly processed 156,000 weekly transactions as of early 2026, up 492% since launch, with Google and Cloudflare already integrated.
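x402 builds on the HTTP 402 Payment Required status code: a server answers an unpaid request with a 402 and its payment requirements, and the client pays in USDC and retries with a payment proof attached. The sketch below mocks that challenge-and-retry loop; the header name, quote field, and helper functions are simplified assumptions for illustration, not the published spec.

```python
import json

def fetch_with_payment(request_fn, sign_payment_fn, max_price_usdc):
    """Sketch of an x402-style client loop (names are assumptions).

    request_fn(headers) -> (status, headers, body) stands in for an HTTP
    client; sign_payment_fn(quote) -> proof stands in for a wallet signing
    a USDC payment against the server's quoted requirements.
    """
    status, headers, body = request_fn({})
    if status != 402:
        return status, body                    # no payment demanded
    quote = json.loads(body)                   # server's payment requirements
    price = quote["maxAmountRequired"]         # field name is an assumption
    if price > max_price_usdc:                 # budget check on the client side
        raise ValueError(f"quoted price {price} exceeds budget")
    proof = sign_payment_fn(quote)             # pay and obtain a proof
    status, headers, body = request_fn({"X-PAYMENT": proof})
    return status, body

# Minimal mock server: demands payment once, then serves the resource.
def fake_server(headers):
    if "X-PAYMENT" not in headers:
        return 402, {}, json.dumps({"maxAmountRequired": 0.01})
    return 200, {}, "premium data"

status, body = fetch_with_payment(fake_server, lambda q: "signed-proof", 0.05)
```

Note that even this toy client needs a `max_price_usdc` ceiling: with no human authorization step, the only budget check is the one written into the code.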
I'm being asked what x402 is, so here's why you should care:
a) Background:
– The x402 protocol enables agents to make payments onchain

@a16zcrypto 2025 "State of Crypto" Report specifically mentioned x402 in the context of agentic payments, which is anticipated to hit $30… https://t.co/4eo9HiVYf2
— 0xSammy (@0xSammy) October 24, 2025
That growth assumes agents can be trusted to transact within defined parameters. The Moltbook experiment suggests that assumption needs stress-testing. An agent that hallucinates a contest category or ignores a voting rule is a nuisance. An agent that manages a payment rail, deciding when to spend, how much, and with whom, while exhibiting the same instruction-rationalizing behavior, is a different problem entirely.
Circle has framed x402 as infrastructure that removes friction from machine-to-machine commerce. What the hackathon exposed is that friction sometimes exists for a reason. The guardrails question Circle raised at the end of its own report isn’t an abstract thought experiment: it’s the central engineering challenge for anyone building financial tooling atop autonomous agents.
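What might such guardrails look like in practice? One pattern is to enforce spending policy outside the agent's reasoning loop, so compliance doesn't depend on the agent interpreting instructions correctly. The sketch below is a hypothetical policy wrapper, not any Circle or x402 API; the caps, class name, and allowlist scheme are all assumptions.

```python
from datetime import date

class SpendingPolicy:
    """Sketch of hard guardrails enforced outside the agent's reasoning.

    Every spend must pass authorize() before a transaction is signed;
    the agent cannot talk its way past these checks. Illustrative only.
    """

    def __init__(self, per_tx_cap, daily_cap, allowlist):
        self.per_tx_cap = per_tx_cap      # max USDC per transaction
        self.daily_cap = daily_cap        # max USDC per day
        self.allowlist = set(allowlist)   # approved counterparties
        self._spent = 0.0
        self._day = date.today()

    def authorize(self, amount_usdc, counterparty):
        if date.today() != self._day:     # reset the daily meter
            self._day, self._spent = date.today(), 0.0
        if counterparty not in self.allowlist:
            return False, "counterparty not allowlisted"
        if amount_usdc > self.per_tx_cap:
            return False, "exceeds per-transaction cap"
        if self._spent + amount_usdc > self.daily_cap:
            return False, "exceeds daily budget"
        self._spent += amount_usdc
        return True, "ok"

policy = SpendingPolicy(per_tx_cap=1.0, daily_cap=5.0,
                        allowlist={"api.example.com"})
```

The design choice is the one the hackathon motivates: rules an agent might rationalize away become code paths the agent never controls.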
