Crypto security is often discussed in terms of exploits, audits, and defensive tooling. This week, Vitalik Buterin pushed the conversation in a different direction. In a detailed post on X, he described security as the effort to minimize the gap between user intent and what a system actually does. Under that definition, security failures aren’t only bugs or hacks, but moments where software executes code correctly while betraying what the human behind the keyboard expected.
How I think about "security":
The goal is to minimize the divergence between the user's intent, and the actual behavior of the system.
"User experience" can also be defined in this way. Thus, "user experience" and "security" are thus not separate fields. However, "security"…
— vitalik.eth (@VitalikButerin) February 22, 2026
Buterin’s framing collapses the usual divide between security and user experience. Both are about intent alignment, but they apply pressure in different ways. User experience deals with everyday interactions, while security focuses on tail-risk cases where a mismatch carries heavy downside and often involves hostile actors.
The argument starts with a simple example. A user wants to send 1 ETH to “Bob”. That intent already hides ambiguity. Bob must be represented by an address or key, and that mapping can fail. Even the meaning of “ETH” depends on which chain the user accepts as canonical after a fork. None of this fits neatly into code, leaving the system to approximate the user’s wishes.
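That ambiguity can be made concrete. In the minimal sketch below (the contact registry, function, and field names are all hypothetical), a wallet must translate "send 1 ETH to Bob" into exact transaction fields, and each translation step is a place where intent and actual behavior can diverge:

```python
# Hypothetical sketch: resolving a human-readable intent ("send 1 ETH to Bob")
# into concrete transaction fields. Every name and check here is illustrative.

KNOWN_CONTACTS = {"Bob": "0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B"}

def resolve_intent(recipient_name: str, amount_eth: float, chain_id: int) -> dict:
    """Map a fuzzy human intent onto concrete transaction fields.

    The name-to-address mapping may be missing or stale, and "ETH" is only
    meaningful relative to the chain the user considers canonical.
    """
    address = KNOWN_CONTACTS.get(recipient_name)
    if address is None:
        raise ValueError(f"No address on record for {recipient_name!r}")
    return {
        "to": address,                       # assumes the mapping is current
        "value_wei": int(amount_eth * 10**18),
        "chain_id": chain_id,                # assumes this is the chain the user means
    }
```

Nothing in the code is wrong, yet a stale entry in `KNOWN_CONTACTS` would execute flawlessly while betraying the user's intent, which is exactly the gap Buterin describes.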
Buterin goes further with a sharper claim: perfect security isn't achievable, because user intent is too difficult to specify fully. This holds even before attackers enter the picture, and with privacy goals the problem deepens. Encrypting messages may protect content while metadata still exposes senders, recipients, and timestamps. Whether that level of exposure counts as trivial or catastrophic depends on context, not math.
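The metadata point can be shown with a toy example. Assuming a simple message envelope (the field names are hypothetical), a passive observer who cannot read the ciphertext still learns who talked to whom, and when:

```python
# Toy illustration: encryption hides the content but not the envelope.
message = {
    "sender": "alice@node1",
    "recipient": "bob@node2",
    "timestamp": 1766000000,    # when the message was sent
    "ciphertext": "9f3a8c...",  # opaque to a network observer
}

# What an observer still sees, even with strong encryption of the body:
observable_metadata = {k: v for k, v in message.items() if k != "ciphertext"}
```

Whether leaking `observable_metadata` matters is a judgment about the user's situation, not a property the cryptography can settle on its own.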
Buterin’s views mirror long-running debates in AI safety, where goal specification proves tougher than execution: a system can follow instructions exactly and still fail the person using it.
Buterin argues for redundancy: users express intent in multiple overlapping ways, and execution happens only when those signals align.
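In code, that redundancy rule reduces to a unanimity check across independent signals. A minimal sketch, with hypothetical signal names:

```python
def should_execute(signals: dict) -> bool:
    """Execute only when every independent intent signal agrees."""
    return bool(signals) and all(signals.values())

# Each entry is a separate, overlapping reading of the user's intent:
signals = {
    "simulation_matches_preview": True,  # simulated outcome equals what the UI showed
    "within_spending_limit": True,       # a separate policy layer re-checks the amount
    "confirmed_on_second_device": True,  # intent reaffirmed through another channel
}
```

The design choice is that any single disagreeing signal halts execution, so no one component is a single point of failure.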
Concrete examples already exist. Transaction simulations show expected outcomes before confirmation. Spending limits and multisig require intent to be reaffirmed through separate controls. Formal verification and post-assertions compare what code does against stated properties. Each method approaches intent from a different angle.
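A post-assertion of the kind mentioned above can be sketched as follows: after a state change is applied, the code re-checks the stated properties. This assumes a simple in-memory balance map; the function and property names are illustrative:

```python
def transfer_with_postconditions(balances: dict, sender: str,
                                 recipient: str, amount: int) -> None:
    """Apply a transfer, then assert the declared properties still hold."""
    total_before = sum(balances.values())
    balances[sender] = balances.get(sender, 0) - amount
    balances[recipient] = balances.get(recipient, 0) + amount
    # Post-assertions: compare what the code did against stated properties.
    assert balances[sender] >= 0, "sender overdrawn"
    assert sum(balances.values()) == total_before, "value created or destroyed"
```

For example, `transfer_with_postconditions({"alice": 5, "bob": 0}, "alice", "bob", 2)` succeeds, while an overdraw trips the assertion instead of silently completing.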
The design goal is clear: low-risk actions should feel simple, even automatic, while high-risk ones justify added friction.
Buterin also points to large language models (LLMs) as a possible extra signal. A generic model reflects broad human norms. A user-tuned model reflects what is normal for that person. Used carefully, this can flag anomalous behavior, but used alone, it becomes another single point of failure. He is explicit about that boundary.
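That boundary, where the model flags but deterministic controls decide, might look like the following sketch. The threshold and function names are hypothetical; the anomaly score stands in for the output of a user-tuned model:

```python
def risk_tier(hard_checks_passed: bool, anomaly_score: float) -> str:
    """Use a model-derived anomaly score as one signal, never the only gate.

    hard_checks_passed: deterministic controls (limits, multisig, simulation).
    anomaly_score: 0.0 = typical for this user, 1.0 = highly unusual
                   (the 0.8 threshold is purely illustrative).
    """
    if not hard_checks_passed:
        return "blocked"              # deterministic checks always win
    if anomaly_score > 0.8:
        return "extra_confirmation"   # unusual: ask the user to reaffirm intent
    return "normal"
```

The score can escalate a transaction to an extra confirmation, but it can never approve one on its own, which keeps the model from becoming another single point of failure.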

For wallet developers and protocol teams, the takeaway is practical. Security work is shifting away from isolated defenses and toward systems that test intent from several directions. The harder question isn’t how to add more clicks, but how to decide which actions deserve them.