The standard critique of magic-link authentication is one sentence: "It's only as secure as your email." If an attacker reads your inbox, they can click your sign-in links and become you.
That critique is fair for most magic-link implementations. It is not fair for ours, and the difference is worth explaining, because the security model we built is interesting on its own terms even if you don't use MIR.
The standard pattern
Most magic-link systems work like this:
1. User enters email
2. Server generates a token, stores it in a database, and emails a link
3. User clicks the link
4. Server checks the token, marks it consumed, creates a session
Anyone who possesses the link can complete step 4. That's the entire security model. If the email account is compromised, so is everything that uses magic links to authenticate against it.
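The standard pattern can be sketched in a few lines. This is a minimal illustration, not anyone's production code: the in-memory `Map`, the `example.com` URL, and the function names are all placeholders, and a real system would persist token hashes in a database.

```typescript
import { randomBytes, createHash } from "crypto";

// Placeholder in-memory store; a real system would use a database.
const tokens = new Map<string, { email: string; expiresAt: number; used: boolean }>();

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Step 2: generate a token, store only its hash, and build the link to email.
function issueLink(email: string): string {
  const token = randomBytes(32).toString("base64url");
  tokens.set(sha256(token), { email, expiresAt: Date.now() + 10 * 60_000, used: false });
  return `https://example.com/auth?token=${token}`; // example.com is a placeholder
}

// Step 4: check the token, mark it consumed, and return the email to open a session for.
function consume(token: string): string | null {
  const rec = tokens.get(sha256(token));
  if (!rec || rec.used || Date.now() > rec.expiresAt) return null;
  rec.used = true; // single use
  return rec.email;
}
```

Note that `consume` asks nothing about who is calling it. Any holder of the link can complete the login, which is exactly the critique.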
This is why security people hate magic links and prefer hardware keys. A hardware key cannot be exfiltrated by reading email.
The pattern we built
When a user requests a magic link from MIR, three things happen on the server, not one:
- A token is generated, hashed, and stored. (Standard.)
- A login nonce is generated and stored alongside the token. The plaintext nonce is set as an httpOnly cookie in the requesting browser.
- A device fingerprint of the requesting device is stored alongside the token.
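The issuance side of that pattern might look like the sketch below. It is illustrative only: the store, field names, cookie name, and the way the fingerprint arrives as a precomputed string are all assumptions, and MIR's actual implementation will differ in detail.

```typescript
import { randomBytes, createHash } from "crypto";

interface PendingLogin {
  nonceHash: string;    // hash of the nonce set as a cookie in the requesting browser
  fingerprint: string;  // derived from the requesting device
  expiresAt: number;
  used: boolean;
}

// Placeholder in-memory store, keyed by token hash.
const pending = new Map<string, PendingLogin>();
const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// On a link request: store token hash, nonce hash, and device fingerprint together.
// Returns the token to embed in the emailed link, plus the Set-Cookie header value
// that binds the login nonce to the requesting browser.
function requestLink(fingerprint: string): { token: string; setCookie: string } {
  const token = randomBytes(32).toString("base64url");
  const nonce = randomBytes(32).toString("base64url");
  pending.set(sha256(token), {
    nonceHash: sha256(nonce),
    fingerprint,
    expiresAt: Date.now() + 10 * 60_000, // ten-minute expiry
    used: false,
  });
  return {
    token,
    setCookie: `login_nonce=${nonce}; HttpOnly; Secure; SameSite=Lax; Max-Age=600`,
  };
}
```

The key design point is that all three artifacts are created in the same transaction, so the link is bound to its requesting context from the moment it exists.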
When the link is clicked, MIR doesn't just check the token. It checks all three:
- Token validity and freshness (must exist, must not be expired, must not have been used). The expiry is ten minutes, not the half-hour or hour most systems use.
- Device fingerprint match (the consuming device must match the device that requested the link, with a graceful exception for browser auto-updates inside the ten-minute window).
- Login nonce presence (recorded as a context signal, not a hard block).
The fingerprint check is the strong gate. If it fails, the link is rejected with the message "This sign-in link must be opened on the same device that requested it," the token is consumed so it cannot be retried, and a negative signal is recorded against the user's own history. That last part is worth emphasizing: failed interception attempts feed the participation history of the very identity the attacker was trying to compromise. The system both blocks the attack and remembers that an attack happened.
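The consumption side, checking all three gates, could be sketched as follows. Again this is a simplified stand-in: the `pending` store shape, the signal names, and the strict-equality fingerprint comparison are assumptions (the real system tolerates browser auto-updates inside the ten-minute window, which a plain string comparison does not capture).

```typescript
import { createHash } from "crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

interface PendingLogin {
  nonceHash: string;
  fingerprint: string;
  expiresAt: number;
  used: boolean;
}
const pending = new Map<string, PendingLogin>(); // placeholder store, keyed by token hash
const signals: string[] = [];                    // stand-in for the participation-history sink

type Outcome = { ok: true; nonceSeen: boolean } | { ok: false; reason: string };

function consumeLink(token: string, fingerprint: string, cookieNonce?: string): Outcome {
  const rec = pending.get(sha256(token));
  // 1. Token validity and freshness: must exist, be unused, and be within ten minutes.
  if (!rec || rec.used || Date.now() > rec.expiresAt) {
    return { ok: false, reason: "Link is invalid or expired." };
  }
  rec.used = true; // consume before any further check, so a failed attempt cannot be retried
  // 2. Device fingerprint: the strong gate. (Simplified to strict equality here.)
  if (rec.fingerprint !== fingerprint) {
    signals.push("magic_link_fingerprint_mismatch"); // negative signal on the user's history
    return {
      ok: false,
      reason: "This sign-in link must be opened on the same device that requested it.",
    };
  }
  // 3. Login nonce: recorded as a context signal, not a hard block.
  const nonceSeen = cookieNonce !== undefined && sha256(cookieNonce) === rec.nonceHash;
  if (!nonceSeen) signals.push("magic_link_nonce_missing");
  return { ok: true, nonceSeen };
}
```

The ordering matters: the token is burned before the fingerprint is compared, so a mismatch both blocks the attempt and destroys the link.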
What this defeats
There are two distinct attack scenarios for inbox compromise. They are not equivalent, and our design defeats one but not the other.
Passive interception. A legitimate user requests a sign-in link from their own device. An attacker who has compromised the user's inbox sees the link arrive, clicks it from their own machine, and tries to complete the login. This attack fails against MIR: the link is bound to the device that requested it, so the attacker's fingerprint does not match and the rejection, token consumption, and negative-signal recording described above all fire. The legitimate user can be notified the next time they sign in, and any future authorization decision involving them can take that signal into account.
Active initiation. An attacker who has compromised the user's inbox initiates a login flow themselves, from their own device. The magic link arrives in the compromised inbox. The attacker clicks it from the same device that requested it, and the fingerprint matches because they generated it. This attack succeeds against the fingerprint binding. The protections that apply here are different: aggressive rate limiting on link requests per email, the chance that the user notices an unrequested sign-in email and reacts, and the audit trail of unfamiliar request IPs and devices that flags the resulting session as suspicious.
The important property of the active-initiation case is that it cannot be invisible. Every magic link request is logged with the requesting IP and device. Every successful session from a previously unseen device is a candidate for step-up authentication on the user's next high-stakes action. The system cannot prevent the initial sign-in, but it can refuse to extend that sign-in into anything consequential without additional verification. That is the role of step-up auth in the broader design, and it is the answer to "what about the active case."
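The rate-limiting layer mentioned above can be as simple as a per-email sliding window. The limit of three requests per fifteen minutes below is illustrative, not MIR's actual policy.

```typescript
// Per-email sliding-window rate limiter; limit and window are illustrative values.
const requests = new Map<string, number[]>(); // email -> recent request timestamps (ms)

function allowLinkRequest(
  email: string,
  now = Date.now(),
  limit = 3,
  windowMs = 15 * 60_000,
): boolean {
  // Keep only requests inside the current window.
  const recent = (requests.get(email) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) {
    requests.set(email, recent);
    return false; // over the limit: refuse to send another link
  }
  recent.push(now);
  requests.set(email, recent);
  return true;
}
```

A limiter like this caps how many unrequested sign-in emails an attacker can generate before the flood itself becomes a signal.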
The honest summary is that fingerprint binding closes one attack vector cleanly and shrinks another. It does not close all of them, and we do not claim it does.
What it does not defeat
We are not selling magic. There are threat models this design does not protect against, and being honest about them is more useful than pretending the design is perfect.
Endpoint compromise. If an attacker has malware on the user's actual machine, they have the cookie, the fingerprint, and the email. Magic links lose, but so does every other authentication method, including hardware keys, because the attacker is already inside the trust boundary.
Cold-start on a new device. If the user has never signed in from a particular device before and is initiating a login from that device, the protection downgrades to the ten-minute one-time link. The fingerprint binding only protects flows where the request and the consumption happen on the same device. The first time you sign in from a new laptop, you're getting standard magic-link security, not enhanced security. We accept this trade-off because the alternative is forcing every new-device login through a hardware key, which most users will not have.
Adversary-in-the-middle on the device's network. A sufficiently motivated attacker who can intercept HTTPS traffic on the user's local network has options that are out of scope for any application-layer protection. The answer there is TLS certificate pinning and not opening laptops on hostile Wi-Fi.
Why we built it this way
The default position in the security industry is that magic links are weak and hardware keys are strong, and you should always prefer the strong option. That's true if your only constraint is security. It is not true if your constraints also include onboarding velocity, user heterogeneity, and the reality that most users don't carry a hardware key.
The position we landed on is that magic links can be made substantially stronger without losing the UX advantage that makes them useful in the first place. The fingerprint binding is invisible to users who request and consume their links on the same device, which is the overwhelming majority of cases. The nonce feeds an audit trail that makes intrusions visible. The expiry is short enough to make stolen links low-value. And the negative-signal emission means the system doesn't just block the attack, it remembers it.
For users who want stronger guarantees, passkeys are available. We're not arguing magic links are the right answer for everyone. We're arguing that the standard "magic links are only as secure as email" critique stops being accurate the moment you bind the link to the device that requested it.
Most authentication systems are designed by people who started from a security model and let the UX fall where it would. We started from the UX and asked how much security we could layer underneath without giving any of it back. The answer turned out to be: more than the industry assumes.
MIR — Memory Infrastructure Registry
The internet's participation history layer.