Experimental research tool. Not medical, career, or professional advice. No warranty. See methodology.

The Linear Algebra of the Match

Medical residency matching is a high-dimensional optimization problem. We decompose it, calibrate it against real NRMP outcomes, and let you simulate your own rank list against the calibrated cohort.

What we model

The match is a one-shot equilibrium between two ranked sides (applicants and programs). We fit a probabilistic model to NRMP Charting Outcomes 2024 and Match Results 2022–2026, then run a Roth-Peranson (Gale-Shapley deferred-acceptance) simulation at NRMP scale (~44,000 applicants × ~7,700 programs). Each program ends up with an equilibrium acceptance threshold; your profile generates a score per program; the gap between score and threshold predicts your match probability.
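The core mechanic can be sketched as a plain applicant-proposing deferred-acceptance loop. This is a toy illustration only: the real Roth-Peranson algorithm also handles couples and program reversions, which are omitted here, and all names and data structures below are hypothetical, not our implementation.

```python
# Toy applicant-proposing deferred acceptance (Gale-Shapley).
# Omits couples and reversions, which the real Roth-Peranson algorithm handles.

def deferred_acceptance(applicant_rols, program_rankings, quotas):
    """applicant_rols: {applicant: [program, ...]} in preference order.
    program_rankings: {program: {applicant: rank}} (lower rank = preferred).
    quotas: {program: number of positions}."""
    next_choice = {a: 0 for a in applicant_rols}   # index of next program to try
    tentative = {p: [] for p in program_rankings}  # applicants tentatively held
    free = [a for a in applicant_rols if applicant_rols[a]]
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_rols[a]):
            continue                               # ROL exhausted: unmatched
        p = applicant_rols[a][next_choice[a]]
        next_choice[a] += 1
        if a not in program_rankings[p]:
            free.append(a)                         # program did not rank a
            continue
        tentative[p].append(a)
        tentative[p].sort(key=lambda x: program_rankings[p][x])
        if len(tentative[p]) > quotas[p]:
            free.append(tentative[p].pop())        # worst held applicant bumped
    return {a: p for p, held in tentative.items() for a in held}
```

Because every bump re-frees the displaced applicant, the loop terminates with a stable matching: no applicant-program pair prefers each other over their assigned outcome.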

Real NRMP data
Per-pool match rates (MD/DO/IMG senior + grad), per-specialty applicant feature means, per-program quotas across 5 cycles (2022–2026), couples-match outcomes 1987–2024.
Cohort equilibrium
Pre-computed at build time: full Gale-Shapley over the calibrated synthetic cohort. Runtime looks up the per-program acceptance threshold instead of re-simulating.
Validated
Simulated per-pool match rates land within ~5 percentage points of the real NRMP targets. Per-specialty calibration scorecard at /methodology/validation.

The math framework

Structure layer

S (N×7) — applicant features

P (M×5) — program features

Ap = S @ Wpᵀ — program affinity

As = Ws @ Pᵀ — applicant affinity
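The structure layer is two matrix products. A minimal NumPy sketch, assuming the weight matrix shapes Wp (M×7) and Ws (N×5) are what the definitions of S (N×7) and P (M×5) imply; the toy sizes and random values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 3                      # toy sizes; NRMP scale is ~44k × ~7.7k
S = rng.normal(size=(N, 7))      # applicant features
P = rng.normal(size=(M, 5))      # program features
Wp = rng.normal(size=(M, 7))     # per-program weights over applicant features
Ws = rng.normal(size=(N, 5))     # per-applicant weights over program features

Ap = S @ Wp.T                    # (N×M): how each program scores each applicant
As = Ws @ P.T                    # (N×M): how each applicant scores each program
```

Both products land in the same N×M space, so entry (i, j) of Ap and As gives the two sides' views of the same applicant-program pair.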

Cohort layer

τ_p — per-program acceptance threshold

σ_p — threshold std

P(accept | s, p) = logistic((s − τ_p) / σ_p)

Calibrated by Roth-Peranson at build time.
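The acceptance formula above is a single logistic in the score-threshold gap. A minimal sketch (function and argument names are illustrative):

```python
import math

def p_accept(s, tau_p, sigma_p):
    """P(accept | s, p) = logistic((s − τ_p) / σ_p): the applicant's score s
    for program p against its pre-computed threshold τ_p, softened by σ_p."""
    return 1.0 / (1.0 + math.exp(-(s - tau_p) / sigma_p))
```

At the threshold (s = τ_p) the probability is exactly 0.5; σ_p controls how sharply it rises above the threshold and falls below it.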

Action layer

ROL — rank-ordered list

signals — gold/silver tier per program

P(land at k) = P(accept_k) × ∏_{i<k} (1 − P(accept_i))
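The landing formula walks the ROL top-down: you land at rank k only if every program ranked above k rejects you. A sketch that treats the per-program acceptances as independent, as the product form implies (helper name is illustrative):

```python
def landing_probs(accept_probs):
    """accept_probs: acceptance probabilities in ROL order (rank 1 first).
    Returns (per-rank landing probabilities, probability of going unmatched)."""
    probs, survive = [], 1.0
    for p in accept_probs:
        probs.append(survive * p)   # rejected by all above, accepted here
        survive *= (1.0 - p)        # rejected here too; fall to next rank
    return probs, survive

# landing_probs([0.5, 0.5]) → ([0.5, 0.25], 0.25)
```

The landing probabilities plus the unmatched remainder always sum to 1, which makes the decomposition easy to sanity-check.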

What we don't model

Doximity reputation, second-order beliefs (“everyone signals program X so I shouldn't”), per-program weight variance beyond what NRMP publishes, fellowship cohorts (cohort math deferred until per-fellowship data is extracted from NRMP/SF Match/AUA reports). We're honest about these in methodology.