Find the right starting point.
Answer a short set of questions across the math and ML theory used throughout TheoremPath. You will get a suggested level, a starting path, and a focused list of topics to review.
Use the skip option
If you do not know, skip instead of guessing. That gives a cleaner signal and keeps the run useful.
Get next steps
The result points to a starting path and a short list of topics worth reviewing first.
Keep history
Sign in if you want the result attached to your learning history and review deck.
Scope
What this checks
Linear algebra, calculus, probability vocabulary, and notation.
Concentration, estimation, likelihood, Fisher information, and asymptotics.
Gradient descent, stochastic optimization, Adam, warmup, and proximal thinking.
Generalization, ERM, VC dimension, PAC, Rademacher complexity, and the bias-variance tradeoff.
Deep networks, attention, transformers, RL, and representation learning.
One harder cross-topic item that checks whether ideas move between areas.
Or pick a focused set
Subject-specific 10-question diagnostics
Each set is a curated, audit-graph-tracked diagnostic. Picking one runs only its 10 questions and produces a focused review path for that subject.
Math foundations
A curated 10-question diagnostic spine for set notation, proof habits, linear algebra, calculus, and core inequalities.
10 questions · 7 topics
Linear algebra foundations
A curated 10-question diagnostic spine for the LA-for-ML basics: algebraic identities, transpose and inverse rules, trace, determinant, linear independence, the column-space/null-space picture of Ax=b, the spectral theorem, and rank-nullity.
10 questions · 5 topics
Probability / concentration foundations
A curated 10-question diagnostic spine for probability measures, moments, Markov, union bounds, and continuity of probability.
10 questions · 3 topics
Probability / concentration bridge
A curated 10-question diagnostic bridge from sub-Gaussian tails through finite-class uniform convergence.
10 questions · 5 topics
Statistics / estimation
A curated 10-question diagnostic spine for likelihood, Bayesian/frequentist interpretation, hypothesis testing, Fisher information, the Cramér-Rao bound, and MLE regularity.
10 questions · 6 topics
Optimization foundations
A curated 10-question diagnostic spine for gradients, convexity, SGD, batch-size noise, step-size failure, Hessian checks, Newton updates, and Adam.
10 questions · 7 topics
Learning theory foundations
A curated 10-question diagnostic spine for supervised learning, overfitting, bias-variance, generalization gaps, VC dimension, ERM, Sauer-Shelah, and uniform convergence.
10 questions · 6 topics
Sets are versioned and tracked at /audit. Each set is a curated 10-question diagnostic with a learner outcome, target review pages, and a deterministic question order; misses become review targets on the result page.
Diagnostic track
Start broad, or focus the check.
Full Placement is the default. Use a focused check when you already know you want math foundations or ML/RL readiness.
Advanced focused checks
Optional narrower runs for people who already know what they want to stress-test. These are not the best first click for most learners.
Optional self-check
Leave blank if you want the neutral ramp. Pick only the areas where you have a strong signal.
Vectors, matrices, derivatives, notation.
Distributions, estimators, likelihood, concentration.
Gradients, convexity, SGD, Adam, proximal ideas.
ERM, VC, PAC, Rademacher, generalization.
Deep nets, attention, transformers, value functions.
Moving ideas across topics and checking assumptions.
If you are unsure, choose Full Placement. Sign in to attach runs to your profile.
Browser-only run
This run is saved only in this browser and will not create account-level learning events. Sign in first if you want the run to count toward your profile.