Neural Tangent Kernel: Lazy Training, Kernel Equivalence, μP, and the Limits of Width
In the infinite-width limit under the NTK parameterization, training a neural network with gradient descent is mathematically equivalent to kernel regression with a fixed kernel, the neural tangent kernel. The same limit suppresses feature learning ("lazy training"): the network's internal representations stay frozen at their initialization values, which is why μP and mean-field parameterizations exist. The NTK parameterization thus marks the precise boundary between the kernel regime and the feature-learning regime.
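One concrete way to see the kernel equivalence is to compute the empirical NTK by automatic differentiation and use it for kernel regression. The sketch below is a minimal illustration in JAX, not any library's API; the names `init_params`, `mlp`, and `empirical_ntk` are all made up here. It builds a small NTK-parameterized MLP, forms the Gram matrix of parameter gradients at initialization, and solves the resulting kernel ridge regression. At finite width this only approximates the trained network; the exact equivalence holds in the infinite-width limit.

```python
# Minimal sketch: empirical NTK of an NTK-parameterized MLP, used for
# kernel regression. All function names are illustrative assumptions.
import jax
import jax.numpy as jnp

def init_params(key, sizes):
    # NTK parameterization: weights ~ N(0, 1); the 1/sqrt(fan_in)
    # scaling lives in the forward pass, not in the initialization.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    h = x
    for W, b in params[:-1]:
        h = jax.nn.relu(h @ W / jnp.sqrt(W.shape[0]) + b)
    W, b = params[-1]
    return (h @ W / jnp.sqrt(W.shape[0]) + b).squeeze(-1)

def empirical_ntk(params, x1, x2):
    # K(x1, x2) = J(x1) J(x2)^T: the Gram matrix of per-example
    # gradients of the network output w.r.t. all parameters.
    j1 = jax.jacobian(mlp)(params, x1)
    j2 = jax.jacobian(mlp)(params, x2)
    flat = lambda j: jnp.concatenate(
        [l.reshape(l.shape[0], -1) for l in jax.tree_util.tree_leaves(j)],
        axis=1)
    return flat(j1) @ flat(j2).T

key = jax.random.PRNGKey(0)
params = init_params(key, [1, 512, 512, 1])
x_train = jnp.linspace(-1.0, 1.0, 8)[:, None]
y_train = jnp.sin(3.0 * x_train).squeeze(-1)
x_test = jnp.linspace(-1.0, 1.0, 50)[:, None]

# Kernel regression with the NTK taken at initialization. In the
# infinite-width limit, a network trained to convergence with squared
# loss and a small learning rate converges to this predictor; the tiny
# ridge term is only for numerical stability.
K = empirical_ntk(params, x_train, x_train)
k_star = empirical_ntk(params, x_test, x_train)
y_pred = k_star @ jnp.linalg.solve(K + 1e-6 * jnp.eye(len(x_train)), y_train)
```

At width 512 the empirical kernel already fluctuates only mildly across random seeds; rerunning with a larger hidden width shrinks both that fluctuation and the gap between the kernel prediction and an actually trained network, which is the lazy-training picture in miniature.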