Theorem Index
Curated theorem entries across probability, concentration, learning theory, optimization, and reinforcement learning. Some entries have a dedicated theorem page; others link to the exact topic section where the statement appears.
Lean-backed theorem pages
Dedicated theorem pages with exact statements, proof scope, dependencies, and diagnostics.
Probability and concentration core
The foundation path currently carrying the strongest Lean and diagnostic coverage.
Markov's Inequality
Nonnegative tail probability bounded by expectation divided by threshold.
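For reference, a standard form of the statement (the exact formalized hypotheses live on the theorem page):

```latex
% Markov's inequality: for a nonnegative random variable X and any a > 0,
\Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a}.
```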
Chebyshev's Inequality
Deviation probability controlled by variance and squared distance.
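A standard form, assuming X has finite variance:

```latex
% Chebyshev's inequality: for any t > 0,
\Pr\bigl[\,|X - \mathbb{E}[X]| \ge t\,\bigr] \le \frac{\operatorname{Var}(X)}{t^2}.
```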
Hoeffding's Lemma
Bounded centered variables have sub-Gaussian MGF control.
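The usual statement, for a centered variable bounded in an interval:

```latex
% Hoeffding's lemma: if E[X] = 0 and a <= X <= b almost surely, then for all lambda in R,
\mathbb{E}\bigl[e^{\lambda X}\bigr] \le \exp\!\left(\frac{\lambda^2 (b - a)^2}{8}\right).
```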
Finite Union Bound
Probability of a finite union is bounded by the sum of probabilities.
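In symbols:

```latex
% Finite union bound: for events A_1, ..., A_n,
\Pr\!\left[\,\bigcup_{i=1}^{n} A_i\,\right] \le \sum_{i=1}^{n} \Pr[A_i].
```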
Weak Law of Large Numbers
Sample averages converge in probability under the recorded scope.
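One common hypothesis set is i.i.d. variables with finite variance; the theorem page records the exact scope used here:

```latex
% Weak law of large numbers: for i.i.d. X_1, X_2, ... with mean mu,
\forall \varepsilon > 0:\quad
\Pr\!\left[\,\left|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right| \ge \varepsilon\,\right]
\xrightarrow[n \to \infty]{} 0.
```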
Uniform Convergence for Finite Classes
The stochastic theorem is not fully formalized; checked artifacts cover bridge steps.
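A typical form of the target statement, assuming a loss bounded in [0, 1] and n i.i.d. samples (the checked artifacts cover only bridge steps toward it):

```latex
% Uniform convergence for a finite class H: with probability at least 1 - delta,
\sup_{h \in \mathcal{H}} \bigl|\hat{R}_n(h) - R(h)\bigr|
\le \sqrt{\frac{\ln(2|\mathcal{H}|/\delta)}{2n}}.
```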
Learning-theory and ML theorem backlog
High-value statements that should eventually receive exact claim pages or tighter proof records.
Bernstein's Inequality
Variance-sensitive bounded-sum tail bound; Lean wrapper not promoted yet.
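A standard one-sided form, for independent centered variables bounded by M:

```latex
% Bernstein's inequality: |X_i| <= M, E[X_i] = 0, sigma^2 = sum_i Var(X_i); for t > 0,
\Pr\!\left[\sum_{i=1}^{n} X_i \ge t\right]
\le \exp\!\left(-\frac{t^2/2}{\sigma^2 + Mt/3}\right).
```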
McDiarmid's Inequality
Bounded-differences concentration for functions of independent inputs.
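The usual statement, for independent inputs and a function with bounded differences c_i:

```latex
% McDiarmid's inequality: for t > 0,
\Pr\bigl[f(X_1,\dots,X_n) - \mathbb{E}[f(X_1,\dots,X_n)] \ge t\bigr]
\le \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} c_i^2}\right).
```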
Sauer-Shelah Lemma
Combinatorial growth bound behind finite VC dimension.
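In terms of the growth function of a class with VC dimension d:

```latex
% Sauer-Shelah lemma: for m >= d,
\Pi_{\mathcal{H}}(m) \le \sum_{i=0}^{d} \binom{m}{i} \le \left(\frac{em}{d}\right)^{d}.
```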
VC Generalization Bound
Classical sample-complexity theorem; public topic statement only for now.
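One classical form (constants vary across texts; the public topic page carries the version intended here), for VC dimension d, n i.i.d. samples, and confidence 1 - delta:

```latex
% VC generalization bound: with probability at least 1 - delta, for all h in H,
R(h) \le \hat{R}_n(h) + \sqrt{\frac{8}{n}\left(d \ln\frac{2en}{d} + \ln\frac{4}{\delta}\right)}.
```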
Rademacher Generalization Bound
Expected-supremum capacity bound; not currently claim-formalized here.
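A standard form for a loss bounded in [0, 1], stated with the Rademacher complexity of the class (constants differ between empirical and expected variants):

```latex
% Rademacher generalization bound: with probability at least 1 - delta, for all h in H,
R(h) \le \hat{R}_n(h) + 2\,\mathfrak{R}_n(\mathcal{H}) + \sqrt{\frac{\ln(1/\delta)}{2n}}.
```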
Matrix Bernstein Inequality
Operator-norm concentration theorem for sums of random matrices.
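A standard tail form, for independent zero-mean d1 x d2 random matrices with operator norm at most L:

```latex
% Matrix Bernstein inequality: for t > 0,
\Pr\!\left[\Bigl\|\sum_{i} X_i\Bigr\| \ge t\right]
\le (d_1 + d_2)\,\exp\!\left(-\frac{t^2/2}{\sigma^2 + Lt/3}\right),
% where sigma^2 = max( ||sum_i E[X_i X_i^*]|| , ||sum_i E[X_i^* X_i]|| ).
```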
Bellman Optimality Equation
Fixed-point equation for optimal value functions in MDPs.
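For a discounted MDP with reward r, transition kernel P, and discount factor gamma, the optimal state-value function satisfies:

```latex
% Bellman optimality equation:
V^{*}(s) = \max_{a}\left[\, r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \,\right].
```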
Policy Gradient Theorem
Gradient identity used by policy-gradient algorithms.
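One standard form of the identity, written against the discounted state-action distribution induced by the policy:

```latex
% Policy gradient theorem: for a parameterized policy pi_theta with objective J(theta),
\nabla_{\theta} J(\theta)
= \mathbb{E}_{s,a \sim \pi_{\theta}}
\bigl[\, \nabla_{\theta} \log \pi_{\theta}(a \mid s)\, Q^{\pi_{\theta}}(s, a) \,\bigr].
```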