Where this topic leads
Topics that build on Concentration Inequalities
Once you have covered Concentration Inequalities, these are the topics that cite it as a prerequisite. Pick by tier and by the area you want to push into next.
Editor's suggested next (24)
- Sub-Gaussian Random Variables
- Bernstein Inequality
- Sub-Exponential Random Variables
- McDiarmid's Inequality
- Adaptive Learning Is Not IID
- Algorithmic Stability
- Chernoff Bounds
- Empirical Risk Minimization
- Epsilon-Nets and Covering Numbers
- Glivenko-Cantelli Theorem
- Markov Decision Processes
- Matrix Concentration
- Minimax Lower Bounds: Le Cam, Fano, Assouad, and the Reduction to Testing
- No-Regret Learning
- PAC Learning Framework
- Rademacher Complexity
- Stochastic Gradient Descent Convergence
- Stochastic Processes for ML
- Symmetrization Inequality
- VC Dimension
- Bennett's Inequality
- Chi-Squared Concentration
- Hoeffding's Lemma
- Slud's Inequality
Core flagship topics (19)
- Algorithmic Stability (layer 3 · learning-theory-core)
- Bennett's Inequality (layer 2 · concentration-probability)
- Bernstein Inequality (layer 2 · concentration-probability)
- Chernoff Bounds (layer 1 · concentration-probability)
- Chi-Squared Concentration (layer 2 · concentration-probability)
- Empirical Risk Minimization (layer 2 · learning-theory-core)
- Epsilon-Nets and Covering Numbers (layer 3 · concentration-probability)
- Hoeffding's Lemma (layer 1 · concentration-probability)
- Markov Decision Processes (layer 2 · rl-theory)
- Matrix Concentration (layer 3 · concentration-probability)
- McDiarmid's Inequality (layer 3 · concentration-probability)
- Minimax Lower Bounds: Le Cam, Fano, Assouad, and the Reduction to Testing (layer 3 · statistical-foundations)
- PAC Learning Framework (layer 1 · learning-theory-core)
- Rademacher Complexity (layer 3 · learning-theory-core)
- Stochastic Gradient Descent Convergence (layer 2 · optimization-function-classes)
- Sub-Exponential Random Variables (layer 2 · concentration-probability)
- Sub-Gaussian Random Variables (layer 2 · concentration-probability)
- Symmetrization Inequality (layer 3 · concentration-probability)
- VC Dimension (layer 2 · learning-theory-core)
Standard topics (5)
- Adaptive Learning Is Not IID (layer 3 · learning-theory)
- Glivenko-Cantelli Theorem (layer 2 · learning-theory-core)
- No-Regret Learning (layer 3 · rl-theory)
- Slud's Inequality (layer 2 · concentration-probability)
- Stochastic Processes for ML (layer 2 · concentration-probability)