Where this topic leads
Topics that build on Convex Optimization Basics
Once you have completed Convex Optimization Basics, these are the topics that cite it as a prerequisite. Pick by tier and by the area you want to push into next.
Editor's suggested next (38)
- Regularization Theory
- Kernels and Reproducing Kernel Hilbert Spaces
- Activation Functions
- Ascent Algorithms and Hill Climbing
- Augmented Lagrangian and ADMM
- Bounded Rationality
- Convex Duality
- Coordinate Descent
- The EM Algorithm
- Expected Utility Theory
- Game Theory Foundations
- Gradient Descent Variants
- Interior Point Methods
- K-Means Clustering
- Lasso Regression
- Line Search Methods
- Logistic Regression
- Markov Decision Processes
- Minimax and Saddle Points
- Mirror Descent and Frank-Wolfe
- Nash Equilibrium
- Newton's Method
- Online Convex Optimization
- Optimizer Theory: SGD, Adam, and Muon
- Policy Gradient Theorem
- Preconditioned Optimizers: Shampoo, K-FAC, and Natural Gradient
- Projected Gradient Descent
- Proximal Gradient Methods
- Ridge Regression
- Riemannian Optimization and Manifold Constraints
- Scaling Laws
- Stability and Optimization Dynamics
- Stochastic Approximation Theory
- Subgradients and Subdifferentials
- Support Vector Machines
- Training Dynamics and Loss Landscapes
- The Kernel Trick
- Maximum A Posteriori (MAP) Estimation
Core flagship topics (20)
- Activation Functions (layer 1 · ml-methods)
- Bounded Rationality (layer 2 · decision-theory)
- Convex Duality (layer 2 · mathematical-infrastructure)
- Game Theory Foundations (layer 2 · decision-theory)
- Gradient Descent Variants (layer 1 · optimization-function-classes)
- K-Means Clustering (layer 1 · ml-methods)
- Lasso Regression (layer 2 · ml-methods)
- Logistic Regression (layer 1 · ml-methods)
- Markov Decision Processes (layer 2 · rl-theory)
- Maximum A Posteriori (MAP) Estimation (layer 0B · statistical-estimation)
- Newton's Method (layer 1 · numerical-optimization)
- Optimizer Theory: SGD, Adam, and Muon (layer 3 · llm-construction)
- Policy Gradient Theorem (layer 3 · rl-theory)
- Proximal Gradient Methods (layer 2 · numerical-optimization)
- Ridge Regression (layer 1 · ml-methods)
- Scaling Laws (layer 4 · llm-construction)
- Subgradients and Subdifferentials (layer 1 · optimization-function-classes)
- Support Vector Machines (layer 2 · ml-methods)
- The EM Algorithm (layer 2 · statistical-estimation)
- The Kernel Trick (layer 2 · ml-methods)
Standard topics (17)
- Ascent Algorithms and Hill Climbing (layer 1 · numerical-optimization)
- Augmented Lagrangian and ADMM (layer 2 · numerical-optimization)
- Coordinate Descent (layer 2 · numerical-optimization)
- Expected Utility Theory (layer 2 · decision-theory)
- Kernels and Reproducing Kernel Hilbert Spaces (layer 3 · optimization-function-classes)
- Line Search Methods (layer 2 · numerical-optimization)
- Minimax and Saddle Points (layer 2 · rl-theory)
- Mirror Descent and Frank-Wolfe (layer 3 · numerical-optimization)
- Nash Equilibrium (layer 2 · decision-theory)
- Online Convex Optimization (layer 3 · numerical-optimization)
- Preconditioned Optimizers: Shampoo, K-FAC, and Natural Gradient (layer 3 · optimization-function-classes)
- Projected Gradient Descent (layer 2 · numerical-optimization)
- Regularization Theory (layer 2 · optimization-function-classes)
- Riemannian Optimization and Manifold Constraints (layer 3 · optimization-function-classes)
- Stability and Optimization Dynamics (layer 2 · optimization-function-classes)
- Stochastic Approximation Theory (layer 2 · optimization-function-classes)
- Training Dynamics and Loss Landscapes (layer 4 · llm-construction)
Advanced or specialty topics (1)
- Interior Point Methods (layer 3 · numerical-optimization)