
Reinforcement Learning from Human Feedback

Question 1 of 3
RLHF typically trains a reward model on pairwise human preferences before policy optimization. Why preferences rather than absolute ratings?
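The setup the question describes is commonly modeled with a Bradley-Terry pairwise loss: the reward model only needs to score the preferred response above the rejected one, sidestepping the calibration problems of absolute ratings across annotators. A minimal NumPy sketch (the function name and toy scores are illustrative, not from any particular library):

```python
import numpy as np

def pairwise_preference_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Negative log-likelihood of the human preference under the
    Bradley-Terry model: P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    Only the *difference* in scores matters, so no absolute scale is needed."""
    margin = r_chosen - r_rejected
    # -log(sigmoid(margin)), written in a numerically stable form
    return float(np.mean(np.log1p(np.exp(-margin))))

# Toy example: when the reward model already ranks the chosen response
# higher, the loss is small; a reversed ranking gives a larger loss.
low_loss = pairwise_preference_loss(np.array([2.0]), np.array([0.0]))
high_loss = pairwise_preference_loss(np.array([0.0]), np.array([2.0]))
```

Because the loss depends only on score margins, two annotators who use very different internal rating scales can still produce consistent training signal, which is one common argument for preferences over absolute scores.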