Reinforcement Learning from Human Feedback
Question 1 of 3
RLHF typically trains a reward model on pairwise human preferences before policy optimization. Why preferences rather than absolute ratings?
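As background for this question, here is a minimal sketch of the pairwise (Bradley-Terry) loss commonly used to fit a reward model on preference data. The `RewardModel` class, tensor shapes, and function names are illustrative assumptions, not any specific library's API; the key property to notice is that the loss depends only on the reward *difference* between the chosen and rejected responses.

```python
# Sketch of Bradley-Terry reward-model training on pairwise preferences.
# Assumes PyTorch; the toy RewardModel below is a stand-in for a real
# language-model-based scorer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: embeds token IDs and scores the mean embedding."""
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> one scalar reward per sequence
        return self.head(self.embed(token_ids).mean(dim=1)).squeeze(-1)

def pairwise_preference_loss(r_chosen: torch.Tensor,
                             r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximize log sigma(r_chosen - r_rejected).
    # Only the reward *difference* enters the loss, so raters never need
    # to agree on an absolute rating scale.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Usage on a dummy batch of (chosen, rejected) response pairs.
model = RewardModel()
chosen = torch.randint(0, 1000, (8, 32))    # preferred responses
rejected = torch.randint(0, 1000, (8, 32))  # dispreferred responses
loss = pairwise_preference_loss(model(chosen), model(rejected))
loss.backward()
```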