Executive summary: The degree of “slack” in AI training refers to how intensely the process pressures models to maximize reward versus tolerating suboptimal behavior. More slack means more uncertainty about the resulting goal; less slack increases confidence. The author leans toward less slack.
Key points:
Slack refers to how ruthlessly training optimizes for maximum reward versus tolerating flexibility and suboptimal behavior. Low-slack regimes apply intense pressure toward reward maximization (see the toy sketch after these points).
Slack affects how likely training is to produce models that pursue proxy goals imperfectly aligned with the training reward.
Slack is conceptually distinct from efforts to root out goal misgeneralization during training via adversarial techniques.
Biological reward processes, such as the human dopamine system, likely involve more slack than ML training does; evolutionary selection likewise leaves more slack.
We can likely control the amount of slack during training. Less slack gives more confidence about the resulting goal, while more slack risks wishful thinking.
The author leans toward less slack to increase certainty about goals, though the appropriate amount of slack may vary across training stages.
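To make the notion concrete, here is a minimal toy sketch (not from the original post) that models slack as a "good enough" acceptance threshold on reward in a random-search training loop. The reward function, the `slack_threshold` parameter, and the search procedure are all illustrative assumptions, not anyone's actual training setup.

```python
import random

def true_reward(theta: float) -> float:
    """Toy reward: peaks at theta = 1.0; anything else is suboptimal."""
    return -(theta - 1.0) ** 2

def train(slack_threshold: float, steps: int = 10_000) -> float:
    """Random-search 'training': keep the best candidate seen so far,
    and stop as soon as its reward is within `slack_threshold` of the
    known optimum (0.0 here).

    Low slack  (threshold near 0): pressure continues until reward is
    near-maximal, pinning the surviving parameters down tightly.
    High slack (larger threshold): many distinct, suboptimal candidates
    pass, so training constrains the survivor far less.
    """
    best = random.uniform(-2, 2)
    for _ in range(steps):
        candidate = random.uniform(-2, 2)
        if true_reward(candidate) > true_reward(best):
            best = candidate
        if true_reward(best) >= -slack_threshold:
            break  # optimization pressure is released once "good enough"
    return best

random.seed(0)
print("low slack :", train(slack_threshold=1e-4))  # forced close to 1.0
print("high slack:", train(slack_threshold=0.5))   # anywhere in a wide band
```

Under the high-slack setting, many different parameter values survive training, mirroring the summary's point that more slack means less certainty about which goal the trained model ends up with.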
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.