Criticism of the main framework in AI alignment by Michele Campolo (34 karma)
Why it’s good: Explores an area of AGI alignment that I think is under-discussed, namely the possibility of using AGI for direct moral progress. (Other under-discussed areas with more karma: AGI and Lock-In, Robert Long on Why You Might Want To Care About Artificial Sentience, ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting, Steering AI to care for animals, and so on.)