If you haven’t already come across it, you might find the points under the “how big of a risk is misalignment” section of Ajeya Cotra’s post on Cold Takes interesting. I would be pretty interested in a more comprehensive list of the ways alignment optimists and pessimists tend to disagree about the difficulty of the problem, and in what ML experts (outside of AI safety) think about each specific point, or whether they have other cruxes entirely.