In short, I am not hoping for a specific outcome, and I can’t take into account every single scenario. If someone comes away giving more credence to research on moral reasoning in AI after reading this, that’s already enough, considering that the topic doesn’t seem to be popular within AI alignment, and it was even more niche at the time I wrote this post.
Sure! And like I said, I do think this is valuable: it just seems more obviously valuable as a way to ensure the best outcomes (aligned AI), rather than as a means to avoid the worst outcomes.