I’m surprised you only mention/allude to AI briefly, and only in the context of Elon Musk’s donations to OpenAI.
Intuitively it seems reasonably likely that people’s impact on AI timelines and risks will dominate, and that the other considerations are smaller in comparison, especially as several of the billionaires on the list are tech billionaires.
So originally I was planning to have a normal estimation and then a separate one with more detail on x-risk in particular, but I didn’t end up going through with it. I agree that x-risk/AI could be expanded on more.
Hmm this comment seemed pretty controversial! Curious where/why people disagree, though no obligation to comment of course.
Personally something like “There are many important things in the world, and AI risk is just one among them; it would be bad to only evaluate outsiders by their alignment with our fairly idiosyncratic priorities.”
(I upvoted your comment, thanks!) Which things do you think are more important than AI risk?