I would like to add to this that there is also just the question of how strong a lot of these claims can be.
Maybe the future is super enormous. And maybe my eating sushi tomorrow night at 6pm instead of on Wednesday could have massive repercussions. But it could also have massive repercussions for me to eat sushi on Friday, or something.
A lot of things “could” have massive repercussions. Maybe if I hadn’t missed the bus last week, Super Hitler wouldn’t have been born.
There is some obvious low-hanging fruit in the world that would reduce the risk of catastrophe (say, nuclear disarmament, or the Seed Vault, or something). But there are also a lot of interventions whose mechanisms seem less obvious, and which could go radically differently than the people who outline them seem to think. Interventions to increase the number of liberal democracies on the planet and the amount of education could lead to more political polarization and social instability, for example. I'm not saying they would, but they could. Places that have been on the receiving end of "democratizing" interventions often wind up more politically unstable or dangerous for a variety of reasons, and the upward trend in education and longevity over the past few decades has coincided with an upward trend in polarization, depression, anxiety, social isolation…
Sure, maybe there’s some existential risk to humanity, and maybe the future is massive, but what reason do I have to believe that my eating sushi, or taking public transit, or donating to one charity over another, or reading some book, is actually going to have specific effects? Why wouldn’t the unintended consequences outweigh the intended ones?
It's not just skepticism about the potential size of the future; it's skepticism about the cause-effect relationship being offered by the potential "mugger". Maybe we're 100% doomed and nothing we do will matter, because an asteroid is going to hit us in 50 years and some unlucky astronomical coincidence means we'll never detect it, and all of this is pointless. Maybe some omnipotent deity is watching and will make sure we colonize the galaxy. Maybe research into AI risk will bring about an evil AI. Maybe research into AI risk is pointless because AIs will necessarily be hyperbenevolent due to some law of the universe we have not yet discovered. Maybe a lot of things.
Even with the dedication and careful thought that I have seen many people put into these probability estimates, it always looks to me like there aren't enough variables in the models to be comfortable with any of it. And there are people who don't think about this in quantitative terms at all, who would find even my hypothetical, more comprehensive models inadequate.