If you believe the far future is net negative in expectation, then reducing existential risk necessarily increases quality risk. In this essay I list some reasons why the far future might be net negative:
We sustain or worsen wild animal suffering on earth.
We colonize other planets and fill them with wild animals whose lives are not worth living.
We create lots of computer simulations of extremely unhappy beings.
We create an AI with evil values that creates lots of suffering on purpose. (But this seems highly unlikely.)
In the essay I discuss how likely I think these scenarios are.
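To make the opening claim concrete, here is a minimal expected-value sketch with made-up numbers (the figures are illustrative assumptions, not from the essay). Write V for the value of the far future, let s be the probability that we avoid existential catastrophe, and assign extinction a value of 0:

\[
\mathbb{E}[V] = s \cdot \mathbb{E}[V \mid \text{survival}] + (1 - s) \cdot 0
\]
\[
\text{If } \mathbb{E}[V \mid \text{survival}] = -100, \text{ then raising } s \text{ from } 0.5 \text{ to } 0.6 \text{ moves } \mathbb{E}[V] \text{ from } -50 \text{ to } -60.
\]

On this reading, whenever the expected value conditional on survival is negative, any reduction in existential risk mechanically increases the expected amount of bad-quality future, which is the sense in which it increases quality risk.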
In your essay you place a lot of weight on other people’s opinions. I wonder: if for some reason you decided to disregard everyone else’s opinion, do you know whether you would reach a different conclusion?
My probabilities would be somewhat different, yes. I originally wrote “I’d give about a 60% probability that the far future is net positive, and I’m about 70% confident that the expected value of the far future is net positive.” If I didn’t care about other people’s opinions, I’d probably revise this to something like 50%/60%.
It seems to me that the most plausible future scenario is that we continue doing what we’ve been doing, the dominant effect of which is that we sustain wild animal populations whose lives are probably net negative. I’ve heard people give arguments for why we shouldn’t expect this, but I’m generally wary of arguments of the form “the world will look like this 1000 years from now, even though it has never looked like this before and hardly anybody expects this to happen,” which is the type of argument used to claim that wild animal suffering won’t be a problem in the far future.
I believe most people are overconfident in their predictions about what the far future will look like (and, in particular, about how much the far future will be dominated by wild animal suffering and/or suffering simulations). But the fact that pretty much everyone I’ve talked to expects the far future to be net positive does push me in that direction, especially people like Carl Shulman and Brian Tomasik* who seem to think exceptionally clearly and level-headedly.
*This isn’t exactly what Brian believes; see here.
Okay. Do you see any proxies (besides other people’s views) that, if they changed in our lifetime, might shift your estimates one way or the other?
Off the top of my head:
We develop strong AI.
There are strong signals that we would/wouldn’t be able to encode good values in an AI.
Powerful people’s values shift more toward/away from caring about non-human animals (including wild animals) or sentient simulations of non-human minds.
I hear a good argument that I hadn’t already heard or thought of. (I consider this pretty likely, given how little total thought has gone into these questions.)