Tons of useful comments as usual. Thanks for taking the time to look at everything.
I agree with most of your comments and I’ll change the spreadsheet accordingly.
I do have one concern with several of the claims you make. I mostly agree that the claims seem reasonable, but I’m skeptical of putting too high a confidence on claims like “there won’t be biological beings in the far future.” I’m familiar with the arguments, but I don’t believe we can make particularly strong claims about civilization’s technological capabilities 1000 years from now. Maybe it would be reasonable to say with 90% confidence that far-future humans won’t be made of biology, but I wouldn’t use a probability much higher than that.
Really, people would be 1/5th as likely to try to make the lowest possible total welfare as the highest possible welfare?
Okay, that’s probably too high. I do think it’s extremely unlikely that we’ll end up with dolorium because normal humans decide to, but there’s a non-astronomically-small chance that we’ll end up with a malevolent AI. I reduced the probability difference from 1/5 to 1/50.
And David Pearce-style scenarios get 0.
What is a David Pearce-style scenario?
AI developers caring about animals prevents factory farming with probability 1 (which you earlier gave an unconditional probability of 0.3)
The effect of factory farming on the utility of the far future is weighted by the probability that it exists, so the spreadsheet does this correctly given that you accept the inputs.
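To make the weighting concrete, here is a minimal sketch of how a term like this could be computed. This isn't the spreadsheet's actual structure; every name and number below is a placeholder, except that the "prevents it with probability 1" assumption comes from the point quoted above, and the conditional probabilities are made up but chosen so the unconditional probability comes out to the 0.3 figure mentioned there.

```python
# Minimal sketch (not the actual spreadsheet): the factory-farming term
# enters the far-future total weighted by the probability that factory
# farming exists at all.

p_devs_care = 0.5         # placeholder: P(AI developers care about animals)
p_ff_if_devs_care = 0.0   # the "prevents it with probability 1" assumption
p_ff_if_devs_dont = 0.6   # placeholder, chosen so the unconditional comes out to 0.3

# Probability that factory farming exists in the far future
p_factory_farming = (p_devs_care * p_ff_if_devs_care
                     + (1 - p_devs_care) * p_ff_if_devs_dont)

welfare_if_it_exists = -1.0   # placeholder disutility in arbitrary units
weighted_term = p_factory_farming * welfare_if_it_exists

print(p_factory_farming, weighted_term)  # 0.3, -0.3 with these placeholders
```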
I do have one concern with several of the claims you make. I mostly agree that the claims seem reasonable, but I’m skeptical of putting too high a confidence on claims like “there won’t be biological beings in the far future.” I’m familiar with the arguments, but I don’t believe we can make particularly strong claims about civilization’s technological capabilities 1000 years from now. Maybe it would be reasonable to say with 90% confidence that far-future humans won’t be made of biology, but I wouldn’t use a probability much higher than that.
“There won’t be biological beings in the future” isn’t of central importance in your spreadsheet. What is dubious is a divide between 40% probability of a widely colonized universe where almost all the energy goes unused and there is no appreciable contribution of machine intelligence (since if even a tiny fraction of said energy went to such it would be more numerous than the numbers you put in for biological life), vs all machine intelligence and no biological life.
A world with biological life and orders of magnitude more machine life isn’t a possibility in the Global sheet, but looks a lot more likely than all biological life across many galaxies with no machine life.
There is a 0.2 probability given to ‘civilization stays on Earth.’ 0.4 for galactic colonization combined with absurdly primitive food production and no creation of machine intelligence is a lot. You’re talking about an incredibly advanced technological base.
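A rough numerical sketch of the structural point: the 0.2 and 0.4 scenario probabilities below are the ones discussed above (with the remainder inferred), but every population and energy figure is a made-up placeholder rather than a value from the sheet.

```python
# Illustrative only: scenario probabilities from the discussion above,
# with the remaining mass inferred; all population and energy numbers
# are placeholders, not values from the spreadsheet.

scenario_probs = {
    "civilization stays on Earth": 0.2,
    "galactic colonization, biological life, no machine intelligence": 0.4,
    "all machine intelligence, no biological life": 0.4,  # inferred remainder
    # note: no mixed "biological life plus far more machine life" scenario
}
assert abs(sum(scenario_probs.values()) - 1.0) < 1e-9

# Why a mixed world dwarfs the biological-only numbers: even a tiny
# fraction of a colonized civilization's energy running machine minds
# yields far more minds than the biological population.
biological_minds = 1e20            # placeholder biological population
machine_minds_per_energy = 1e6     # placeholder: machine minds per energy unit
total_energy_units = 1e22          # placeholder energy budget
tiny_fraction_to_machines = 1e-6

machine_minds = tiny_fraction_to_machines * total_energy_units * machine_minds_per_energy
print(machine_minds / biological_minds)  # 100x with these placeholders; easily far more
```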
What is a David Pearce-style scenario?
Engineering wild ecologies to have high welfare, with things like contraception-regulated populations, increased pleasure and reduced pain.
Okay, that’s probably too high. I do think it’s extremely unlikely that we’ll end up with dolorium because normal humans decide to, but there’s a non-astronomically-small chance that we’ll end up with a malevolent AI.
By which you mean an actively malevolent AI, like one designed to make the world as bad as possible by utilitarian lights? Some kind of sign-flip in a utility function switching from ‘maximize good’ to ‘maximize evil’? Or that an AI indifferent to human concerns (a wireheader, or paperclipper, or digits-of-pi calculator) would have instrumental reason to make the worst possible world by utilitarian lights?
What is dubious is a divide between 40% probability of a widely colonized universe where almost all the energy goes unused and there is no appreciable contribution of machine intelligence (since if even a tiny fraction of said energy went to such it would be more numerous than the numbers you put in for biological life), vs all machine intelligence and no biological life.
Well, it looks like you are right yet again.
By which you mean [...]?
I believe the most likely possibilities are (1) a sign-flipped utility function and (2) people designing an AI for the purpose of conquering other nations or something similarly malevolent-ish, with those goals ending up causing the AI to maximize dolorium. These possibilities do seem pretty remote, though.
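Purely as a toy illustration of the sign-flip failure mode in (1), with made-up welfare numbers and no claim about how a real system would be implemented:

```python
# Toy example: one stray negation turns a welfare maximizer into a
# welfare minimizer that steers toward the worst outcome it can find.

def intended_utility(total_welfare: float) -> float:
    return total_welfare            # "maximize good"

def sign_flipped_utility(total_welfare: float) -> float:
    return -total_welfare           # same optimizer now rewards "maximize evil"

# Placeholder outcomes with made-up welfare values
worlds = {"flourishing": 1e9, "empty": 0.0, "dolorium": -1e9}

chosen_intended = max(worlds, key=lambda w: intended_utility(worlds[w]))
chosen_flipped = max(worlds, key=lambda w: sign_flipped_utility(worlds[w]))
print(chosen_intended, chosen_flipped)  # flourishing dolorium
```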