Good work putting all of this effort into the topic and sharing your thinking for others to read and comment on.
I have many issues with the model, some of which I have mentioned under your previous posts (e.g. re the prior, the inconsistent treatment of different kinds of evidence, only some of which get discounted by the prior). Here I’ll mention some of the things that jumped out at me regarding your parameter values in the spreadsheet.
Are you assuming that you have good reasons to believe in particular values, but that the actions of society over millions of years (which will affect many of the other variables) won’t be responsive to those same reasons?
hedonium...utility...1000...for same energy requirements as human brain
In your spreadsheet the physically optimal structure for producing well-being is much less efficient at doing so than insects, even after your sentience adjustment, rather than orders of magnitude better. You have insects with an absolute value of utility of 4 multiplied by a sentience adjustment of 0.01, giving 0.04. For hedonium and dolorium you have 1000 and 1, giving a 25,000:1 ratio of utility per human-brain-energy-requirement worth of optimized computronium to utility per insect.
Human brains have somewhat fewer than 100 billion neurons, while relatively huge-brained insects like honeybees approach a million, and the vast bulk of insects have far fewer. So just taking neuron count would give a factor of a million (and the computronium can be subdivided however you like), before taking into account the difference between typical insect existence and continuous peak experience, or the room for tremendously greater computational efficiency than the brain (super-cooling, reversible computing, reallocating neural machinery).
So this number is orders of magnitude off on your premises.
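To make the arithmetic explicit, here is the comparison as a quick sketch (the utilities and sentience adjustment are the spreadsheet values quoted above; the neuron counts are round-number approximations of my own):

```python
# Ratio implied by the spreadsheet's inputs.
insect_utility = 4            # absolute value of utility per insect
sentience_adjustment = 0.01   # insect sentience discount
hedonium_utility = 1000       # per human-brain-energy-equivalent of computronium

adjusted_insect_utility = insect_utility * sentience_adjustment  # 0.04
implied_ratio = hedonium_utility / adjusted_insect_utility       # 25,000:1

# Ratio suggested by neuron counts alone (round numbers, before counting
# continuous peak experience or computational-efficiency gains).
human_neurons = 1e11    # somewhat under 100 billion in reality
insect_neurons = 1e5    # generous: honeybees approach 1e6, most insects far fewer
neuron_ratio = human_neurons / insect_neurons                    # 1,000,000:1

print(implied_ratio, neuron_ratio)
```

So the spreadsheet's implied ratio falls short of even the bare neuron-count ratio, before any of the other considerations.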
hedonium...dolorium...0.05...0.01
Really, people would be 1/5th as likely to try and make the lowest possible total welfare as the highest possible welfare? People have rather different attitudes about those two states. What kind of poll results do you think you would get if you asked people “should we create as much suffering as possible, in inhuman beings optimized to maximize ethical badness according to some brand of utilitarianism, or should we instead try to maximize happiness?”
So you give a lot of credence to a civilization of human-like computer minds, but their expected well-being is only that of wealthy humans today? This when they don’t need to suffer from illness, disease, hunger, fatigue, etc.? When they can have tremendous experiences for cheap (virtual versions of things are cheap), reliably high mood, modifiable drives, abundant beauty, access to immense knowledge (including knowledge of how to live well), etc.? In a world of super-advanced technology?
Seems low. If things are being optimized for efficiency and brutal competition then you don’t get ems for long, and otherwise you get a world shaped by the desires of the people in it.
fill universe with biology...0.4
Human brains, compared to physical limits, are much less able to deal with temperatures of a few Kelvin, far less energy-efficient, far more subject to injury and death, and otherwise limited. They are also far, far more difficult to transport across the interstellar gulfs, and have far lower reproductive potential than AI. Most of the habitable scope of the cosmos and potential for computation is inhospitable to human brains but friendly to AI.
We have factory farming...0.3
Galaxies of factory farming, with that level of technology? People going tremendously out of their way to get nutrition and taste that could be had far more cheaply without inflicting pain on a cosmic scale? I find this credence bizarrely high. And it’s combined with credence 0 for lots of happy pets, or even superhappy agriculture.
We spread WAS...0.4
And David Pearce-style scenarios get 0.
humans per star...10^11
If there are so few humans per star there is going to be a tremendous amount of computer intelligence per human or any animals on planetary habitats. In the Solar System today life captures far less than a millionth of solar output.
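For what it’s worth, the “far less than a millionth” figure checks out on standard estimates (these constants are my own rough figures, not from the spreadsheet):

```python
# Rough check: fraction of total solar output captured by Earth's biosphere.
solar_luminosity_w = 3.8e26    # total solar output, watts
photosynthesis_w = 1.3e14      # global photosynthetic capture, watts (rough estimate)

fraction = photosynthesis_w / solar_luminosity_w
print(fraction)  # on the order of 1e-13, far below one millionth
```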
GiveDirectly...probability of extinction...given end of poverty
Why think that reduced poverty (with better education, health, and slower population growth as consequences, not to mention potential for better international relations) is more likely to harm than help the long-run?
Estimated by doubling the 80% CI for online ads
This gets you into falsifiable-with-the-naked-eye territory, predicting millions of vegetarians (and majority conversion in the highly targeted demographics) based on past spending.
veg-years...stop factory farming
Is this accounting for the likelihood of factory farming ending otherwise?
AI researcher multiplicative effect...1...3...size of community when AI is created...200...10,000
You have the total research community going up in size many, many times, but very little impact of current early field-building on that later size. Is this supposed to reflect that it is only 1 year of work?
AI developers caring about animals prevents factory farming with probability 1 (which you earlier gave an unconditional probability of 0.3), but prevents wild vertebrate animal suffering 0.1-0.4, insects 0.05-0.2. Why the discrepancies? Why do these people have absolute power over factory farming (rather than governments, corporate superiors, the general public, etc)? Do you really see it as twice as likely that people comprehensively intervene in nature to help or exterminate every little frog or fish in the world as to do the same to insects?
If the action is not building Earthlike ecologies all over the place, or displacing nature with artificial structures, then one and the same action suffices. If you’re doing comprehensive genetic and robotic interventions in a rich futuristic society, it won’t cost much more to throw in the insects with the frogs and tiny fish.
dolorium scenarios prevented
In your implicit account of hedonium and dolorium here it’s made up of micro-minds (rather than mega-minds, and other alternatives). That’s why you rate it more highly than big brain emulations. And you doubt that people would bother to create hedonium of your current favored guess variety because you think they won’t care about tiny minds.
But if people don’t care about tiny minds so they are uninterested in making optimized happy ones, why would they care about making optimized unhappy ones? Conditional on the (weird) premise of people trying to make the world as bad as possible by utilitarian lights, people being concerned about small minds makes things worse, since when they try to make things bad they will do so orders of magnitude more efficiently on your premises.
So why think that expected dolorium probability goes down more than hedonium goes up from increased concern for small minds, rather than both hedonium and dolorium expectations increasing?
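A minimal sketch of the structure of this question, using the spreadsheet’s quoted probabilities (0.05 and 0.01) and assuming symmetric utility magnitudes; the delta is invented purely for illustration:

```python
# If concern for small minds raises BOTH probabilities, expected value need
# not improve: with symmetric magnitudes and equal deltas it is unchanged.
U_HEDONIUM, U_DOLORIUM = 1000, -1000   # assumed symmetric magnitudes
p_hedonium, p_dolorium = 0.05, 0.01    # spreadsheet probabilities

def expected_value(p_h, p_d):
    return p_h * U_HEDONIUM + p_d * U_DOLORIUM

baseline = expected_value(p_hedonium, p_dolorium)
delta = 0.02  # invented shift from increased concern for small minds
shifted = expected_value(p_hedonium + delta, p_dolorium + delta)
print(baseline, shifted)  # equal: the shift cancels under these assumptions
```

The claim that concern for small minds is net negative requires p(dolorium) to fall by more than p(hedonium) rises, which is the asymmetry being questioned.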
Tons of useful comments as usual. Thanks for taking the time to look at everything.
I agree with most of your comments and I’ll change the spreadsheet accordingly.
I do have one concern with several of the claims you make. I mostly agree that the claims seem reasonable, but I’m skeptical of putting too high a confidence on claims like “there won’t be biological beings in the far future.” I’m familiar with the arguments, but I don’t believe we can make particularly strong claims about civilization’s technological capabilities 1000 years from now. Maybe it would be reasonable to say with 90% confidence that far-future humans won’t be made of biology, but I wouldn’t use a probability much higher than that.
Really, people would be 1/5th as likely to try and make the lowest possible total welfare as the highest possibe welfare?
Okay, that’s probably too high. I do think it’s extremely unlikely that we’ll end up with dolorium because normal humans decide to, but there’s a non-astronomically-small chance that we’ll end up with a malevolent AI. I reduced the probability difference from 1/5 to 1/50.
And David Pearce-style scenarios get 0.
What is a David Pearce-style scenario?
AI developers caring about animals prevents factory farming with probability 1 (which you earlier gave an unconditional probability of 0.3)
The effect of factory farming on the utility of the far future is weighted by the probability that it exists, so the spreadsheet does this correctly given that you accept the inputs.
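In other words, a toy sketch of the weighting (only the 0.3 is from the spreadsheet; the disutility magnitude and function are mine, for illustration only):

```python
# Far-future disutility of factory farming enters the total scaled by the
# probability that factory farming exists at all.
def weighted_disutility(p_exists, disutility_if_exists):
    return p_exists * disutility_if_exists

p_factory_farming = 0.3   # unconditional probability from the spreadsheet
disutility = 100          # hypothetical magnitude, for illustration only
print(weighted_disutility(p_factory_farming, disutility))
```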
I do have one concern with several of the claims you make. I mostly agree that the claims seem reasonable, but I’m skeptical of putting too high a confidence on claims like “there won’t be biological beings in the far future.” I’m familiar with the arguments, but I don’t believe we can make particularly strong claims about civilization’s technological capabilities 1000 years from now. Maybe it would be reasonable to say with 90% confidence that far-future humans won’t be made of biology, but I wouldn’t use a probability much higher than that.
“There won’t be biological beings in the future” isn’t of central importance in your spreadsheet. What is dubious is a divide between 40% probability of a widely colonized universe where almost all the energy goes unused and there is no appreciable contribution of machine intelligence (since if even a tiny fraction of said energy went to such it would be more numerous than the numbers you put in for biological life), vs all machine intelligence and no biological life.
A world with biological life and orders of magnitude more machine life isn’t a possibility in the Global sheet, but looks a lot more likely than all biological life across many galaxies with no machine life.
There is a 0.2 probability given to ‘civilization stays on Earth.’ 0.4 for galactic colonization combined with absurdly primitive food production and no creation of machine intelligence is a lot. You’re talking about an incredibly advanced technological base.
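To put the bookkeeping concretely (only the 0.2 and 0.4 are from the Global sheet; the layout is mine):

```python
# The quoted scenario probabilities leave no slot for a mixed
# biological-plus-machine world; any credence for it has to come out of the
# existing buckets, since scenario probabilities must sum to 1.
scenarios = {
    "civilization stays on Earth": 0.2,
    "colonization, all biological, no machine intelligence": 0.4,
}
remaining = 1 - sum(scenarios.values())
print(remaining)  # mass left for machine-intelligence (and any mixed) scenarios
```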
What is a David Pearce-style scenario?
Engineering wild ecologies to have high welfare, with things like contraception-regulated populations, increased pleasure and reduced pain.
Okay, that’s probably too high. I do think it’s extremely unlikely that we’ll end up with dolorium because normal humans decide to, but there’s a non-astronomically-small chance that we’ll end up with a malevolent AI.
By which you mean an actively malevolent AI, like one designed to make the world as bad as possible by utilitarian lights? Some kind of sign-flip in a utility function switching from ‘maximize good’ to ‘maximize evil’? Or that an AI indifferent to human concerns (a wireheader, or paperclipper, or digits-of-pi calculator) would have instrumental reason to make the worst possible world by utilitarian lights?
What is dubious is a divide between 40% probability of a widely colonized universe where almost all the energy goes unused and there is no appreciable contribution of machine intelligence (since if even a tiny fraction of said energy went to such it would be more numerous than the numbers you put in for biological life), vs all machine intelligence and no biological life.
Well it looks like you are right yet again.
By which you mean [...]?
I believe the most likely possibilities are (1) a sign-flip utility function and (2) people design an AI with the purpose of conquering other nations or something malevolent-ish like that, and these goals end up causing the AI to maximize dolorium. These possibilities do seem pretty remote though.
I agree Carl—I’d say advocating for or denying the (overwhelming) importance of hedonism are both arguable positions but expecting it to be of a similar level of importance to other lives is wrongest of all.
The point is stronger: if you posit the most efficient arrangement of matter for producing welfare is less efficient than a bunch of animal brains that you could be using as your arrangement, then you get a contradiction.
Yeah that was pretty clearly a mistake.
I also really think that you need to address the super-strong empirical claims implicit in your prior (held fixed, with no updating on evidence that it’s wrong, and with no mixture of other models) at the tails. I’ve added to threads under your previous post on priors, with links to other discussions.
Why think that reduced poverty (with better education, health, and slower population growth as consequences, not to mention potential for better international relations) is more likely to harm than help the long-run?
I don’t think it’s obviously going to harm more than help, but it could harm, based on accelerating greenhouse gas emissions and worsened international relations (Thucydides’ traps and instability of existing hegemonic structures).