This post is half object level, half an experiment with audiopen.ai, a “semicoherent audio monologue ramble → prose” AI program (presumably based on GPT-3.5/4).
In the interest of the latter objective, I’m including 3 mostly-redundant subsections:
A ‘final’ mostly-AI-written text, edited and slightly expanded just enough so that I endorse it in full (though I recognize it’s not amazing or close to optimal)
The raw AI output
The raw transcript
1) Dubious asymmetry argument in WWOTF
In Chapter 9 of his book, What We Owe the Future, Will MacAskill argues that the future holds positive moral value under a total utilitarian perspective. He posits that people generally use resources to achieve what they want—either for themselves or for others—and thus good outcomes are easily explained as the natural consequence of agents deploying resources for their goals. Conversely, bad outcomes tend to be side effects of pursuing other goals. While malevolence and sociopathy do exist, they are empirically rare.
MacAskill argues that in a future with continued economic growth and no existential risk, we will likely direct more resources towards doing good things, due both to self-interest and to increased impartial altruism. He contrasts this eutopian scenario with an anti-eutopia: the worst possible world, which he argues (compellingly, I think) is less probable because it requires convoluted explanations, as opposed to simple desires like enjoying ice cream. He concludes that the high probability of achieving a eutopia outweighs the low-probability but extremely bad anti-eutopia.
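To make the shape of this two-tail comparison concrete, here is a minimal sketch with entirely made-up numbers (mine, not MacAskill’s): even if the anti-eutopia is worse than the eutopia is good, a sufficiently higher probability on the eutopia side leaves the expectation positive.

```python
# Toy numbers, invented purely for illustration: eutopia is judged far
# likelier than anti-eutopia, so the expectation is positive even though
# the anti-eutopia is worse than the eutopia is good.
p_eutopia, v_eutopia = 0.20, +100.0   # plausible and very good
p_anti,    v_anti    = 0.01, -300.0   # rare, but worse than eutopia is good

ev_two_tail = p_eutopia * v_eutopia + p_anti * v_anti
print(ev_two_tail)  # 20.0 - 3.0 = +17.0, positive in expectation
```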
However, I believe MacAskill’s analysis neglects an important aspect: considering not only these two extremes but also the middle of the distribution, where neither significant resource deployment nor agentic intervention occurs.
When physics operates without agency-driven resource allocation, we have good reason to expect evolution to create conscious beings whose suffering outweighs their happiness, owing to the ease with which animals (or animal-like beings) can lose all expected future genetic reproduction, as MacAskill himself argues elsewhere in the book.
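As a toy illustration of that asymmetry (my own sketch with arbitrary numbers, not a model from the book): if small fitness gains are frequent but a single loss can wipe out all future reproduction, and the intensity of pleasure and pain tracks the fitness stakes (a functional view of qualia), the lifetime valence ledger comes out negative on average.

```python
import random

random.seed(0)

# Toy model: a life is a stream of fitness-relevant events. Gains (food,
# mating) are small and frequent; losses can be total (predation), though
# rarer per event. Valence magnitudes are assumed to track fitness stakes.
def lifetime_valence(n_events: int = 100) -> float:
    total = 0.0
    for _ in range(n_events):
        if random.random() < 0.05:  # rare catastrophic event
            total -= 50.0           # pain scaled to losing everything
            break                   # death ends the stream
        total += 1.0                # small pleasure from a small gain
    return total

lives = [lifetime_valence() for _ in range(10_000)]
print(sum(lives) / len(lives))  # comes out around -30 under these numbers
```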
Importantly, though, this non-agentic suffering seems more likely to complement agentic resource deployment than to substitute for it, contrary to what one might intuit. That’s because human or post-human expansion necessarily entails the expansion of concentrated physical energy, and seems likely to entail the expansion of other scarce, pro-biotic resources such as DNA, water, and computation.
Although MacAskill does not explicitly claim that his binary model comparing eutopia and anti-eutopia is sufficient for understanding this complex problem, it seems to me to be implied; only upon attempting to draft a blog post revisiting his work did I notice the line “We can make some progress by focusing on just two extreme scenarios: the best or worst possible futures, eutopia and anti-eutopia,” which acknowledges this model’s status as just one piece of evidence bearing on the larger question of the future’s value.
In sum, I think MacAskill’s analysis would benefit from addressing the morally relevant middle of the distribution, which would provide a more accurate picture of the future’s value under total utilitarianism.
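Returning to the earlier toy numbers: adding a third, middle term shows how a modestly negative but highly probable middle can flip the sign of the overall expectation, even when the two-tail comparison alone looks positive. (Treating the three scenarios as mutually exclusive is itself a simplification; as argued above, non-agentic suffering would partly accompany the agentic scenarios too. All values remain made up for illustration.)

```python
# Same invented tail numbers as before, plus a "middle" scenario: futures
# where few resources are agentically deployed and evolution runs unchecked.
p_eutopia, v_eutopia = 0.20, +100.0
p_anti,    v_anti    = 0.01, -300.0
p_middle,  v_middle  = 0.79, -30.0   # modest but negative: wild-animal suffering

ev_three = p_eutopia * v_eutopia + p_anti * v_anti + p_middle * v_middle
print(ev_three)  # 20.0 - 3.0 - 23.7 = -6.7, the middle flips the sign
```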
2) Utilitarian Utopia, Anti-Utopia and the Neglected Middle
In Chapter 9 of his book, What We Are the Future, Will McCaskill argues that the future holds positive moral value under a total utilitarian perspective. He posits that people generally use resources to achieve what they want—either for themselves or others—and thus good outcomes are often intentional. Conversely, bad outcomes tend to be side effects of pursuing other goals. While malevolence and sociopathy do exist, they are empirically rare.
McCaskill then extrapolates this argument to suggest that in a future with continued economic growth (assuming no existential risk), we will likely direct more resources towards doing good things due to self-interest and increased impartial altruism. He contrasts this utopian scenario with an anti-utopia: the worst possible world which is less probable because it requires convoluted explanations as opposed to simple desires like enjoying ice cream.
He concludes that the probability of achieving a utopia outweighs the low likelihood but extreme negative consequences of an anti-utopia. However, I believe McCaskill’s analysis neglects an important aspect: considering not only these two extremes but also the middle distribution where neither significant amounts of resources nor agentic intervention occur.
In such cases where physics operates without agency-driven resource allocation, evolution can create conscious beings like plants and animals who experience suffering without any intentionality behind it. This middle distribution may actually skew negatively since evolution favors reproductive fitness at any cost; as a result, sentient beings could suffer more than they experience happiness during their lives.
I argue that wild animal welfare is net negative overall; if given a choice between having them continue living or ceasing their existence altogether, I would choose nonexistence on moral grounds. Although McCaskill does not explicitly claim his heuristic comparison between utopia and anti-utopia is sufficient for understanding this complex problem, he strongly implies it throughout most of his chapter.
Upon revisiting his work while drafting my response blog post, I noticed a single line acknowledging the limitations of his approach. However, this caveat does not adequately emphasize that his argument should be considered only as a first pass and not definitive proof. In my opinion, McCaskill’s analysis would benefit from addressing the morally relevant middle distribution to provide a more accurate representation of the future under total utilitarianism.
-------
3) Original Transcript
Okay, so I’m going to describe where I think I disagree with Will McCaskill in Chapter 9 of his book, What We Are the Future, where he basically makes an argument that the future is positive in expectation, positive moral value under a total utilitarian perspective. And so his argument is basically that people, it’s very easy to see that people deploy the resources in order to get what they want, which is either to help themselves and sometimes to help other people, whether it’s just their family or more impartial altruism. Basically you can always explain why somebody does something good just because it’s good and they want it, which is kind of, I think that’s correct and compelling. Whereas when something bad happens, it’s generally the side effect of something else. At least, yeah. So while there is malevolence and true sociopathy, those things are in fact empirically quite rare, but if you undergo a painful procedure, like a medical procedure, it’s because there’s something affirmative that you want and that’s a necessary side effect. It’s not because you actually sought that out in particular. And all this I find true and correct and compelling. And so then he uses this to basically say that in the future, presumably conditional on continued economic growth, which basically just means no existential risk and humans being around, we’ll be employing a lot of resources in the direction of doing things well or doing good. Largely just because people just want good things for themselves and hopefully to some extent because there will be more impartial altruists willing to both trade and to put their own resources in order to help others. And once again, all true, correct, compelling in my opinion. So on the other side, so basically utopia in this sense, utopia basically meaning employing a lot of, the vast majority of resources in the direction of doing good is very likely and very good. On the other side, it’s how likely and how bad is what he calls anti-utopia, which is basically the worst possible world. And he basically using… I don’t need to get into the particulars, but basically I think he presents a compelling argument that in fact it would be worse than the best world is good, at least to the best of our knowledge right now. But it’s very unlikely because it’s hard to see how that comes about. You actually can invent stories, but they get kind of convoluted. And it’s not nearly as simple as, okay, people like ice cream and so they buy ice cream. It’s like, you have to explain why so many resources are being deployed in the direction of doing good things and you still end up with a terrible world. Then he basically says, okay, all things considered, the probability of good utopia wins out relative to the badness, but very low probability of anti-utopia. Again, a world full of misery. And where I think he goes wrong is that he neglects the middle of the distribution where the distribution is ranging from… I don’t know how to formalize this, but something like percentage or amount of… Yeah, one of those two, percentage or amount of resources being deployed in the direction of on one side of the spectrum causing misery and then the other side of the spectrum causing good things to come about. And so he basically considers the two extreme cases. But I claim that, in fact, the middle of the distribution is super important. 
And actually when you include that, things look significantly worse because the middle of the distribution is basically like, what does the world look like when you don’t have agents essentially deploying resources in the direction of anything? You just have the universe doing its thing. We can set aside the metaphysics or physics technicalities of where that becomes problematic. Anyway, so basically the middle of the distribution is just universe doing its thing, physics operating. I think there’s the one phenomenon that results from this that we know of to be morally important or we have good reason to believe is morally important is basically evolution creating conscious beings that are not agentic in the sense that I care about now, but basically like plants and animals. And presumably I think you have good reason to believe animals are sentient. And evolution, I claim, creates a lot of suffering. And so you look at the middle of the distribution and it’s not merely asymmetrical, but it’s asymmetrical in the opposite direction. So I claim that if you don’t have anything, if you don’t have lots of resources being deployed in any direction, this is a bad world because you can expect evolution to create a lot of suffering. The reason for that is, as he gets into, something like either suffering is intrinsically more important, which I put some weight on that. It’s not exactly clear how to distinguish that from the empirical case. And the empirical case is basically it’s very easy to lose all your reproductive fitness in the evolutionary world very quickly. It’s relatively hard to massively gain a ton. Reproduction is like, even having sex, for example, only increases your relative reproductive success a little bit, whereas you can be killed in an instant. And so this creates an asymmetry where if you buy a functional view of qualia, then it results in there being an asymmetry where animals are just probably going to experience more pain over their lives, by and large, than happiness. And I think this is definitely true. I think wild animal welfare is just net negative. I wish if I could just… If these are the only two options, have there not be any wild animals or have them continue living as they are, I think it would be overwhelmingly morally important to not have them exist anymore. And so tying things back. Yeah, so McCaskill doesn’t actually… I don’t think he makes a formally incorrect statement. He just strongly implies that this case, that his heuristic of comparing the two tails is a pretty good proxy for the best we can do. And that’s where I disagree. I think there’s actually one line in the chapter where he basically says, we can get a grip on this very hard problem by doing the following. But I only noticed that when I went back to start writing a blog post. And the vast majority of the chapter is basically just the object level argument or evidence presentation. There’s no repetition emphasizing that this is a really, I guess, sketchy, for lack of a better word, dubious case. Or first pass, I guess, is a better way of putting it. This is just a first pass, don’t put too much weight on this. That’s not how it comes across, at least in my opinion, to the typical reader. And yeah, I think that’s everything.
I think there’s a case to be made for exploring the wide range of mediocre outcomes the world could become.
Recent history would indicate that things are getting better faster though. I think MacAskill’s bias towards a range of positive future outcomes is justified, but I think you agree too.
Maybe you could turn this into a call for more research into the causes of mediocre value lock-in. Like why have we had periods of growth and collapse, why do some regions regress, what tools can society use to protect against sinusoidal growth rates.