Wow thanks for your long and thoughtful reply. I really do appreciate your thinking and I’m glad CU is working for you and you’re happy with it...that is a good thing.
I do think you’ve given me a little boost in my argument against CU unfortunately, though, in the idea that our brain just doesn’t have enough compute. There was a post a while back from a well-known EA about their long experience starting orgs and “doing EA stuff” and how the lesson they’d taken from it all is that there are just too many unknown variables in life for anything we try to build and plan outcomes for to really work out how we hoped...it’s a lot of shots in the dark and sometimes you hit. That is similar to my experience as well...and the reason is we just don’t have enough data nor enough compute to process it all...nor adequate points or spectrums of input. The thing that better fits in that kind of category is a robot that, with an AI mind, can do far more compute...but even they are challenged. So for me that’s another good reason against CU optimizing well for humans.
And the other big thing I haven’t mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic and you could almost say quantum in how we are connected to other people...living with it is vital for happiness...I think the attraction of CU is that it adds to us the logic side that our inner life doesn’t always have...and so the answer is to live with both together...to use CU thinking for the effective things it does, but also to realize where it is very ineffective toward human thriving...and so that may be similar to the differences you see between naive and mature CU. Maybe that’s how we synthesize our two views.
How I would apply this to the Original Post here is that we should see “the gaping hole where the art should be” in EA as evidence of a bug in EA that we should seek to fix. I personally hope that, as we turn this corner toward a third wave, we will include that on the list of priorities.
Well, okay. I’ve argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if you’re Christian, you may be thinking “I have my Rock,” so you feel no need for another.
If you want to criticize utilitarianism itself, you would have to say that the goal of maximizing well-being should be constrained by, or subordinated to, other principles or rules, such as requirements of honesty, glorifying God, etc.
You could do this, but you’d be arguing axiomatically. A claim like “my axioms are above those of utilitarians!” would just be a bare assertion with nothing to back it up. As I mentioned, I have axioms too, but only the bare minimum necessary, because axioms are unprovable, and years of reflection led me to reject all unnecessary axioms.
You could say something like this: the production of art/beauty is intrinsically valuable apart from the well-being it produces, and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and captures only the instrumental value).
The most important thing to realize is that the choice of which things have intrinsic value lies outside consequentialism. A consequentialist could indeed adopt an axiom that “art is intrinsically valuable”. Calling it “utilitarian” feels like nonstandard terminology, but such a value assignment seems utilitarian-adjacent unless you treat it merely as a virtue or rule rather than as a goal you seek after.
Note, however, that beauty doesn’t exist apart from an observer to view it, which is part of the reason I think this choice would be a mistake. Imagine a completely dead universe: no people, no life, no souls/God/heaven/hell, and no chance that life will ever arise; just an endless void pockmarked by black holes. Suppose there is art drifting through the void (perhaps Boltzmann art, by analogy to a Boltzmann brain). Does it have value? I say it does not. But if, somewhere in that endless void, a billion light years beyond the light cone of this art that can never be seen, there were one solitary civilization left alive, I would argue that this civilization’s art is infinitely more valuable. More pointedly, I would say that it is the experience of art that is valuable and not the art itself; that is, art is instrumentally valuable, not intrinsically valuable.

Thus a great work of art viewed by a million people has delivered 100 times as much value as the same art seen by only 10,000 people, though one should take into account countervailing factors, such as the fact that the first 10,000 who see it are more likely to be connoisseurs who appreciate it a lot, and the fact that any art you experience takes time away from other art you might have experienced. For example, I wish I could listen to more EA and geeky songs (as long as they have beautiful melodies), but lacking those, I still enjoy hearing nice music that isn’t tailored as closely to my tastes. Thus EA art is less valuable in a world that is already full of art. But EA art could still be instrumentally valuable, both in the experiences it creates (experiences with intrinsic value) and in its tendency to make the EA movement healthy and growing.
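To make that audience-size arithmetic concrete, here is a minimal toy sketch in Python. It is purely illustrative and entirely my own construction (the function, the utility numbers, and the connoisseur cutoff are all assumptions, not anything from utilitarian literature): value is modeled as the sum of viewing experiences, with the first viewers assumed to be connoisseurs who get more out of the work.

```python
# Toy model (illustrative assumptions only): an artwork's value is the sum of
# the value of each viewing experience. The first viewers are assumed to be
# connoisseurs who appreciate the work more than later, casual viewers.

def artwork_value(viewers: int,
                  connoisseur_count: int = 10_000,
                  connoisseur_utility: float = 5.0,
                  casual_utility: float = 1.0) -> float:
    """Total experiential value delivered to `viewers` people."""
    connoisseurs = min(viewers, connoisseur_count)
    casual = max(viewers - connoisseur_count, 0)
    return connoisseurs * connoisseur_utility + casual * casual_utility

small_audience = artwork_value(10_000)     # 10,000 * 5 = 50,000
large_audience = artwork_value(1_000_000)  # 50,000 + 990,000 * 1 = 1,040,000

# With identical per-viewer value the ratio would be exactly 100x; with the
# connoisseur adjustment it shrinks to roughly 21x, illustrating the
# "countervailing factors" mentioned above.
print(large_audience / small_audience)  # ~20.8
```

The exact numbers are arbitrary; the only point is that, on this view, value scales with experiences of the art rather than with the artwork itself.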
So, to be clear, I don’t see a bug in utilitarianism; I see the other views as the ones with bugs. This is simply because I see no flaws in my moral system, but I do see flaws in other systems. There are, of course, flaws in myself, as I illustrate below.
And the other big thing I haven’t mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic and you could almost say quantum in how we are connected to other people...living with it is vital for happiness
I think it’s important to understand and accept that humans cannot be maximally moral; we are all flawed. And this is not a controversial statement to a Christian, right? We can be flawed even in our own efforts to act morally!

I’ll give an example from last year, when Russia invaded Ukraine. I was suddenly and deeply interested in helping, seeing that a rapid response was needed, but other EAs weren’t nearly as interested as I was. I would’ve argued that although Ukrainian lives are less cost-effective to save than African lives, there was a meaningful longtermist angle: Russia was tipping the global balance of power from democracy to dictatorship, and if we don’t respond strongly against Putin, Xi Jinping could be emboldened to invade Taiwan; in the long term, this could lead to tyranny taking over the world (EA relies on freedom to work, so this is a threat to us). Yet I didn’t make this argument to EAs; I kept it to myself (perhaps for the same reason I waited ~6 months to publish this: I was afraid EAs wouldn’t care). I ended up thinking deeply about the war: about what it might be like as a civilian on the front lines, about what kinds of things would help Ukraine most on a small-ish budget, about how the best Russians weren’t getting the support they deserved, and about how events might play out (which turned into a hobby of learning and forecasting, and the horrors of war I’ve seen are like...holy shit, did I burn out my empathy unit?). But not a single EA organization offered any way to help Ukraine, and I was left looking for other ways to help. So I ended up giving $2000 CAD to Ukraine-related causes, half of which went to Ripley’s Heroes, which turned out to be a (probably, mostly) fraudulent organization. Not my best EA moment!

From a CU perspective, I performed badly. I f**ked up. I should’ve been able to look at the situation and accept that there was no way I could give military aid effectively with the information I had. I certainly knew that military aid was far from an ideal intervention and that there were probably better interventions; I just didn’t have access to them, AFAIK. The correct course of action was not to donate to Ukraine (understanding that some people could help effectively, just not me), but emotionally I couldn’t accept that. But you know, I have no doubt that it was myself that was flawed and not my moral system; CU’s name be praised! 😉 Also, I don’t really feel guilty about it; I just think “well, I’m human, I’ll make some mistakes and no one’s judging me anyway, hopefully I’ll do better next time.”
In sum: humans can’t meet the ideals of (M)CU, but that doesn’t mean (M)CU isn’t the correct standard by which to make and evaluate choices. There is no better standard. And again, the Christian view is similar, just with a different axiomatic foundation.
Edit: P.S. a relevant bit of the Consequentialism FAQ:

5.6: Isn’t utilitarianism hostile to music and art and nature and maybe love?
No. Some people seem to think this, but it doesn’t make a whole lot of sense. If a world with music and art and nature and love is better than a world without them (and everyone seems to agree that it is) and if they make people happy (and everyone seems to agree that they do) then of course utilitarians will support these things.
There’s a more comprehensive treatment of this objection in 7.8 below.