There is a difference between cost-effectiveness, the methodology, and utilitarianism or other impartial philosophies.
You could just as easily use cost-effectiveness for personal daily goals, and some people do, for things such as health and fitness, but generally speaking our minds and society happen to be sufficiently well-adapted to let us achieve these goals without needing to think about cost-effectiveness. Even if we are only concerned with the global good, it’s not worthwhile or effective to have explicit cost-effectiveness evaluation of everything in our daily lives, though that shouldn’t stop us from being ready and willing to use it where appropriate.
Conversely, you could pursue the global good without explicitly thinking about cost-effectiveness even in domains like charity evaluation, but the prevailing view in EA is (rightfully) that this would be a bad idea.
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic. Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results. Assuming anti-realism, I don’t have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude. I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that’s not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don’t think in literal cost-effectiveness terms, but global benefits are still my general goal. It’s not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.
It’s important to remember that having parochial attitudes towards some things in your own life doesn’t necessarily justify attempts to spread analogous attitudes among other people.
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic.
Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results.
Assuming anti-realism, I don’t have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude.
I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that’s not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don’t think in literal cost-effectiveness terms, but global benefits are still my general goal. It’s not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.
(I broke the quoted text into more paragraphs so that I could parse it more easily. I’m thinking about a reply – the questions you’re posing here do definitely deserve a serious response. I have some sense that people have already written the response somewhere – Minding Our Way by Nate Soares comes close, although I don’t think he addresses the “what if there actually exist moral obligations?” question, instead assuming mostly non-moral-realism)
It’s not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.
On a different note though:
I actually agree with this claim, but it’s a weirder claim.
People used to have real communities. And engaging with them was actually a part of being emotionally healthy.
Now, we live in an atomized society where community mostly doesn’t exist, or is a pale shadow of its former self. So there exist a lot of people who donate to the local arts club or whatever out of a vague sense of obligation rather than because it’s actually helping them be healthy.
And yes, that should be challenged. But not because those people should instead be donating to the global good (although maybe they should consider that). Rather, those people should figure out how to actually be healthy, actually have a community, and make sure to support those things so they can continue to exist.
Sometimes this does mean a local arts program, or dance community, or whatever. If that’s something you’re actually getting value from.
The rationalist community (and to a lesser extent the EA community) has succeeded in being, well, more of a “real community” than most things manage to be. So there are times when I want to support projects within them, not from the greater-good standpoint, but from the “I want to live in a world with nice things, this is a nice thing” standpoint. (More thoughts here in my Thoughts on the REACH Patreon article)
I feel that my folk dance community is a pretty solidly real one—people help each other move, etc. The duration is reassuring to me—the community has been in roughly its current form since the 1970s, so folk dancers my age are attending each other’s weddings and baby showers but we eventually expect to attend each other’s funerals. But I agree that a lot of community institutions aren’t that solid.
I recently chatted with someone who said they’ve been part of ~5 communities over their life, and that all but one of them was more “real community” like than the rationalists. So maybe there’s plenty of good stuff out there and I’ve just somehow filtered it out of my life.
The “real communities” I’ve been part of are mostly longer-established, intergenerational ones. I think starting a community with almost entirely 20-somethings is a hard place to start from. Of course most communities started like that, but not all of them make it to being intergenerational.
Over the years I saw what seemed like potential communities (soccer club, improv comedy club, local Toastmasters), but I was afraid… to be myself, of being judged, of making a fool of myself, worried about being liked… so I passed. Here I am now in EA, giving it a shot. I may go to the improv comedy meetings soon. According to Hari’s “Lost Connections”, finding a community is very important; we are social animals and don’t do well in loneliness.
Meanwhile, my previously written thoughts on this topic, not quite addressing your claims but covering a lot of related issues, are here. Crossposting for ease of reference, with the warning that it includes some weird references that may not be relevant.
Context: Responding to Zvi Mowshowitz who is arguing to be wary of organizations/movements/philosophies that encourage you to give them all your resources (even your favorite political cause, yes, yours, yes, even effective altruism)
Point A: The Sane Response to The World Being On Fire (While Human)
Myself, and most EA folk I talk to extensively (including all the leaders I know of) seem to share the following mindset:
The set of ideas in EA (whether focused on poverty, X-Risk, or whatever), do naturally lead one down a path of “sacrifice everything because do you really need that $4 Mocha when people are dying the future is burning everything is screwed but maybe you can help?”
But, as soon as you’ve thought about this for any length of time, clearly, stressing yourself out about that all the time is bad. It is basically not possible to hold all the relevant ideas and values in your head at once without going crazy or otherwise getting twisted/consumed-in-a-bad-way.
There are a few people who are able to hold all of this in their head and have a principled approach to resolving everything in a healthy way. (Nate Soares is the only one who comes to mind, see his “replacing guilt” series). But for most people, there doesn’t seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
You can resolve this by saying “well then, the obvious-implications-of-EA-thinking must be wrong”, or “I guess maybe I don’t need to live healthily”.
But, like, the world is on fire and you can do something about it and you do obviously need to be healthy. And part of being healthy is not just saying things like “okay, I guess I can indulge things like not spending 100% of my resources on saving the world in order to remain healthy but it’s a necessary evil that I feel guilty about.”
AFAICT, the only viable, sane approach is to acknowledge all the truths at once, and then apply a crude patch that says “I’m just going to not think about this too hard, try generally to be healthy, and put whatever bit of resources towards having the world not-be-on-fire that I can do safely.”
Then, maybe check out Nate Soares’s writing and see if you’re able to integrate it in a more sane way, if you are the sort of person who is interested in doing that, and if so, carefully go from there.
Point B: What Should A Movement Trying To Have the World Not Be On Fire Do?
The approach in Point A seems sane and fine to me. I think it is in fact good to try to help the world not be on fire, and that the correct sane response is to proactively look for ways to do so that are sustainable and do not harm yourself.
I think this is generally the mindset held by EA leadership.
It is not out-of-the-question that EA leadership in fact really wants everyone to Give Their All and that it’s better to err on the side of pushing harder for that even if that means some people end up doing unhealthy things. And the only reason they say things like Point A is as a ploy to get people to give their all.
But, since I believe Point A is quite sane, and most of the leadership I see is basically saying Point A, and I’m in a community that prioritizes saying true things even if they’re inconvenient, I’m willing to assume the leadership is saying Point A because it is true, as opposed to for Secret Manipulative Reasons.
This still leaves us with some issues:
1) Getting to the point where you’re on board with Point-A-the-way-I-meant-Point-A-to-be-interpreted requires going through some awkward and maybe unhealthy stages where you haven’t fully integrated everything, which means you are believing some false things and perhaps doing harm to yourself.
Even if you read a series of lengthy posts before taking any actions, even if the Giving What We Can Pledge began with “we really think you should read some detailed blogposts about the psychology of this before you commit” (this may be a good idea), reading the blogposts wouldn’t actually be enough to really understand everything.
So, people who are still in the process of grappling with everything end up on EA forum and EA Facebook and EA Tumblr saying things like “if you live off more than $20k a year that’s basically murder”. (And also, you have people on Dank EA Memes saying all of this ironically except maybe not except maybe it’s fine who knows?)
And stopping all this from happening would be pretty time consuming.
2) The world is in fact on fire, and people disagree on what the priorities should be on what are acceptable things to do in order for that to be less the case. And while the Official Party Line is something like Point A, there’s still a fair number of prominent people hanging around who do earnestly lean towards “it’s okay to make costs hidden, it’s okay to not be as dedicated to truth as Zvi or Ben Hoffman or Sarah Constantin would like, because it is Worth It.”
And present_day_Raemon thinks those people are wrong, but not obviously so wrong that it’s not worth talking about and taking seriously as a consideration.
The tldr I guess is:
Maybe it’s the case that being emotionally healthy is only valuable insofar as it translates into the global good (if you assume moral realism, which I don’t).
But, even in that case, it seems often the case that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge.
Whether it typically requires this to the degree advocated by the OP or Zvi is (a) probably not the case, in my basic perception, but (b) something that requires proper psychological research before drawing firm conclusions.
But for most people, there doesn’t seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
This is a crux, because IMO the way that the people who frequently write and comment on this topic seem to talk about altruism represents a much more neurotic response to minor moral problems than what I consider to be typical or desirable for a human being. Of course the people who feel anxiety about morality will be the ones who talk about how to handle anxiety about morality, but that doesn’t mean their points are valid recommendations for the more general population. Deciding not to have a mocha doesn’t necessarily mean stressing out about it, and we shouldn’t set norms and expectations that lead people to perceive it as such. It creates an availability cascade of other people parroting conventional wisdom about too-much-sacrifice when they haven’t personally experienced confirmation of that point of view.
If I think I shouldn’t have the mocha, I just… don’t get the mocha. Sometimes I do get the mocha, but then I don’t feel anxiety about it, I know I just acted compulsively or whatever and I then think “oh gee I screwed up” and get on with my life.
The problem can be alleviated by having shared standards and doctrine for budgeting and other decisions. GWWC with its 10% pledge, or Singer’s “about a third” principle, is a first step in this direction.
Minding Our Way by Nate Soares comes close, although I don’t think he addresses the “what if there actually exist moral obligations?” question, instead assuming mostly non-moral-realism)
Not sure what he says (I haven’t got the interest to search through a whole series of posts for the relevant ones, sorry), but my point assuming antirealism (or subjectivism) seems to have been generally neglected by philosophy both inside and outside academia: just because the impartial good isn’t everything doesn’t mean it is rational to generically promote other people’s pursuits of their own respective partial goods. The whole reason humans created impartial morality in the first place is that we realized it works better than each of us pursuing our own partialist goals.
So, regardless of most moral points of view, the shared standards and norms around how-much-to-sacrifice must be justified on consequentialist grounds.
I should emphasize that antirealism != agent-relative morality, I just happen to think that there is a correlation in plausibility here.
The folk dance community sounds wonderful and fun :)