The biggest reason I am, and have been, disillusioned is that ethics are subjective (in my view, though one I hold with high confidence). I don’t understand how a movement like EA can even exist within this paradigm unless one of the following holds:
1. The movement serves only as a keeper of knowledge about how to apply good epistemics to the real world, with a specific focus on making things “better” (with “better” left undefined), but does not engage in object-level, non-research work beyond education. That is almost just LW.
2. The movement splits into a series of sub-movements, each with an agreed-upon ethical framework. Every cause area or intervention can then be compared under a standardized cost-benefit analysis, and legitimate epistemic progress can be made. Trades can be made between the sub-movements to accomplish shared ethical goals. Wars can be waged when consensus is impossible (sad).
Clearly, neither of the above is what we are currently doing, and that feels low-integrity to me. I don’t understand how we have coalesced around a few cause areas, admitted each is justified essentially by total utilitarianism, and then still act as though we are ethically agnostic. That seems like a very clear example of mental gymnastics.
Once you are in this mindset, I feel it quickly becomes clear that EA does not in fact have particularly good epistemics. We are constantly doing CBAs (or, even worse, just vaguely implying things are good without clear evidence and analysis) with ill-defined goals. Many problems emerge from this. We have no institutional system for deciding where our knowledge stands or for checking decision-making power (though decentralization is both good and bad). Billionaires have an outsized ability to imprint their notion of ethics on our movement. We hero-worship. We pick our careers as much on what looks likely to be well funded by EA, and on what other top EAs are doing, as on what seems in theory to be the best thing to us. Did you get into AI safety because it was justified under your worldview, or did you adopt a worldview because people who seemed smart convinced you of AI safety before you even had a clearly defined worldview?
One reason I’ve never really made comments like this on the forum is that it feels sort of silly. I would understand if people felt there isn’t a place for anti-realists here, since once you go down that rabbit hole literally everything is arbitrary. Still, my thinking patterns are far more aligned with EAs’ than with anyone else’s, so I never leave.
EA is not ethically agnostic. It is unquestionably utilitarian (although I feel there is still some debate over the “total” part). Is this a problem for people of other ethical viewpoints? I don’t know; I can’t speak for other people. But I think there’s a lot of value in where the utilitarian rubber meets the road, the ruthless focus on what works, even if you have other moral considerations. For example, I still maintain a monthly donation to GiveDirectly even though I know it is much less cost-effective than other GiveWell top charities. Why? Because I care about the dignity afforded by cash transfers in a non-consequentialist way, and I will comfortably make that choice instead of having a 10x larger impact through AMF or something like that. So I follow the utilitarian train up to a certain point (cash transfers work, and we measure the impact with evidence) and then get off the train.
In this metaphor, EA keeps the train going until the end of the line (total utilitarianism?), but you don’t need to stay on until the end of the line. You can get off whenever you want. That makes it pretty freeing. The problem only comes when you feel you need to stay on the train until the end, because of social pressure or because you feel clueless and want to be in with the smart people.
The eternal mantra: EA is a question, not an answer. And even if the majority of people believe in an answer that I don’t, I don’t care—it’s my question as much as it is theirs.
If EA is unquestionably utilitarian, I don’t really like the vocabulary we use. “Positive impact,” “altruism,” and “global priorities research” are all terms that imply ethical agnosticism, in my opinion, or at least seem somewhat disingenuous without proper context.
Also, it’s a bit unclear to me that EA is unquestionably utilitarian. Is there an official statement by a top org saying as much?
On 80k’s “common misconceptions about EA”: “Misconception #4: Effective altruism is just utilitarianism.”
Open Phil talks about worldviews they consider “plausible”, which doesn’t explicitly commit to anything, nor is it compatible with anti-realism.
I don’t doubt that EA operates as a utilitarian movement. But if this is more or less official, then there should be more transparency.
Yes, EA is much broader than utilitarianism. See comment above and Will’s paper.
I agree EA needs some kind of stance on what ‘the good’ means.
In this paper, MacAskill proposes it should (tentatively) be welfarism, which makes sense to me.
It’s specific enough to be meaningful and capture a lot of what we care about, but still broad enough to have a place for many moral views.
See also this recent post by Richard Chappell.
I agree in principle, but who would decide on such a stance? Our community has no voting or political system. I would be relatively happy for us to go with welfarism/beneficentrism, but I feel uncomfortable with the idea of a bunch of higher-ups in the community getting together and just outright deciding this.
To be honest, this thread does not fit my view: talking about “the community” as a single body with an “official” stance, talking about “EA being utilitarian”...
EA is, at least for me, a set of ideas much more than an identity. Certainly, these ideas influence my life a lot and have caused me to change jobs, among other things; yet I would still describe EA as a diverse group of people with many stances, backgrounds, religions, and ethical paradigms, united by thinking about the best ways of doing good.
In my life, I’ve always been interested in doing good. I think most humans are. At some point, I found out that there are people who have thought deeply about this and found really effective ways to do good. This was, and still is, very welcome to me, even if some conclusions are hard to digest. I see EA ideas as ways to get better at doing what I always wanted to do, and this seems like a good way to avoid disillusionment.
(Charles_Guthmann, sorry for having taken your thread into a tangent. This post and many of the comments hinge somewhat on “EA as part of people’s identity” and “EA as a single body with an official stance”, and your thread was where this became most apparent to me.)
I agree with a lot of this, although I’m not sure I see why standardized cost-benefit analysis would be necessary for legitimate epistemic progress to be made. There are many empirical questions that seem important from a wide range of ethical views, and people with a shared interest in these questions can work together to figure them out while drawing their own normative conclusions. (This seems to line up with what most organizations affiliated with this community actually do; my impression is that far more research goes into empirical questions than into drawing ethical conclusions.)
And even if having a big community were not ideal for epistemic progress, it could be worth it on other grounds, e.g. community size being helpful for connecting people to employers, funders, and cofounders.
I think I overstated my case somewhat or used the wrong wording. I don’t think standardized CBAs are completely necessary for epistemic progress. In fact, as long as the CBA is done in terms of outputs per dollar rather than outcomes per dollar, or at least includes the former in the analysis, it shouldn’t be much of a problem, because, as you said, people can overlay their own normative concerns.
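To make the outputs-versus-outcomes distinction concrete, here is a minimal sketch (the programs, outputs, and weights are entirely made-up numbers for illustration, not from any real analysis): if the shared empirical layer is reported as outputs per dollar, each reader can apply their own normative weights to get their own outcomes per dollar, and possibly their own ranking.

```python
# Minimal illustrative sketch (all numbers hypothetical): shared empirical
# outputs per $1,000, combined with reader-supplied normative weights.

# Shared empirical layer: estimated outputs per $1,000 donated.
outputs_per_1k = {
    "cash_transfers": {"households_reached": 1.0},
    "bednets": {"nets_distributed": 200.0},
}

# Normative layer: each reader supplies their own value per unit of output.
my_weights = {"households_reached": 100.0, "nets_distributed": 0.3}
your_weights = {"households_reached": 10.0, "nets_distributed": 0.5}

def outcomes_per_1k(outputs, weights):
    """Convert shared outputs into reader-specific outcomes per $1,000."""
    return {
        program: sum(weights[unit] * amount for unit, amount in units.items())
        for program, units in outputs.items()
    }

print(outcomes_per_1k(outputs_per_1k, my_weights))    # cash transfers rank higher under these weights
print(outcomes_per_1k(outputs_per_1k, your_weights))  # bednets rank higher under these weights
```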
I do think that most posts here aren’t prefaced with normative frameworks. Sometimes this is completely unimportant (in the case of empirical work), and in other cases it matters more (how do we approach funding research, how should we act as a community and as individuals within it). I think a big part of the reason this isn’t more confusing is that, as the other commenter said, almost everyone here is a utilitarian.
I agree that there are reasons to have the EA umbrella beyond epistemic ones. So again, I used overly strong wording, or was maybe just plainly incorrect.
A lot of what was going on in my head with respect to cost-benefit analyses when I wrote this comment was about grantmaking. For instance, if a grantmaker says it funds projects that will help the long-term future of humanity, I feel that leaves a lot on the table. Do you care about pain or pleasure? Humans or everyone?
Inevitably they will use some sort of rubric. If they haven’t thought through what normative considerations the rubric is based on, the rubric may be somewhat incoherent under any specific value system, or, even worse, completely aligned with a specific one by accident. I could imagine this creating non-Bayesian value drift: while research CBAs allow us to overlay our own normative frameworks, grants are real-world decisions. I can’t overlay my own framework on someone else’s decision to give a grant.
Also, I feel a bit bad about my original comment, because I meant it to be just a jumping-off point for other anti-realists to express confusion about how to talk about their disillusionment, or about whether there is even a place for that here, but I got sidetracked ranting, as I often do.