I feel I'm not informed enough to reply to this, and it feels weird to speculate about orgs I know very little about, but I worry that the people most informed won't reply here for various reasons, so I'm sharing some thoughts based on what very little information I have (almost entirely from reading posts on this forum, and of course speaking only for myself). This is all very low confidence.
"Effective" Altruism implies a value judgement that requires strong evidence to back up
I think if you frame it as a question, something like "We are trying to do altruism effectively; this is the best that we're able to do so far", it doesn't require that much evidence (for better or worse)
"Ambitious Impact" implies more speculative, less easy to measure activities in pursuit of even higher impact returns.
That is not clear to me: one can be very ambitious by working on things that are very easy to measure. For example, people going through AIM's "founding to give" program seem to have a goal that's easy to measure (money donated in 10 years[1]), but I still think of them as clearly "ambitious" if they try to donate millions. Google defines ambitious as "having or showing a strong desire and determination to succeed".
My understanding is that Open Philanthropy split from GiveWell because of the realisation that there was more marginal funding required for "Do-gooding R&D" with a lower existing evidence base.
IMO AIM has outcompeted CEA on a number of fronts (their training is better, their content (if not their marketing) is better, they are agile and improve over time). Probably 80% of the useful and practical things I've learned about how to do effective altruism, I've learned from them.
I agree that AIM is more impressive than CEA on many fronts, but I think they mostly have different scopes.[2] My impression is that CEA doesn't focus much on specific ways to implement specific approaches to "how to do effective altruism", but on things like "why to do effective altruism" and "here are some things people are doing/writing about how to do good, go read/talk to them for details".
If not for CEA, I think I probably wouldn't have heard of AIM (or GWWC, or 80k, or effective animal advocacy as a whole field). And if I had only interacted with AIM, I'm not sure I would have been exposed to as many different perspectives on things like animal welfare, longtermism, and spreading positive values.[3]
The AIM folks I've spoken to are frustrated that their results (based on exploiting cost-effective, high-evidence-base interventions) are used to launder the reputation of OP-funded, low-evidence-base "Do-gooding R&D".
I understand the frustration, especially given the brand concerns below and because I think many AIM folks think that a lot of the assumptions behind longtermism don't hold.[4] But I don't know if this "reputation laundering" is actually happening that much:
My sense is that the (vague) relation to the Shrimp Welfare Project is not helping the reputation of some other EA-Adjacent projects
I think AIM is just really small compared to e.g. GiveWell, which I think is more often used to claim that EA is doing some good
When e.g. 80,000 Hours interviews a LEEP cofounder, I strongly believe that it's because they think that LEEP is really amazing (as everyone does), they want to promote it, and they want more people to do similar amazing things. I think the reason people talk about the best AIM projects is usually not to look better by association but to promote them as examples of things that are clearly great.
If we think about the EA brand as a product, I'd guess we're in "The Chasm" below, as the EA brand is too associated with the "weird" stuff that innovators are doing to be effectively sold to lower risk tolerance markets.
I personally believe that the EA brand is in a pretty bad place, and at the moment often associated with things like FTX, TESCREAL and OpenAI, and that is a bigger issue. I think EA is seen as a group of non-altruistic people, not as a group of altruistic people who are too "weird". (But I have even lower confidence in this than in the rest of this comment)
AIM should be the face of EA and should be feeding in A LOT more to general outreach efforts.
Related to the point above, it's not clear to me why AIM should be the face of "EA" rather than of any other "doing the most good" movement (e.g. Roots of Progress, School for Moral Ambition, Center for High Impact Philanthropy, …). I think none of these would make a lot of sense, and I don't see why "AIM being the face of AIM" would be worse than AIM being the face of something else. You can see in their 2023 annual review that they did deeply consider building a new community, "but ultimately feel that a more targeted approach focusing on certain careers with the most impact would be better for us".[5]
In general, I agree with your conclusions on wishing for more professionalization, and increasing the size of the pie (but it might be harder than one would think, and it might make sense to instead increase the number of separate pies)
I imagine positive externalities from the new organizations will also be a big part of their impact, but I expect the main measure will be amount donated.
AIM obviously does a lot for animal welfare, but I don't think they focus on helping people reason about how to prioritize human vs non-human welfare/rights/preferences.
I can't link to the quote, so I'll copy-paste it here.
JOEY: Yeah, I basically think I don't find a really highly uncertain, but high-value expected value calculation as compelling. And they tend to be a lot more concretely focused on what's the specific outcome of this? Like, okay, how much are we banking on a very narrow sort of set of outcomes, and how confident are we that we're going to affect that, and what's the historical track record of people who've tried to affect the future, and this sort of thing. There's a million and a half weeds and assumptions that go in. And I think most people on both sides of this issue, in terms of near-term causes versus long-term causes, just have not actually engaged that deeply with all the different arguments. There's like a lot of assumptions made on either side of the spectrum. But I actually have gotten fairly deeply into this. I had this conversation a lot of times and thought about it quite thoroughly. And yeah, just a lot of the assumptions don't hold.
An option many people have been asking us about in the wake of the struggles of the EA movement is if CE would consider building out a movement that brings back some of the strengths of EA 1.0. We considered this idea pretty deeply but ultimately feel that a more targeted approach focusing on certain careers with the most impact would be better for us. The logistical and time costs of running a movement are quite large, and it seems as though often a huge % of the movement's impact comes from a small number of actors and orgs. Although we like some things the EA movement has brought to the table when comparing it to more focused uses (e.g. GiveWell focuses more on GiveWell's growth), we have ended up more pessimistic about the impact of new movements.
I don't think they linked to their 2024 annual report on the forum, so this might be different now.
This is helpful and I agree with most of it. I think my take here is mostly driven by:
EA atm doesn't seem very practical to take action on except for donating and/or applying to a small set of jobs funded mostly by one source. My guess is this is reducing the number of "operator" types that get involved and selects for cerebral philosophising types. I heard about 80k and CEA first, but it was the practical, testable AIM charities that sparked my interest; only then did I develop more of an interest in implications from AI and GCRs.
When I've run corporate events, I've avoided using the term Effective Altruism (despite it being useful and descriptive) because of the existing brand.
I think current cause prioritisation methods are limiting innovation in the field because they're not teaching people about tools they can then use in different areas. There's probably low-hanging fruit that isn't being picked because of this narrow philosophical approach.
I'm not a comms person, so my "AIM should be the face of EA" take is too strong. But I do think it's a better face for more practical, less abstract thinkers.
I agree with 3 of your points but I disagree with the first one:
EA atm doesn't seem very practical to take action on except for donating and/or applying to a small set of jobs funded mostly by one source.
On jobs: 80k, Probably Good and Animal Advocacy Careers have job boards with lots of jobs (including all AIM jobs) and get regularly recommended to people seeking jobs. I met someone new to EA at EAGx Berlin a month ago, and 3 days ago they posted on LinkedIn that they started working at The Life You Can Save.
On donations: I'm biased, but I think donations can be a really valuable action, and EA promotes donations to a large number of causes (including AIM).
My guess is this is reducing the number of "operator" types that get involved and selects for cerebral philosophising types.
It's really hard for me to tell whether this is a good or a bad thing, especially because I think things like animal welfare or GCR reduction can plausibly be significantly more effective than more obviously good "practical, testable" work (which is also the reason to favour "R&D" mentioned previously)
I heard about 80k and CEA first, but it was the practical, testable AIM charities that sparked my interest; only then did I develop more of an interest in implications from AI and GCRs.
Not really a disagreement, but I think it's great that there's cross-pollination, with people getting into AIM from 80k and CEA, and into 80k and CEA from AIM.
Earning to give is not a good description for what I do, because I'm not optimising across career paths for high pay to donate; it's more like the highest pay I can get for a 9-5.
I think of it more as "self-funded community builder".
On cross-pollination, yeah, I think we agree. The self-sorting between cause areas based on intuition and instinct isn't great, though: it means that there are opportunities to innovate that are missed in both camps.
On Open Philanthropy splitting from GiveWell: that is not my understanding. Reading their public comms, I thought OP split from GiveWell to better serve 7-figure donors ("Our current product is a poor fit with the people who may represent our most potentially impactful audience."), which I assumed implicitly meant that Moskovitz and Tuna could use more bespoke recommendations.
I agree that there was more marginal funding required for "Do-gooding R&D"! I liked "Finding before funding: Why EA should probably invest more in research", but I expect that the "R&D" work itself might be tricky to do in practice. Still, I'm very excited about GiveWell's RCT grants.
And the AIM vs CEA comparison above does not say that much about CEA, as imho AIM is more impressive than the vast majority of other projects.
I agree donations and switching careers are really important! However, I think those shouldn't be the only ways.
Having your job be EA makes it difficult to be independent: livelihoods rely on it, and so it makes EA as a whole less robust IMO. I like the Tour of Service model: https://forum.effectivealtruism.org/posts/waeDDnaQBTCNNu7hq/ea-tours-of-service