I strongly disliked this post for reasons that I’m not sure how to articulate. It seems to be advocating for abandoning the grounding in cost-effectiveness, which is the thing that makes EA good. Or maybe my issue is that this post advocates for things that are difficult to disagree with (“full-spectrum knowing”; “wisdom”), without acknowledging tradeoffs (why do EAs allegedly not put enough priority on full-spectrum knowing?) or saying anything concrete about how EAs could do more good.
[edited to be more polite]
I don’t think they are trying to convert the EA community into something else—they are pretty clearly creating separate spaces for their movement/community. [1]
Describing their post as using “applause lights” seems at best uncharitable, and “absolute nonsense” is just rude. There are several well-received posts on the forum around “[a]ugmenting decision-making with meditative (e.g. mindfulness) [practices]” like this one and this one. It’s fine to dislike their principles, but I think it’s worth making an effort to be encouraging when fellow altruists try to build on the “project” of Effective Altruism.
e.g. they say “That being said, we’re also aware of the danger of potential zero-sum dynamics between int/a and EA, and would like to avoid them as much as possible. One thing we are afraid of is int/a gravitating towards the “just bitching about EA” attractor state, which is definitely not the vibe we’re going for. Another concern is “taking people away from EA”. We don’t intend to dissuade people from doing impactful work by EA lights, in fact many of us in the movement are doing incredibly canonical EA jobs.” and have run many events themselves under their own banner.
You’re right, I was unnecessarily hostile. I edited the comment to tone it down.
FWIW I think it’s historically pretty incorrect that the grounding in cost-effectiveness is what made EA good. E.g. insofar as you think that AI safety is valuable, reasoning about cost-effectiveness actually cut against EA’s ability to pivot towards that. Instead, the thing that helped most was something like intellectual openness.
Cost-effectiveness is precisely the reason why I focus on AI safety. I can only speak for myself but I think the same is true for a lot of people. The thing that cuts against AI safety is more like “rigorously measurable cost-effectiveness”, but that’s not what I mean by “cost-effectiveness”. You can’t give a precise cost-effectiveness estimate for AI safety work, but it’s pretty easy to show that it’s orders of magnitude more cost-effective than GiveDirectly on any plausible set of assumptions.*
*unless it’s net negative, which unfortunately much EA-adjacent AI safety work turned out to be...but at least we can say that it’s orders of magnitude higher absolute impact
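To make the shape of that comparison concrete, here is a minimal back-of-envelope sketch. Every number in it is an illustrative placeholder I am inserting for concreteness, not a figure anyone in this thread has claimed, and the point is only how “orders of magnitude” can fall out of even rough assumptions.

```python
# Back-of-envelope comparison, in the spirit of the claim above.
# All numbers are illustrative placeholders, not figures from the discussion.

givedirectly_cost_per_life_equivalent = 100_000   # assumed $ per life-equivalent via cash transfers
ai_safety_spending = 1_000_000_000                 # assumed total $ spent on AI safety work
extinction_risk_reduction = 0.001                  # assumed absolute reduction in extinction probability
people_alive = 8_000_000_000                       # roughly the current world population (ignores future generations)

# Expected cost per life saved if the spending buys that much risk reduction
cost_per_expected_life = ai_safety_spending / (extinction_risk_reduction * people_alive)
ratio = givedirectly_cost_per_life_equivalent / cost_per_expected_life

print(f"AI safety: ~${cost_per_expected_life:,.0f} per expected life saved")
print(f"Roughly {ratio:,.0f}x GiveDirectly under these assumptions")
```

Under these placeholder inputs the ratio comes out in the hundreds; different (still plausible) inputs move it around, but it stays far above 1x unless the risk-reduction term is tiny or negative.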
IMO you should think of global health/factory farming etc as one paradigm of EA—which did focus on cost effectiveness—and AI safety as a different paradigm in which the concept of cost-effectiveness is simply not very useful, for a few reasons (see also this related comment):
Talent bottlenecks are a far bigger obstacle than financial bottlenecks, and you can’t buy talent. Often you can’t even spend money to persuade talent—e.g. the AI safety community ended up convincing several of the most influential AI researchers of AGI risk, and even they mostly don’t seem to be able to think clearly about the issue (e.g. it’s hard to imagine a coherent strategy behind SSI).
Insofar as there are financial bottlenecks, it’s mostly because the biggest funder is ideologically and politically constrained, and because the trust networks in AI safety aren’t robust enough to distribute most of the money available. This will only become more true as Anthropic equity becomes liquid.
There’s an extreme principal-agent problem where not only is it difficult for funders to tell who will do good research in advance, but it’s even difficult for them to tell what was good research in hindsight.
As you mention, a lot of the action is in figuring out how not to be net-negative.
It’s very hard to cash out what AI safety is even trying to achieve in terms of metrics of cost-effectiveness. Once you start talking about the transhuman future, then almost every metric you could come up with is better optimized via weird futuristic stuff than simply “keeping humans alive”.
Re “it’s pretty easy to show that it’s orders of magnitude more cost-effective than GiveDirectly”: the kinds of reasoning that one might use to show that are pretty similar to e.g. the kinds that a communist could use to show that a proletarian revolution is orders of magnitude more cost-effective than GiveDirectly. In other words, it’s mostly reasoning within the confines of a worldview, and then slapping on a “cost-effectiveness” framing at the end, rather than having the cost-effectiveness part of the reasoning be load-bearing in any meaningful way.
> I think it’s historically pretty incorrect that the grounding in cost-effectiveness is what made EA good
FWIW the reasons you’re giving here are closely related to the reasons why I’m sceptical that modern AI-focused EA is in fact as good. I don’t think it’s unreasonable to support AI safety work, but I think it’s throwing away most of the epistemics that could make EA a long-term robustly positive influence. EA’s original tagline used to be ‘using evidence and reason’, but the extreme AI safety focus seems to drop the ‘evidence’ part.
To believe you should focus on AI safety, you need to believe all of:

- short timelines
- either
  - no trend of convergence between intelligence and morality, or
  - the view that convergence wouldn’t matter or wouldn’t be enough to avoid moral disaster
- either
  - long timelines on other GCRs, or
  - other GCRs not really mattering to humanity’s long term prospects
- zero discounting on future people
- that a flourishing human future is +EV
- that trying to improve average welfare in a flourishing future is less good than trying to increase the probability of a flourishing future
- reasonable confidence that AI safety work has learned from its past mistakes and will be reliably +EV
- that there won’t be a sufficient public shift towards AI safety to make it low enough leverage that other work matters more
- that you personally have more comparative advantage working on AI safety than any other cause

and surely some further assumptions I’ve missed, and many ways to further unpack these premises. To advocate work on AI safety as the primary EA cause you need to believe that the final bullet applies to the majority of your audience.
But I think there’s plenty in that list of assumptions that’s easy to disagree with, and a lot of entangled assumptions whose entanglement to my knowledge hasn’t really been explored (e.g. I find it hard to credit both that there’s no convergence between intelligence and morality and that there’s a long term equilibrium which is both stable and in some nontrivial sense positive or desirable).
So I semi-agree with @MichaelDickens’s original comment’s in-principle scepticism, while wondering whether in practice int/a might end up promoting causes that feel closer to what I view as the original spirit of the movement.
There are also some practical concerns in the OP that I think EA has dropped the ball on, such as building the sort of real community that would have retained greater support/membership over the years (my impression is that the substantial majority of EAs who joined the movement more than 6 or 7 years ago have largely disengaged from it).
So I guess I’m noncommittally hopeful that this becomes something valuable—and remains, as Euan said, symbiotic with EA. If it just gives people who would have been somewhat supportive but felt too constrained a way to stay engaged with an encouraging community, that seems like it could be high value.
> FWIW the reasons you’re giving here are closely related to the reasons why I’m sceptical that modern AI-focused EA is in fact as good.
In hindsight I shouldn’t have used the phrase “what made EA good”, since by this point I’m skeptical about both the AI safety version and the “original spirit” of EA. I guess that makes me one of the people you’re describing who joined a long time ago (in my case, over a decade) and have now disengaged.
I do think that int/a is less likely than EA to be significantly harmful, and I’m excited about that. Whether or not it has a decent chance of actually doing something meaningful will depend a lot on the vision of the founders (and Euan in particular). Right now I’m not seeing what will prevent it from dissolving into the background of general hippie-adjacent things (kinda like a lot of the Game B and metacrisis stuff seems to have done). But we’ll see.
I suspect whether it helped or hindered folks depends on where they were pre-EA. Did they need to learn to pay more or less attention to cost-effectiveness?
I can’t speak for everyone associated with int/a, but at least for myself I don’t see it as about avoiding tradeoffs so much as about making tradeoffs against a wider set of considerations that EAs often leave out. For example, I’m generally more willing to evaluate interventions in non-consequentialist terms, since looking only at the consequentialist framing, as many EAs do, can lead to classic right-magnitude-wrong-direction errors that would be easily caught by a deontological or virtue ethics frame.
But in practice I expect int/a to make its own errors that will need correcting. As someone I know put it, EA is fundamentally Protestant and int/a is fundamentally Buddhist. Both want to do good in the world, but each has a different view of what that world is.
Any examples of interventions EA might overlook that int/a rates highly in your view? (No need to speak for others)
I think it more often goes the other way, in that there are interventions that look good to EAs but look less good to int/a. For example, I’m relatively negative on unconditional cash transfers: I think most of the evidence showing they work is too narrowly scoped and fails to consider what happens to a society that is nicer only because of handouts, without building the self-sustaining economic engine needed for the niceness to persist. I know some such programs are aware of this problem and try to address it, but it still leaves me feeling like there might be better solutions.
I guess on the other side maybe I’d say EA is by default too negative on arts charities. I’m not saying that your typical arts charity is effective, but I am saying I think it’d be a mistake if we reallocated all arts funding to top GiveWell charities, as access to museums is worth something even if it’s hard to quantify against human lives (perhaps more generally, I think not all goods are actually as fungible as the typical EA thinks).
FWIW I doubt there are many (any?) EAs that would advocate for reallocating “all arts funding to top GiveWell charities”. Everything is at the margin!
It’s worth restating what I said in the post here:
We intend the presentation here to be descriptive rather than convincing—arguing for the merits of these principles is beyond the scope of this post.
And
The principles are also currently somewhat abstract, in the future we hope to translate these to be more concrete & action-guiding.
Getting into more detailed arguments about what EA is missing and precisely what it should do differently is quite a big project, due to the inferential distance between EA and liminal land, and one that I hope int/a can attempt in the future.