FWIW I think it’s historically pretty incorrect that the grounding in cost-effectiveness is what made EA good. E.g. insofar as you think that AI safety is valuable, reasoning about cost-effectiveness actually cut against EA’s ability to pivot towards that. Instead, the thing that helped most was something like intellectual openness.
Cost-effectiveness is precisely the reason why I focus on AI safety. I can only speak for myself but I think the same is true for a lot of people. The thing that cuts against AI safety is more like “rigorously measurable cost-effectiveness”, but that’s not what I mean by “cost-effectiveness”. You can’t give a precise cost-effectiveness estimate for AI safety work, but it’s pretty easy to show that it’s orders of magnitude more cost-effective than GiveDirectly on any plausible set of assumptions.*
*unless it’s net negative, which unfortunately much EA-adjacent AI safety work turned out to be... but at least we can say that it’s orders of magnitude higher in absolute impact
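To make the shape of that argument concrete, here’s a minimal back-of-envelope sketch. Every number in it is an illustrative assumption I’m making up for this comment, not a real estimate; the point is only that the conclusion is driven by the expected-value structure rather than by any one input:

```python
# Toy Fermi comparison (all numbers are illustrative assumptions, not real
# estimates): a GiveDirectly-style baseline vs. a marginal AI-safety donation,
# both in "life-equivalents of value per dollar".

# Assumed baseline: one life-equivalent of value per $10,000 of cash transfers.
baseline_value_per_dollar = 1 / 10_000

# Assumed AI-safety model: $1B of marginal spending shifts the probability of
# existential catastrophe by 0.01 percentage points, and a catastrophe costs
# 8 billion lives (counting only people alive today, the conservative case).
delta_p = 1e-4       # assumed change in catastrophe probability (sign matters!)
lives_at_stake = 8e9
spend = 1e9          # dollars

safety_value_per_dollar = delta_p * lives_at_stake / spend

print(f"ratio: {safety_value_per_dollar / baseline_value_per_dollar:.0f}x")
# -> 8x with these inputs; any nonzero weight on future generations pushes the
# ratio to orders of magnitude. And if delta_p is negative (the footnote's
# worry), the same arithmetic gives orders-of-magnitude net harm.
```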
IMO you should think of global health/factory farming etc. as one paradigm of EA—which did focus on cost-effectiveness—and AI safety as a different paradigm in which the concept of cost-effectiveness is simply not very useful, for a few reasons (see also this related comment):
Talent bottlenecks are a far bigger obstacle than financial bottlenecks, and you can’t buy talent. Often you can’t even spend money to persuade talent—e.g. the AI safety community ended up convincing several of the most influential AI researchers of AGI risk, and even they mostly don’t seem to be able to think clearly about the issue (e.g. it’s hard to imagine a coherent strategy behind SSI).
Insofar as there are financial bottlenecks, it’s mostly because the biggest funder is ideologically and politically constrained, and because the trust networks in AI safety aren’t robust enough to distribute most of the money available. This will only become more true as Anthropic equity becomes liquid.
There’s an extreme principal-agent problem where not only is it difficult for funders to tell who will do good research in advance, but it’s even difficult for them to tell what was good research in hindsight.
As you mention, a lot of the action is in figuring out how not to be net-negative.
It’s very hard to cash out what AI safety is even trying to achieve in terms of cost-effectiveness metrics. Once you start talking about the transhuman future, almost every metric you could come up with is better optimized via weird futuristic stuff than by simply “keeping humans alive”.
Re “it’s pretty easy to show that it’s orders of magnitude more cost-effective than GiveDirectly”: the kinds of reasoning that one might use to show that are pretty similar to e.g. the kinds that a communist could use to show that a proletarian revolution is orders of magnitude more cost-effective than GiveDirectly. In other words, it’s mostly reasoning within the confines of a worldview, and then slapping on a “cost-effectiveness” framing at the end, rather than having the cost-effectiveness part of the reasoning be load-bearing in any meaningful way.
>I think it’s historically pretty incorrect that the grounding in cost-effectiveness is what made EA good
FWIW the reasons you’re giving here are closely related to the reasons why I’m sceptical that modern AI-focused EA is in fact as good. I don’t think it’s unreasonable to support AI safety work, but I think it’s throwing away most of the epistemics that could make EA a long-term robustly positive influence. EA’s original tagline used to be ‘using evidence and reason’, but the extreme AI safety focus seems to drop the ‘evidence’ part.
To believe you should focus on AI safety, you need to believe all of:
- short timelines
- either
  - no trend of convergence between intelligence and morality, or
  - that such convergence wouldn’t matter or wouldn’t be enough to avoid moral disaster
- either
  - long timelines on other GCRs, or
  - other GCRs not really mattering to humanity’s long-term prospects
- zero discounting on future people
- that a flourishing human future is +EV
- that trying to improve average welfare in a flourishing future is less good than trying to increase the probability of a flourishing future
- reasonable confidence that AI safety work has learned from its past mistakes and will be reliably +EV
- that there won’t be a public shift towards AI safety sufficient to make it low-leverage
- that you personally have more comparative advantage working on AI safety than on any other cause
and surely some further assumptions I’ve missed, and many ways to further unpack these premises. To advocate work on AI safety as the primary EA cause you need to believe that the final bullet applies to the majority of your audience.
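To make the conjunctive structure of that list concrete, here’s a toy sketch. The credences are placeholders I’ve made up, and treating the premises as independent is itself a simplification (their entanglement is exactly what I complain about below):

```python
# Toy sketch: made-up credences for the premises above, just to show how a
# conjunction compounds. None of these numbers is a real estimate.
premises = {
    "short timelines": 0.7,
    "no (or insufficient) intelligence-morality convergence": 0.8,
    "other GCRs don't dominate": 0.8,
    "zero discounting on future people": 0.9,
    "a flourishing human future is +EV": 0.9,
    "raising P(flourishing) beats raising average welfare": 0.7,
    "AI safety work will be reliably +EV": 0.7,
    "no public shift makes it low-leverage": 0.8,
    "personal comparative advantage in AI safety": 0.6,
}

joint = 1.0
for credence in premises.values():
    joint *= credence  # assumes independence, which these premises don't have

print(f"joint credence: {joint:.2f}")  # ~0.09 with these placeholder numbers
```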
But I think there’s plenty in that list of assumptions that’s easy to disagree with, and a lot of entangled assumptions whose entanglement to my knowledge hasn’t really been explored (e.g. I find it hard to credit both that there’s no convergence between intelligence and morality and that there’s a long-term equilibrium which is both stable and in some nontrivial sense positive or desirable).
So I semi-agree with @MichaelDickens’s original comment’s in-principle scepticism, while wondering whether in practice int/a might end up promoting causes that feel closer to what I view as the original spirit of the movement.
There are also some practical concerns in the OP that I think EA has dropped the ball on, such as building the sort of real community that would have retained greater support/membership over the years (my impression is that the substantial majority of EAs who joined the movement more than 6 or 7 years ago have largely disengaged from it).
So I guess I’m noncommittally hopeful that this becomes something valuable—and remains, as Euan said, symbiotic with EA. If it just gives people who would have been somewhat supportive but felt too constrained a way to stay engaged with an encouraging community, that seems like it could be high value.
>FWIW the reasons you’re giving here are closely related to the reasons why I’m sceptical that modern AI-focused EA is in fact as good.
In hindsight I shouldn’t have used the phrase “what made EA good”, since by this point I’m skeptical about both the AI safety version and the “original spirit” of EA. I guess that makes me one of the people you’re describing who joined a long time ago (in my case, over a decade) and have now disengaged.
I do think that int/a is less likely than EA to be significantly harmful, and I’m excited about that. Whether or not it has a decent chance of actually doing something meaningful will depend a lot on the vision of the founders (and Euan in particular). Right now I’m not seeing what will prevent it from dissolving into the background of general hippie-adjacent things (kinda like a lot of the Game B and metacrisis stuff seems to have done). But we’ll see.
I suspect whether it helped or hindered folks depends on where they were pre-EA. Did they need to learn to pay more or less attention to cost-effectiveness?