As a biased mostly near termist (full disclosure), I’ve got some comments and questions ;)
First, a concern about the framing:
”Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs whether it would make sense to rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes.”
This framing for the discussion seems a bit unclear. First, I don’t see the direct logical connection between “Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns” and “rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes.” There must be some implied assumptions filling the gap between these two statements that I’m missing; it’s certainly not A + B = C. I’m guessing it’s something like the FTX collapse causing potentially significant reputational loss to the EA brand, etc. I think being explicit is important when framing a discussion.
Second, when we are talking about “focus on the constituent causes” and “cause specific”, does that practically mean growth in AI-safety-focused groups while general EA groups remain, or further specialisation within EA across Global Health / Animal Advocacy / Climate Change / Biorisk, etc.? (I might be very wrong here.) Does “constituent cause” and “cause specific” mostly translate to “AI safety” in the context of this article, or not?
One other comment that concerned me was that among this group there was a “consensus that a non-trivial fraction of outreach efforts that are framed in EA terms are still worth supporting.” This is quite shocking to me: the idea that EA-framed outreach should perhaps be downgraded from the status quo (most of the outreach) to “non-trivial” (which I interpret as very little of the outreach). That’s a big change which I personally don’t like, and I wonder what the wider community thinks.
It already seems like a lot (the majority?) of community building is bent towards AI safety, so it’s interesting that the push from EA thought leaders seems to be to move further in this direction.
As this post itself states, 80,000 Hours in practice seems pretty close to an AI/longtermist career advice platform; here are their latest three releases.
There have already been concerns raised that EA university groups can often intentionally or unintentionally push AI safety as the clearly most important cause, to the point where it may be compromising epistemics.
Finally, this didn’t sit well with me: “There was significant disagreement whether OP should start a separate program (distinct from Claire’s and James’ teams) focused on “EA-as-a-principle”/”EA qua EA”-grantmaking. Five respondents agreed with a corresponding prompt (answers between 6-9), two respondents disagreed (answers between 2-4), one neither agreed nor disagreed.”
These discussions are important, but I don’t love the idea of the most important discussions among the people steering the EA ship being led by Open Philanthropy (a funding organisation), rather than perhaps by CEA, or even a wider forum. Should the most important discussions about the potential future of a movement be led by a funding body?
Things I agreed with/liked:
- I instinctively like the idea of an AI conference, as it might mean the other conferences have a much higher proportion of EAs who are into other things.
- More support for AI-safety-specific groups in general. Even as a near termist, that makes a lot of sense: there is a big buzz about it right now, and they can attract non-EA people to the field and provide focus for those groups.
- I’m happy that 5 out of 8 disagreed with renaming the forum (although I was surprised that even 3 voted for it). For branding/understanding/broad-church and other reasons, I struggle to see positive EV in that one.
- I’m happy that 5 out of the 8 agreed that 80,000 Hours should be more explicit about its longtermist focus. It feels a bit disingenuous at the moment, although I know that isn’t the intention.
Looking forward to more discussion along these lines!
I wrote about this idea before FTX, and I think FTX is a minor influence compared to the increased interest in AI risk.
My original reasoning was that AI safety is a separate field, but it doesn’t really have much movement-building work being put into it outside of EA/longtermism/x-risk-framed activities.
Another reason why AI takes up a lot of EA space is that there aren’t many other places to go to discuss these topics, which is bad for the growth of AI safety if it’s hidden behind donating 10% and going vegan, and bad for EA if it gets overcrowded by something that should have its own institutions/events/etc.
“Which is bad for the growth of AI safety if it’s hidden behind donating 10% and going vegan”
This may be true, and the converse is also possible concurrently, with the growth of giving 10% and going vegan potentially being hidden at times behind AI safety ;)
From an optimistic angle, “Big tent EA” and AI safety can be synergistic: much AI safety funding comes from within the EA community. A huge reason those hundreds of millions are available is that the AI safety cause grew out of, and is often melded with, founding EA principles, which include giving what we can to high-EV causes. This has motivated people to provide the very money AI safety work relies on.
Community dynamics are complicated and I don’t think the answers are straightforward.
Some added context on the 80k podcasts:
At the beginning of the Jan Leike episode, Rob says:
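“We’ve had a lot of AI episodes in a row lately, so those of you who aren’t that interested in AI or perhaps just aren’t in a position to work on it, might be wondering if this is an all AI show now.

But don’t unsubscribe because we’re working on plenty of non-AI episodes that I think you’ll love — over the next year we plan to do roughly half our episodes on AI and AI-relevant topics, and half on things that have nothing to do with AI.

What happened here is that in March it hit Keiran and Luisa and me that so much very important stuff had happened in the AI space that had simply never been talked about on the show, and we’ve been working down that coverage backlog, which felt pretty urgent to do.

But soon we’ll get back to a better balance between AI and non-AI interviews. I’m looking forward to mixing it up a bit myself.”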