This looks pretty much right, as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless it doesn’t answer OP’s questions, which I’ll repeat:
1. What ideas that were considered wrong/low status have been championed here?
2. What has the movement acknowledged it was wrong about previously?
3. What new, effective organisations have been started?
Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn’t seem like what OP is asking about), or new organizations.
It seems important to me that EA’s history over the last two years is instead mainly a story of changes in funding, in popular discourse, and in the social strategy of preexisting institutions. For example, the FLI pause letter was the start of a significant PR campaign, but all the *ideas* in it would have been perfectly familiar to an EA in 2014 (except for “Should we let machines flood our information channels with propaganda and untruth?”, which reflects then-unexpected developments in AI technology rather than intellectual work by EAs).
I’m not sure I understand what these questions are looking for well enough to answer them.
Firstly, I don’t think “the movement” is centralized enough to explicitly acknowledge things as a whole, so that may be a mistaken expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I agree there should likely be more.
Secondly, it seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to “how do we reduce AI risk?” was “I don’t know, I guess we should urgently figure that out,” and since then there has been an explosion of analysis, threat modeling, and policy ideas. For example, Luke’s 12 tentative ideas were basically all created within the past two years. On top of that, a lot of EAs were involved in developing Responsible Scaling Policies, now the predominant risk management framework for AI. And there’s a lot more besides.
Unfortunately I can mainly speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at Rethink Priorities alone, the welfare ranges, CRAFT, and CURVE projects were all completed within the past two years. Additionally, the Rethink Priorities model estimating the value of research that influences funders flew under the EA radar IMO, but it has led to very significant internal shifts in Rethink Priorities’s thinking on which funders to work for and why.
I also think the current focus on lead largely had its genesis in 2021, but significant work pushing it forward happened in the 2022-2024 window.
As for new effective organizations, this depends a bit on your opinions about what counts as “effective” and to what extent new organizations are “EA”, but there are many new initiatives around, especially in the AI space.