I’m not sure I understand what these questions are looking for well enough to answer them.
Firstly, I don’t think “the movement” is centralized enough to explicitly acknowledge things as a whole, so that may be a bad expectation. I do think some individual people and organizations have done some reflection (see here and here for prominent examples), though I agree there should likely be more.
Secondly, it seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022, the main answer to “how do we reduce AI risk?” was “I don’t know, I guess we should urgently figure that out.” Since then there has been an explosion of analysis, threat modeling, and policy ideas; for example, Luke’s 12 tentative ideas were basically all created within the past two years. On top of that, many EAs were involved in developing Responsible Scaling Policies, which are now the predominant risk management framework for AI. And there’s much more.
Unfortunately I can mainly only speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at Rethink Priorities alone, welfare ranges, CRAFT, and CURVE were all done within the past two years. Additionally, the Rethink Priorities model estimating the value of research influencing funders flew under the EA radar, in my opinion, but has led to significant internal shifts in Rethink Priorities’s thinking about which funders to work for and why.
I also think much of the genesis of the current focus on lead was in 2021, but significant work pushing it forward happened in the 2022-2024 window.
As for new effective organizations, this depends somewhat on your opinions about what counts as “effective” and to what extent new organizations are “EA,” but there are many new initiatives around, especially in the AI space.