I’m a former data scientist with five years of industry experience, now working in Washington DC to bridge the gap between policy and emerging technology. AI is moving very quickly and we need to help the government keep up!
I work at IAPS, a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks.
I’m also a professional forecaster with specializations in geopolitics and electoral forecasting.
Peter Wildeford
I published my list here: https://www.pasteurscube.com/ai-list/
I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.
I get a general vibe that in EA (and probably the world at large), being a “deep thinking researcher”-type is way higher status than being an “operations/management/doer”-type. Yet the latter is also very high-impact work, often higher impact than research (especially on the margin).
I see many EAs erroneously go into research and stick with it despite having very clear strengths on the operational side, insisting that they shouldn’t do operations work unless they clearly fail at research first.
I’ve personally felt this: I started my career very oriented towards research, was honestly only average or even below-average at it, and then switched into management, which I think has been much higher impact (and has likely counterfactually generated at least a dozen researchers).
Reading list on AI agents and associated policy
I really appreciate these dates being announced in advance—it makes it much easier to plan!
I’m not sure I understand what these questions are looking for well enough to answer them.
Firstly, I don’t think “the movement” is centralized enough to explicitly acknowledge things as a whole—that may be a bad expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I would agree that there likely should be more.
Secondly, it definitely seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to “how do we reduce AI risk?” was “I don’t know, I guess we should urgently figure that out”, and now there’s been an explosion of analysis, threat modeling, and policy ideas—for example, Luke’s 12 tentative ideas were basically all created within the past two years. On top of that, a lot of EAs were involved in the development of Responsible Scaling Policies, which is now the predominant risk management framework for AI. And there’s way more too.
Unfortunately I can mainly only speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at just Rethink Priorities, welfare ranges, CRAFT, and CURVE were all done within the past two years. Additionally, the Rethink Priorities model estimating the value of research influencing funders flew under the EA radar IMO, but it has actually led to very significant internal shifts in Rethink Priorities’s thinking on which funders to work for and why.
I also think a lot of the genesis of the current focus on lead was in 2021, but significant work on pushing this forward happened in the 2022-2024 window.
As for new effective organizations, a bit of this depends on your opinions about what is “effective” and to what extent new organizations are “EA”, but there are many new initiatives around, especially in the AI space.
It’s very difficult to overstate how much EA has changed over the past two years.
For context, two years ago was 2022 July 30. That was 17 days before the “What We Owe the Future” book launch. It was also about three months before the FTX fraud was discovered (though at that time it was already massively underway in secret) and the ensuing bankruptcy. We were still at the height of the Big Money Big Longtermism era.
It was also about eight months before the FLI Pause Letter, which I think coincided with roughly when the US and UK governments took very serious and intense interest in AI risk.
I think these two events were really key changes for the EA movement and led to a huge vibe shift. “Longtermism” feels very antiquated now, abandoned in the name of “holy crap, we have to deal with AI risk occurring within the next ten years”. Big Money is out, but we still have a lot of money, and it feels more responsible and somewhat more sustainable now. There are no longer regrantors running around everywhere, for better and for worse.
Many of the people previously working on longtermism have pivoted to “pandemics and AI” and many of the people previously working on pandemic risk have pivoted to “AI x bio intersections”. WWOTF captures the current mid-2024 vibe of EA much less than Leopold’s “Situational Awareness”.
There also has been a massive pivot towards mainstream engagement. Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being “EA-adjacent”. These people now take meetings in DC and engage in the mainstream policy process (whereas previously “politics was the mindkiller”). Many AI policy orgs have popped up or become more prominent as a result. Even MIRI, which had just announced “Death with Dignity” only about three months prior to that date of 2022 July 30, has now given up on giving up and pivoted to policy work. DC is a much bigger EA hub than it was two years ago, but the people working in DC certainly wouldn’t refer to it as that.
The vibe shift towards AI has also continued to cannibalize the rest of EA, for better and for worse. This trend was already in full swing in 2022 but became much more prominent over 2023-2024. There’s a lot less money available for global health and animal welfare work than before, especially if you worked on weirder stuff like shrimp. Shrimp welfare kinda peaked in 2022, and the past two years have unfortunately not been kind to shrimp.
I agree with all this advice. I also want to emphasize that I think researchers ought to spend more time talking to people relevant to their work.
Once you’ve identified your target audience, spend a bunch of time talking to them at the beginning, middle, and end of the project. At the beginning, learn and take into account their constraints; in the middle, refine your ideas; and at the end, actually try to get your research into action.
I think it’s not crazy to spend half of your time on a research project talking to people.
That’s fair—you’re right to make this distinction where I failed, and I’m sorry. I think I have a good point, but I got heated in describing it and strayed further from charitableness than I should have. I regret that.
Thanks Linch. I appreciate the chance to step back here. So I want to apologize to @Austin and @Rachel Weinberg and @Saul Munn if I stressed them out with my comments. (Tagging means they’ll see it, right?)
I want to be very clear that while I disagree with some of the choices made, I have absolutely no ill will towards them or any other Manifest organizer. I very much want Manifold and Manifest to succeed, and I very much respect their right to run their conference the way they want. If I see any of them I will be very warm and friendly, and there’s really no need for me to talk about this further if they don’t want to. I hope we can be friends and engage productively in other areas—even if I don’t attend Manifest or trade on Manifold, I’d be happy to interact with them in other ways that don’t involve Hanania.
While I dislike Hanania’s ideas greatly, and I still think inviting Hanania was a mistake, and I still will not attend events or participate in places where Hanania is given a platform… I don’t want to practice guilt by association for those who do not hold Hanania’s detestable ideas. Just because someone interacted with him does not make them a bad person too. I apologize for not being clear about this from the beginning, and I regret that I may have led people to think otherwise.
BTW I want to add—to all those who champion Hanania because they think free speech should mean that anyone can be platformed without criticism or condemnation, Hanania is no ally to those principles:
Here’s Hanania:
I don’t feel particularly oppressed by leftists. They give me a lot more free speech than I would give them if the tables were turned. If I owned Twitter, I wouldn’t let feminists, trans activists, or socialists post. Why should I? They’re wrong about everything and bad for society. Twitter [pre-Musk] is a company that is overwhelmingly liberal, and I’m actually impressed they let me get away with the things I’ve been saying for this long.
Yeah, because there’s such a geographically clustered dichotomy in views between the London set and the SF set, it seems pretty important to me to give it 24 hours.
Also, a general caution: this poll will mainly be seen by only the most active and most engaged people, who may not be representative enough to generalize from.
I think the diurnal effect is real and is based on there being a lot of people in both the UK and the SF Bay Area who have opposite, geographically correlated views on this topic.
It’s pretty interesting that Hanania just happens to have these kinds of accidents so frequently, right?
To be clear, I haven’t cut ties with anyone other than Manifold (and Hanania). Manifold is a very voluntary use of my non-professional time and I found the community to be exhausting. I have a right to decline to participate there, just as much as you have a right to participate there. There’s nothing controlling about this.
The precise quote for others to assess is “Daniel Penny getting charged. These people are animals, whether they’re harassing people in subways or walking around in suits.”
I was not at Manifest. And I’d like to be very clear that I totally respect Manifest’s right to host Hanania and make him a speaker.
I disagree with the decision and I would never do such a thing if I were King of Manifest, but I’m not King of Manifest and I am not trying to control anything about it. Notably, Manifest came and went, Hanania was there just fine and nothing happened, and all I did was exercise my right to not go and to complain about it to some friends. At no point did I ever do anything to attempt to cancel Manifest.
But since people took the conversation here to the EA Forum, which I like, and are trying to tell people that Hanania is fine actually, I’m now also going to complain about it here on my Forum.
I don’t think there’s any equivalence between anything I have ever said and the most vile things that Hanania / Chau / Yarvin have said. I don’t think it’s a matter of finding quotes and misinterpreting them; they’re pretty blatant. I’m quite confident you could audit my entire writing history and I’d stand by that.
And people don’t have a right to a platform near me. It’s not like they’re losing their job. Or even their blog or their book deal or their platform somewhere else. I just don’t want them to be near me.
~
You would just consider them to have been rightly deplatformed for being racist, whereas I would consider them to have been silenced over things on which reasonable people can disagree.
I’m curious—is there anything, for you, about which reasonable people couldn’t disagree? Anything someone could say that would make them worth deplatforming, in your mind?
I disagree.
Firstly, you’re totally welcome to read, listen to, or say what you want. I have never aimed to harm anyone through “cancel culture”, I have never called for anyone to lose their job, etc. My point is simple: if your thing involves calling black people animals, I don’t want that to happen anywhere near me. I’m not trying to control you; I’m trying to control my own surroundings. I think nearly all communities are better with some degree of moderation. But maybe you disagree. I’m fine for you to go your own way.
I’ve personally left Manifold over this after being a daily active user and putting a few thousand real dollars on the site. I’m fine to learn that Manifold is not for me. It’s sad, but I’ll move on. It would be really sad to learn that EA or the EA Forum is not for me. But I think we can exercise some degree of control as a community here about what we are and are not okay with. That’s a very normal thing for communities to do.
I basically endorse this post, as well as the use of the tools created by Rethink Priorities that collectively point to quite strong but not overwhelming confidence in the marginal value of farmed animal welfare.