Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups.
BSc. Biomedical Engineering > Community building gap year on Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.
Jelle Donders
This post appears to be a duplicate.
How decision making actually works in EA has always been one big question mark to me, so thanks for the transparency!
One thing I still wonder: how do big donors like Moskovitz and Tuna, and what they want, factor into all this?
Somewhat sceptical of this, mainly because of the first two counterarguments mentioned:
In my view, a surprisingly large fraction of the people now doing valuable x-risk work originally came in from EA (though a lot of people have also come in via the rationality community), more than I would have expected even given the historically strong emphasis on EA recruiting.
We’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important.
Focusing on the underlying search for what is most impactful seems a lot more robust than focusing on the main opportunity this search currently nets. An EA/longtermist is likely to take x-risk seriously as long as it is indeed a top priority, but the reverse doesn't hold. Without an impact-driven meta framework, the ability of the people working on the world's most pressing problems to update on what is most impactful to work on (arguably the core of what makes EA 'work') would decline.
An “x-risk first” frame could quickly become more culty/dogmatic and less epistemically rigorous, especially if it's paired with a lower-resolution understanding of the arguments and assumptions for taking x-risk reduction seriously, less comparison with and dialogue between different cause areas, and less of a drive to keep your eyes and ears open for impactful opportunities outside of the thing you're currently working on, all of which seems hard to avoid.
It definitely makes sense to give x-risk reduction a prominent place in EA/longtermist outreach, and I think it’s important to emphasize that you don’t need to “buy into EA” to take a cause area seriously and contribute to it. We should probably also build more bridges to communities that form natural allies. But I think this can (and should) be done while maintaining strong reasoning transparency about what we actually care about and how x-risk reduction fits in our chain of reasoning. A fundamental shift in framing seems quite rash.
EDIT: More broadly, I think we should be running lots of experiments (communicating a wide range of messages in a wide range of styles) to increase our “surface area”.
Agreed that more experimentation would be welcome though!
Effective Altruism Social in Eindhoven
Effective Altruism Social in Eindhoven
I really want to create an environment in my EA groups that's high in what is labelled “psychological safety” here, but it's hard to make this felt by others, especially in larger groups. The best I've got is to just explicitly state the kind of environment I would like to create, but I feel like there's more I could do. Any suggestions?
Effective Altruism Social in Eindhoven
What do the recent developments mean for AI safety career paths? I'm in the process of shifting my career plans toward ‘trying to robustly set myself up for meaningfully contributing to making transformative AI go well’ (whatever that means), but everything is developing so rapidly now that I'm not sure in what direction to update my plans, let alone how to develop a solid inside view on what the AI(S) ecosystem will look like and what kind of skillset and experience will be most needed several years down the line.
I'm mainly looking into governance and field building (which I'm already involved in) rather than technical alignment research, though I want to ask this question in a more general sense since I'm guessing it would be helpful for others as well.
Effective Altruism Eindhoven—Quiz Night
Effective Altruism Social in Eindhoven
The Existential Risk Observatory aims to inform the public about existential risks and recently published this, so maybe consider getting in touch with them.
Here’s the EAG London talk that Toby gave on this topic (maybe link it in the post?).