Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups.
BSc. Biomedical Engineering > Community building gap year on Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.
Jelle Donders
If OP disagrees, they should practice reasoning transparency by clarifying their views
OP believes in reasoning transparency, but their reasoning has not been transparent
Regardless of what Open Phil ends up doing, I'd really appreciate it if they at least did this :)
Regardless of whether there is an economic argument to be made for this decision, as Nathan Young and others are implying, clearly communicating and justifying large expenses seems like a worthwhile endeavor for the sake of transparency alone. If people are finding out about “EA buying a castle” from Émile Torres or the New Yorker (EDIT: and we can’t point to any kind of public statement or justification), then we’re probably doing something wrong.
It will take a while to break all of this down, but in the meantime, thank you so much for posting this. This level of introspection is much appreciated.
Good point, I didn’t make clear what I meant with the last sentence. Would this rephrasing make sense to you?
If people are finding out about “EA buying a castle” from Émile Torres or the New Yorker and we can’t point to any kind of public statement or justification, then we’re probably doing something wrong
I also agree the content of some of these criticisms wouldn’t change even if there were a public post, but I don’t think the same applies to people’s responses to it. If a reasonable person stumbles across Torres or the New Yorker criticizing EA for buying a castle, they would probably be a lot more forgiving towards EA if they could be pointed to a page on CEA’s website that explains the decision, written before any of these criticisms, as opposed to finding a complete lack of records or acknowledgements on (C)EA’s side.
In general, taking reasoning transparency more seriously seems like low-hanging fruit for making the communication from EA orgs to both the movement and the public at large more robust, though I might be missing something, in which case I’d love it if someone could point it out to me.
A clear-thinking EA should strongly oppose “ends justify the means” reasoning.
This has indeed always been the case, but I’m glad it is now pointed out so explicitly. The overgeneralization from “FTX/SBF did unethical stuff” to “EA people think the ends always justify the means” is very easy to make for people who are less familiar with EA. Perhaps even SBF fell for this kind of reasoning, though his motivations remain speculation for now.
It would probably be for the better to make the faulty nature of “ends justify the means” reasoning (or the distinction between naive and prudent utilitarianism) a core EA cultural norm that people can’t miss.
How decision making actually works in EA has always been one big question mark to me, so thanks for the transparency!
One thing I still wonder: how do big donors like Moskovitz and Tuna, and what they want, factor into all this?
Somewhat sceptical of this, mainly because of the first 2 counterarguments mentioned:
In my view, a surprisingly large fraction of people now doing valuable x-risk work originally came in from EA (though also a lot of people have come in via the rationality community), compared to how many I would have expected, even given the historical strong emphasis on EA recruiting.
We’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important.
Focusing on the underlying search for what is most impactful seems a lot more robust than focusing on the main opportunity this search currently nets. An EA/longtermist is likely to take x-risk seriously as long as it is indeed a top priority, but you can’t flip this. The ability of people working on the world’s most pressing problems to update on what is most impactful to work on (arguably the core of what makes EA ‘work’) would decline without any impact-driven meta framework.
An “x-risk first” frame could quickly become more culty/dogmatic and less epistemically rigorous, especially if it’s paired with a lower-resolution understanding of the arguments and assumptions for taking x-risk reduction (especially) seriously, less comparison with and dialogue between different cause areas, and less of a drive to keep your eyes and ears open for impactful opportunities outside of the thing you’re currently working on, all of which seems hard to avoid.
It definitely makes sense to give x-risk reduction a prominent place in EA/longtermist outreach, and I think it’s important to emphasize that you don’t need to “buy into EA” to take a cause area seriously and contribute to it. We should probably also build more bridges to communities that form natural allies. But I think this can (and should) be done while maintaining strong reasoning transparency about what we actually care about and how x-risk reduction fits in our chain of reasoning. A fundamental shift in framing seems quite rash.
EDIT: More broadly, I think we should be running lots of experiments (communicating a wide range of messages in a wide range of styles) to increase our “surface area”.
Agreed that more experimentation would be welcome though!
The Human Future (x-risk and longtermism-themed video by melodysheep)
Wouldn’t this run the risk of worsening the lack of intellectual diversity and epistemic health that the post mentions? The growing divide between long/neartermism might have led to tensions, but I’m happy that at least there are still conferences, groups, and meet-ups where these different people are talking to each other!
There might be an important trade-off here, and it’s not clear to me what direction makes more sense.
Wishing much strength to everyone affected by this. Let’s support each other and get through this together.
Here’s the EAG London talk that Toby gave on this topic (maybe link it in the post?).
EA Documentary
I continue to be surprised by how little talk there is about creating some kind of EA documentary. Making a well-produced, easily accessible 1-2 hour visual introduction to EA that is optimized to get people up to speed with EA ideas and motivated to contribute seems like a very worthwhile thing to do.

Additionally, it is so easy for people to get a warped impression of EA when first hearing about it. I can’t even blame them, given how EA encompasses so many interconnected and complementary ideas and frameworks for looking at the world. You need quite a lengthy introduction to EA for it to fully make sense and be optimally convincing. Sending people a bunch of links that introduce (standalone) EA ideas in text can fail to do this. Making a documentary ourselves that serves as the perfect holistic introduction to what EA is, why it matters, and how people can contribute could fix this.
Finally, I don’t know anything about this, but shouldn’t it in theory be possible to just give Netflix and every other streaming service under the sun free rights to put this on their platforms? If it’s well-produced, I’d imagine these services would be quite eager to expand their libraries for free.
I wouldn’t be surprised if there are solid reasons why there’s next to no talk about this, so feel free to let me know what I’m missing here.
Besides Will himself, congrats to the people who coordinated the media campaign around this book! On top of the many articles, such as the ones in Time, the New Yorker, and the New York Times, a ridiculous number of YouTube channels that I follow have uploaded a WWOTF-related video recently.
The bottleneck for longtermism becoming mainstream seems to be conveying these inherently unintuitive ideas in an intuitive and high-fidelity way. From the first half I’ve read so far, I think this book can help a lot in alleviating this bottleneck. Excited for more people to become familiar with these ideas and get in touch with EA! I think we community builders are going to be busy for a while.
FHI almost single-handedly made so many obscure yet important research topics salient. To everyone who contributed over the years, thank you!
Sounds good overall. 1% each for priorities, community building, and giving seems pretty low. 1.75% for mental health might also be on the low side, as there appears to be quite a bit of interest in global mental health in NL. I think the focus on entrepreneurship is great!
The board must have thought things through in detail before pulling the trigger, so I’m still putting some credence on there being good reasons for their move and the subsequent radio silence, which might involve crucial info they have and we don’t.
If not, all of this indeed seems like a very questionable move.
I’ve shared very similar concerns for a while. The risk of successful narrow EA endeavors that lack transparency backfiring in this manner feels very predictable to me, but many seem to disagree.
What do the recent developments mean for AI safety career paths? I’m in the process of shifting my career plans toward ‘trying to robustly set myself up for meaningfully contributing to making transformative AI go well’ (whatever that means), but everything is developing so rapidly now and I’m not sure in what direction to update my plans, let alone develop a solid inside view on what the AI(S) ecosystem will look like and what kind of skillset and experience will be most needed several years down the line.
I’m mainly looking into governance and field building (which I’m already involved in) over technical alignment research, though I want to ask this question in a more general sense since I’m guessing it would be helpful for others as well.
And now even Kurzgesagt, albeit indirectly!
If you and other core org EAs have thoroughly considered many of the issues the post raises, why isn’t there more reasoning transparency on this? Besides being a good practice in general (especially when the topic is how the EA ecosystem fundamentally operates), it would make it a lot easier for the authors and others on the forum to deliver more constructive critiques that target cruxes.
As far as I know, the cruxes of core org EAs are nowhere to be found for many of the topics this post covers.