Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups.
BSc. Biomedical Engineering > Community building gap year on Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.
Jelle Donders
We are using PISE’s intro fellowship for EA Utrecht right now, and as a facilitator I have no complaints so far! It seems likely to me that the shorter duration has indeed led to more sign-ups, though we’d have to add a question about this to the feedback form to test it.
Regarding making the fellowship compulsory for joining the group: to what extent do you recommend this to new groups, whose first priority is probably finding enough interested people to get the ball rolling? I imagine a solid chunk of the people showing up to EA Utrecht’s events right now would have left if the fellowship had been compulsory for membership.
They’re currently making the logo at (1644, 570)
The letters are being typed out next to the logo now with surprisingly little resistance! Holding on to it may be difficult, but we’ve already secured a spot on the timelapses that millions will see on YouTube this way (though they will have to look at the right time and in the right place, of course).
Not sure how effective it would be to try to get EA ideas more on TV in general, but I continue to be surprised by how little talk there is about creating some kind of EA documentary. Making a well-produced, easily accessible 1-2 hour visual introduction to EA that is optimized to get people up to speed with EA ideas and motivated to contribute seems like a very worthwhile thing to do (at least on the surface).
Finally, I don’t know anything about this, but shouldn’t it in theory be possible to just give Netflix and every other streaming service under the sun free rights to put this on their platforms? If it’s well-produced I’d imagine these services to be quite eager to expand their libraries for free.
EA Documentary
I continue to be surprised by how little talk there is about creating some kind of EA documentary. Making a well-produced, easily accessible 1-2 hour visual introduction to EA that is optimized to get people up to speed with EA ideas and motivated to contribute seems like a very worthwhile thing to do.

Additionally, it is so easy for people to get a warped impression of EA when first hearing about it. I can’t even blame them, given how EA encompasses so many interconnected and complementary ideas and frameworks for looking at the world. You need quite a lengthy introduction to EA for it to fully make sense and be optimally convincing. Sending people a bunch of links that introduce (standalone) EA ideas in text can fail to do this. Making a documentary ourselves that serves as the perfect holistic introduction to what EA is, why it matters and how people can contribute could fix this.
Finally, I don’t know anything about this, but shouldn’t it in theory be possible to just give Netflix and every other streaming service under the sun free rights to put this on their platforms? If it’s well-produced I’d imagine these services to be quite eager to expand their libraries for free.
Wouldn’t be surprised if there are solid reasons why there’s next to no talk about this, so feel free to let me know what I’m missing here.
Agreed, this appears to be the most neutral interpretation. Since the marginal value of increasing “EA coffers” depends on what EA as a whole spends its money on, it could function as a pretty useful metric for intuitively communicating value across cause areas imo.
A disadvantage might be that it’s not a very concrete metric, unlike something like the QALY. Additionally, someone needs to have a somewhat accurate understanding of what the funding distribution in EA looks like (and what the funding at the margin is being used on!) for this metric to make any sense.
Collaborations like these have so much potential! Because of this video alone, millions will learn about these ideas and many thousands will get inspired by them. Very glad Open Phil pursued this and hoping to see more efforts like this.
Very excited to see how this major nation-wide community building push will turn out! As someone who has gotten involved in setting up several of these new local groups, it has been really motivating and encouraging to have this support network of other organizers starting groups around the same time. If you run into a problem, there’s a good chance others are struggling with the same issues and you can collaborate on solutions! In my experience this lowers the barrier for getting involved as an organizer quite considerably, and I predict it will make it easier to find future organizers as well.
To add to this, I would like to emphasize the lack of reasoning transparency in the current estimates as one of our main concerns. This applies not just to the estimates of the value of additional high-impact EAs, but especially to those of the value of community building roles at (top) universities and of the degree to which university groups ‘create’ these high-impact EAs (which potentially becomes even more dubious when HEAs are used as a proxy metric for impact, for reasons similar to your first bullet point).
We originally had these estimates as the main red teaming topic in mind, but we soon figured out there wasn’t enough substance to turn this topic into a red team by itself, as the estimates mainly seemed to stem from guesstimates.
I think this post touches on some really important topics, so thanks a lot for writing it! To push back on some things:
Would you say the same applies to newer university groups? It seems likely to me that following through on the advice of this post would limit the number of people who hear about EA at your university. If you don’t already have a mature university group, a lack of growth means it may never become one, which would be a very steep opportunity cost.
Phrased differently, this post appears to come from the perspective of a mature group, where the impact bottleneck is the organizers and active members setting themselves up for impactful work and projects, as opposed to a less mature group, where the impact bottleneck is finding and reaching out to people who would be interested in EA. If you limit intro talks, fellowships and 1:1s, how much value can you really provide for people that aren’t already EAs?
Although the advice in this post could be very beneficial for groups in the second stage, it could possibly be harmful for the multitude of groups in the first stage.
Claim 1: Having the most promising people market EA is inefficient.
For newer groups, if the most engaged and promising people don’t market EA, it is likely no one will. To change that, you need to build up a group and find various organizers, which requires growth, which can be hard to achieve without marketing.
Claim 2: Too much marketing causes bad epistemics in the group.
Can’t say I’ve noticed this much, personally. From speaker events to fellowships and book clubs, marketing will generally point to an event or program where exploring ideas and skilling up are central (though admittedly not necessarily for the organizers).
Especially if the organizers doing the outreach don’t have a good understanding of the problems themselves, they might be perceived as unconvincing by the epistemically rigorous people they want to attract.
Doesn’t this also function as an argument for why the marketing should be done by the most engaged and promising people?
Claim 4: Leaders marketing EA too much causes bad perceptions of EA around campus.
I agree that the difficulty of accurately conveying what EA is and does through the brief moments of first impressions definitely poses a risk for reputation around campus. However, I feel this could also be used as an argument for spending more rather than less time thinking through how you signal and market EA.
Perhaps a good takeaway from all this is that marketing for university groups should ultimately be self-defeating rather than self-reinforcing: use marketing as a means of creating a solid core group of people interested in EA, after which it can take a backseat in the list of priorities and skilling up this core group becomes the focus.
Finally, I feel like a lot of this could be avoided by creating standardized pipelines for marketing EA and setting up the digital infrastructure (think of a website, LinkedIn, Instagram, Facebook, Slack, Discord, Circle, a mailing list, an announcement chat, a calendar for events, Calendly for 1:1s, a sharable QR code to a Linktree, and, perhaps most importantly, which of these you even need in the first place). This would both free up a lot of time for organizers and allow for fine-tuning the messaging to the extent that this is possible. Luckily it appears this is being worked on.
Self-criticism may be a necessary component of achieving the real goal of improving EA for the better. Yet it’s not sufficient. The extent of self-criticism is now excessive.
I completely agree with the first sentence, but am not sure about the second. If more were to be done to implement changes, would you still call the current level of self-criticism excessive? Generating a lot of ideas about how we could better steer the ship before selecting the best ones and actually pulling on the ship’s wheel would be crucial, no?
I think it gave some valuable commentary on the tensions between how people inside and outside the movement view EA. How we deal with these stark contrasts will only become more relevant as EA becomes more mainstream, and the What We Owe the Future release, with its ambitious media campaign, appears to be gearing up to make a serious push in this direction! Some excerpts related to these tensions:
Money, which no longer seemed an object, was increasingly being reinvested in the community itself. The math could work out: it was a canny investment to spend thousands of dollars to recruit the next Sam Bankman-Fried. But the logic of the exponential downstream had some kinship with a multilevel-marketing ploy. Similarly, if you assigned an arbitrarily high value to an E.A.’s hourly output, it was easy to justify luxuries such as laundry services for undergraduate groups, or, as one person put it to me, wincing, “retreats to teach people how to run retreats.” Josh Morrison, a kidney donor and the founder of a pandemic-response organization, commented on the forum, “The Ponzi-ishness of the whole thing doesn’t quite sit well.”
The community’s priorities were prone to capture by its funders. Cremer said, of Bankman-Fried, “Now everyone is in the Bahamas, and now all of a sudden we have to listen to three-hour podcasts with him, because he’s the one with all the money. He’s good at crypto so he must be good at public policy . . . what?!”
It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists.
Members of the mutinous cohort told me that the movement’s leaders were not to be taken at their word—that they would say anything in public to maximize impact. Some of the paranoia—rumor-mill references to secret Google docs and ruthless clandestine councils—seemed overstated, but there was a core cadre that exercised control over public messaging; its members debated, for example, how to formulate their position that climate change was probably not as important as runaway A.I. without sounding like denialists or jerks. … Was MacAskill’s gambit with me—the wild swimming in the frigid lake—merely a calculation that it was best to start things off with a showy abdication of the calculus?
And most eloquently:
From the outside, E.A. could look like a chipper doomsday cult intent on imposing its narrow vision on the world. From the inside, its adherents feel as though they are just trying to figure out how to allocate limited resources—a task that most charities and governments undertake with perhaps one thought too few.
Even if there are good arguments for why EA is doing what it is doing and heading in the direction it is, will these arguments be communicated with enough fidelity to make the process of EA becoming more mainstream go well? Although the author of this article is looking in from the outside, they did a lot of research and made a very conscious effort to produce a proper analysis, which probably makes it closer to a best-case than a worst-case scenario as far as impressions of EA go. Still very excited to see the reactions of more people becoming familiar with EA, just also a bit anxious.
Besides Will himself, congrats to the people who coordinated the media campaign around this book! On top of the many articles, such as the ones in Time, the New Yorker and the New York Times, a ridiculous number of YouTube channels that I follow recently uploaded a WWOTF-related video.
The bottleneck for longtermism becoming mainstream seems to be conveying these inherently unintuitive ideas in an intuitive and high-fidelity way. From the first half I’ve read so far, I think this book can help a lot in alleviating this bottleneck. Excited for more people to become familiar with these ideas and get in touch with EA! I think we community builders are going to be busy for a while.
There’s been a lot of community building happening in the Netherlands lately, very excited for this!
I think it’s valuable to have a low-barrier way for people to engage with EA without having years of experience and spending hours on writing a high-quality post. Do you have any ideas for how to avoid the trade-off between quality and accessibility?
Personally, I find the ‘frontpage’ and ‘curated’ filters to work pretty well. Regardless of the average quality of posts, as long as the absolute number of high-quality posts doesn’t decrease (and I see no reason why that would happen), filters like these should be able to keep a more curated experience intact, no?
I think reasoning transparency is a very important concept that I wouldn’t mind EA orgs adopting more, so I was surprised I couldn’t find anything about it except for Open Phil’s 2017 blog post. Thank you for this!
Edit: Effective Thesis is giving reasoning transparency workshops now!
How far and wide should people (and especially community builders) spread this and encourage others to fill it in? For example, I could ask people from my local group to fill in the survey, but I don’t want to skew the results.
Wishing much strength to everyone affected by this. Let’s support each other and get through this together.
Very glad you’re emphasizing that last question! I can easily see the narrative shift from ‘SBF/FTX did unethical stuff’ to ‘EA people think the ends always justify the means’, even though shallow utilitarian calculus that ignores all second-order effects rarely holds up (e.g. if doctors killing patients to harvest their organs and save more lives were normalized, it would lead to a paranoid dystopia where everyone fears hospitals; even the purest of utilitarians shouldn’t support this).
However, for someone less familiar with EA this overgeneralization is very easy to make, so I think we should be more explicit about refuting this type of reasoning.
Is there a comprehensive list of EA Slack workspaces somewhere? The couple I know of I’ve come across by accident. I suppose knowing what’s already out there is important for both potential participants and those considering setting up something new.