Amazing write-up, appreciate it a lot! Such clear and comprehensive communication! Of course, as a global health person it's a bit heartbreaking to see the very tiny amount of content related to that, especially when I would imagine more than 1⁄16th of attendees are likely to be bent that way.
But I suppose that’s the way the EA wind is blowing.
I wanted to aim high with cause diversity, as it seemed vital to convey the important norm that EA is a research question rather than a pile of ‘knowledge’ one is supposed to imbibe. I consider us to have failed to meet our ambitions as regards cause diversity, and would advise future organisers to move on this even earlier than you think you need to. It seems to me that an EAGx (aimed more towards less experienced people) should do more to showcase cause diversity than an EA Global.
From our internal content strategy doc:
Highest priority:
- AI risk
- Global health (can include mental health) and poverty
- Biosecurity

Second priority:
- Animals, especially alternative proteins
- Global priorities research

Aspiring to include:
- Nuclear war
- Epistemics and institutional decision-making
  - Encompassing rationality, forecasting, and IIDM
- Climate change
- Great power conflict
From the retrospective:
In the event, we had a preponderance of AI and meta-EA-related content, with 9 and 15 talks/workshops respectively; we had 4 talks or workshops on each of animals, global health & development, and biosecurity; and 6 on existential risks besides those focused on biosecurity and AI. (These numbers exclude meetups.) This was more lopsided than we had aimed for.
In the end, there are limits to what you can do to control the balance of the programme, as it depends on who responds. The most important tips are to start early and to keep actively tracking the balance. People within the EA movement and people who work on movement-building are more likely to respond.
Some data on response rates (showing basically that ‘meta-EA’ is the easiest to book):
| Cause area | Percent interested | Total invited |
|---|---:|---:|
| AI | 39.13% | 23 |
| Animals | 46.15% | 13 |
| GH&D | 42.86% | 14 |
| Meta | 65.38% | 26 |
| Other | 25.00% | 4 |
| All GCRs except AI | 47.83% | 23 |
| Biorisk | 33.33% | 15 |
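(If it's useful: multiplying each rate by the number invited backs out the implied whole-number headcounts of interested speakers. A quick sketch of that arithmetic, my own illustration rather than anything from our docs, with figures taken from the table above:)

```python
# Back out the implied number of interested invitees per cause area
# from the response-rate table above (rate * invited rounds to a whole number).
rates = {
    "AI": (0.3913, 23),
    "Animals": (0.4615, 13),
    "GH&D": (0.4286, 14),
    "Meta": (0.6538, 26),
    "Other": (0.2500, 4),
    "All GCRs except AI": (0.4783, 23),
    "Biorisk": (0.3333, 15),
}

for cause, (rate, invited) in rates.items():
    interested = round(rate * invited)
    print(f"{cause}: ~{interested} interested of {invited} invited")
```

This gives roughly 9 interested AI speakers against 17 for meta, which lines up with the programme skew described in the retrospective.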
What explains the high number of AI invites? From memory, I'd explain it this way: we had someone from the AI safety field working with us on content, and half-way through I asked him to specialise on AI content in particular. While my attention (as content lead and team lead) was split among causes and among non-content tasks, his was not, so overall we had more attention on AI. We then (over-?)compensated for a dearth of content not long before the conference by sending out a large batch of invites based on the lists we'd compiled, which were AI-heavy by that point. In effect, under quite severe time and capacity constraints, we chose to compromise on cause diversity for the sake of having abundant content.
Thanks so much for the comprehensive and honest reply. I’m actually encouraged that the low ratio wasn’t intentional but the result of other understandable content pressures.
Nice one.
Thanks, Nick.