Two years ago many attendees at the EA Global conference in the San Francisco Bay Area were surprised that the conference focused so heavily on AI risk, rather than the global poverty interventions they’d expected.
EA Global 2015 had one panel on AI (in the morning, on day 2) and one talk triplet on global poverty (in the afternoon, on day 2). Most of the content was not cause-specific.
People remember EA Global 2015 as having a lot of AI content because Elon Musk was on the AI panel, which made it loom very large in people’s minds. So, while it’s fair to say that more attention ended up on AI than on global poverty, it’s not fair to say that the content focused more on AI than on global poverty.
The featured event was the AI risk thing. My recollection is that there was nothing else scheduled at that time so everyone could go to it. That doesn’t mean there wasn’t lots of other content (there was), nor do I think centering AI risk was necessarily a bad thing, but I stand by my description.
We didn’t offer any alternative events during Elon’s panel because we (correctly) perceived that there wouldn’t be demand for going to a different event, and putting someone on stage with few people in the audience is not a good way to treat speakers.
We had to set up an overflow room for people who didn’t make it into the main room during the Elon panel, and even the overflow room was standing room only.
I think this is worth pointing out because of the preceding sentence:
However, EA leadership tends to privately focus on things like AI risk.
The implication is that we aimed to bias the conference towards AI risk and against global poverty because of some private preference for AI risk as a cause area.[1]
I think we can be fairly accused of aiming for Elon as an attendee and not some extremely well-known global poverty person.
However, with the exception of Bill Gates (who we tried to get), I don’t know of anyone in global poverty with anywhere close to the combination of a) general renown and b) reachability. So, I think trying to get Elon was probably the right call.
Given that Elon was attending, I don’t see what reasonable options we had for more evenly distributing attention between plausible causes. Elon casts a big shadow.
[1] Some readers contacted me to let me know that they found this sentence confusing. To clarify, I do have personal views on which causes are higher impact than others, but the program design of EA Global was not an attempt to steer EA on the basis of those views.
I’ll try to square this circle, because I think these observations are pretty readily reconcilable. My second-hand, vague recollections from speaking to people at the time are:
1. The programming had a moderate slant towards AI risk because we got Elon.
2. The participants were generally very bullish on AI risk and other far-future causes.
3. The ‘Global poverty is a rounding error’ crowd was a disproportionately-present minority.
Any one of these in isolation would likely have been fine, but the combination left some people feeling various shades of surprised/bait-and-switched/concerned/isolated/unhappy. I think the combination is consistent with both what Ben said and what Kerry said.
Further, (2) and (3) aren’t surprising if you think about how San Francisco EAs are drawn differently from EAs globally; SF is by some margin the largest AI hub, so committed EAs who care a lot about AI disproportionately end up living and working there.
Note that EAG Oxford, organised by the same team in the same month with the same private opinions, didn’t have the same issues, or at least it didn’t to the best of my knowledge as a participant who cared very little for AI risk at the time. I can’t speak to EAG Melbourne but I’d guess the same was true.
While (2) and (3) aren’t really CEA’s fault, there’s a fair challenge as to whether CEA should have anticipated (2) and (3) given the geography, and therefore gone out of their way to avoid (1). I’m moderately sympathetic to this argument but it’s very easy to make this kind of point with hindsight; I don’t know whether anyone foresaw it. Of course, we can try to avoid the mistake going forward regardless, but then again I didn’t hear or read anyone complaining about this at EAG 2016 in this way, so maybe we did?
I think 2016 EAG was more balanced. But I don’t think the problem in 2015 was the apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.
The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn’t match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations. This is necessarily going to require some amount of insincerity or disconnect between initial marketing and reality, and represents a substantial cost to that marketing strategy.
The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn’t match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations.
If this is basically saying ‘we should take care to emphasize that EAs have wide-ranging disagreements of both values and fact that lead them to prioritise a range of different cause areas’, then I strongly agree. In the same vein, I think we should emphasize that people who self-identify as ‘EAs’ represent a wide range of commitment levels.
One reason for this is that depending on which university or city someone is in, which meetup they turn up to, and who exactly they talk to, they’ll see wildly different distributions of commitment and similarly differing representation of various cause areas.
With that said, I’m not totally sure if that’s the point you’re making, because my personal experience in London is that we’ve been going out of our way to make the above points for a while; what’s an example of marketing which you think works to maintain a homogeneous public image?
EffectiveAltruism.org’s Introduction to Effective Altruism allocates most of its words to what’s effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
If you click “Donate Effectively,” you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I’ve said above is a good idea but a very large leap from the anti-Playpump pitch. “Trust friendly, sensible-seeming agents and empower them to do what they think is sensible” is a very, very different method than “check everything because it’s easy to spend money on nice-sounding things of no value.”
The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I’ve been advised that this is an old page pending an update). The GWWC Facebook page seems like it’s mostly global poverty stuff, and some promotion of other CEA brands.
It’s very plausible to me that in-person EA groups often don’t have this problem because individuals don’t feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.
Thanks for digging up those examples.
EffectiveAltruism.org’s Introduction to Effective Altruism allocates most of its words to what’s effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
I think ‘many methods of doing good fail’ has wide applications outside of Global Poverty, but I acknowledge the wider point you’re making.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can’t find) describing how their founders’ approaches to doing good have evolved and updated over the years. Is that something you’d like to see more of?
It’s very plausible to me that in-person EA groups often don’t have this problem because individuals don’t feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.
This is a true dynamic, but to be specific about one of the examples I had in mind: a little before your post was written, I was helping someone craft a general ‘intro to EA’ talk that they would give at a local event, and we both agreed, without even needing to discuss it, to make the heterogeneous nature of the movement central to the mini-speech. The discussion we had was more about ‘which causes and which methods of doing good should we list given limited time’, rather than ‘which cause/method would provide the most generically effective pitch’.
We didn’t want to do the latter for the reason I already gave: coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the ‘core’ EAs in the room, that was a very real risk.
There was a recent post by 80,000 Hours (which annoyingly I now can’t find) describing how their founders’ approaches to doing good have evolved and updated over the years. Is that something you’d like to see more of?
Yes! More clear descriptions of how people have changed their mind would be great. I think it’s especially important to be able to identify which things we’d hoped would go well but didn’t pan out—and then go back and make sure we’re not still implicitly pitching that hope.
I found the post; I was struggling before because it’s actually part of their career guide rather than a blog post.
Thanks! On a first read, this seems pretty clear and much more like the sort of thing I’d hope to see in introductory material.