Trying to square this circle, because I think these observations are pretty readily reconcilable. My vague second-hand recollections from speaking to people at the time are:
1. The programming had a moderate slant towards AI risk, because we got Elon.
2. The participants were generally very bullish on AI risk and other far-future causes.
3. The "Global poverty is a rounding error" crowd was a disproportionately-present minority.
Any one of these in isolation would likely have been fine, but the combination left some people feeling various shades of surprised/bait-and-switched/concerned/isolated/unhappy. I think the combination is consistent with both what Ben said and what Kerry said.
Further, (2) and (3) aren't surprising if you think about how the EAs drawn to San Francisco differ from EAs globally; SF is by some margin the largest AI hub, so committed EAs who care a lot about AI disproportionately end up living and working there.
Note that EAG Oxford, organised by the same team in the same month with the same private opinions, didn't have the same issues, or at least it didn't to the best of my knowledge as a participant who cared very little for AI risk at the time. I can't speak to EAG Melbourne, but I'd guess the same was true.
While (2) and (3) aren't really CEA's fault, there's a fair challenge as to whether CEA should have anticipated (2) and (3) given the geography, and therefore gone out of their way to avoid (1). I'm moderately sympathetic to this argument, but it's very easy to make this kind of point with hindsight; I don't know whether anyone foresaw it. Of course, we can try to avoid the mistake going forward regardless, but then again I didn't hear or read anyone complaining about this at EAG 2016, so maybe we already did?
I think EAG 2016 was more balanced. But I don't think the problem in 2015 was apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.
The problem is that to the extent that EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, that image doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations. This necessarily requires some amount of insincerity or disconnect between initial marketing and reality, and represents a substantial cost of that marketing strategy.
If this is basically saying "we should take care to emphasize that EAs have wide-ranging disagreements of both values and fact that lead them to prioritise a range of different cause areas", then I strongly agree. In the same vein, I think we should emphasize that people who self-identify as "EAs" represent a wide range of commitment levels.
One reason for this is that depending on which university or city someone is in, which meetup they turn up to, and who exactly they talk to, they'll see wildly different distributions of commitment and similarly differing representation of various cause areas.
With that said, I'm not totally sure that's the point you're making, because my personal experience in London is that we've been going out of our way to make the above points for a while; what's an example of marketing which you think works to maintain a homogeneous public image?
EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
If you click "Donate Effectively," you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I've said above is a good idea but a very large leap from the anti-Playpump pitch. "Trust friendly, sensible-seeming agents and empower them to do what they think is sensible" is a very, very different method than "check everything because it's easy to spend money on nice-sounding things of no value."
The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I've been advised that this is an old page pending an update). The GWWC Facebook page seems like it's mostly global poverty stuff, and some promotion of other CEA brands.
It's very plausible to me that in-person EA groups often don't have this problem, because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.
Thanks for digging up those examples.
I think "many methods of doing good fail" has wide applications outside of global poverty, but I acknowledge the wider point you're making.
This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?
This is a true dynamic, but to be specific about one of the examples I had in mind: a little before your post was written, I was helping someone craft a general "intro to EA" talk that they would give at a local event, and we both agreed, without even discussing it, to make the heterogeneous nature of the movement central to the mini speech. The discussion we did have was more about "which causes and which methods of doing good should we list given limited time" rather than "which cause/method would provide the most generically effective pitch".
We didn't want to do the latter for the reason I already gave: coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the "core" EAs in the room, that was a very real risk.
Yes! More clear descriptions of how people have changed their minds would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out, and then go back and make sure we're not still implicitly pitching that hope.
I found the post; I was struggling before because it's actually part of their career guide rather than a blog post.
Thanks! On a first read, this seems pretty clear and much more like the sort of thing I'd hope to see in introductory material.