I think EAs should be more critical of their advocacy contingents, and that those involved in such efforts should hold themselves to a higher bar of offering thoughtful and considered takes.
Short slogans and emojis in profiles, such as those often used in ‘Pause AI’ advocacy, are IMO inadequate for the level of nuance that complex topics like these require. Falling short can burn the credibility and status of those involved in EA in the eyes of onlookers.
As someone who runs one of EA’s advocacy contingents, I think more criticism is probably a good idea (though I suspect I’ll find it personally unpleasant when applied to things I work on), but I’d suggest a few nuances:
EA is not unitary, and different EAs and EA factions will have different and at times opposing policy goals. For example, many of the people who work at OpenAI/Anthropic are EAs (or EA-adjacent), but many EAs think working at OpenAI/Anthropic accelerates AI in a harmful way (EAs also have differing views of the relative merits of those two firms).
Which views are considered EA can change the composition of who identifies as EA, EA-adjacent, unopposed, or EA-hostile. E.g. I would perceive Sam Altman as EA-adjacent, but the perception that EAs have been critical of OpenAI, along with other events, likely pushed him further away from EA than he would otherwise be; Elon Musk and Peter Thiel may be related examples.
Advocacy is inherently information-lossy, since it involves translating information from one context into a format that will be persuasive in some sort of politically useful way. Usually this involves simplification (because a popular or decision-maker audience has less bandwidth than an expert audience) and may also involve differentiation (since the message will probably tend to be optimized to fit something like the existing views of its audience). This is a hard challenge to manage.
One type of simplification I’ve noticed comes from an internal EA-organizing perspective: the experts/leaders at the center tend to have nuanced, reasonable views, but as those views are transmitted to organizers, and then from organizers to people newer to EA, they can harden into a dogma that is simplistic and rigid.
Two case studies of EA (or EA-adjacent) advocacy—monetary/macroeconomic policy and criminal justice reform—have had interestingly different trajectories. On monetary policy in the U.S., EA-funded groups tended to foreground technical policy understanding and, in my opinion, did a good job adjusting their recommendations as macroeconomic conditions changed (I’m thinking mainly of Employ America). The criminal justice reform movement (where I founded a volunteer advocacy organization, the Rikers Debate Project) has, in my opinion, been mostly unable to reorient its recommendations and thinking in response to changing conditions. The macroeconomic policy work had more of a technocratic theory of change than the more identity-oriented criminal justice reform efforts funded by EA, though there were elements of both technocracy and identitarianism in each field. (Rikers Debate, which was not funded by EA groups, has historically been more identitarian in focus.)
Can you provide a historical example of advocacy that you think reaches a high level of thoughtfulness and consideration?
I think much of the advocacy within EA is reasonably thoughtful and truth-seeking. Reasoning and uncertainties are often transparently communicated. Here are two examples based on my personal impressions:
advocacy around donating a fraction of one’s income to effective charities is generally focused on providing accounts of key facts and statistics, and often acknowledges its demandingness and its potential for personal and social downsides
wild animal suffering advocacy usually acknowledges the second-order effects of interventions on ecosystems, highlights the uncertainty around the extent of suffering, and often calls for more research rather than immediate intervention
By contrast, EA veganism advocacy has done a much poorer job of remaining truth-seeking, as Elizabeth has pointed out.
Thanks for your thoughtful reply, I appreciate it :)
I am still a bit confused, though. I’m struggling to see how the work of GWWC is similar to the Pause Movement, unless you’re saying there is a vocal contingent of EAs (who don’t work for GWWC) who publicly advocate (to non-EAs) for donating ≥ 10% of one’s income. I haven’t seen these people.
In short, I’m struggling to see how they’re analogous situations.
You asked for examples of advocacy done well with respect to truth-seekingness/providing well-considered takes, and I provided examples.
You seem annoyed, so I will leave the conversation here.
I’m a bit skeptical that all identitarian tactics should be avoided, insofar as that is what is being proposed here. It’s just too potent a tool: just about every social movement has propagated itself by these means, by plan or otherwise. Part of this is a “growth of the movement” debate; I’m inclined to think that more money and idea proliferation is needed.
I do think there are some reasonable constraints:
Identitarian tactics should be used self-consciously and cynically. It’s when we forget that we are acting that the worst of in-group/out-group dynamics presents itself. I do think we could use some more reminding of this.
I would agree that certain people should refrain from this. It’s fine if early-career people do it, but I’ll start being concerned if MacAskill loses his cool and starts posting “I AM AN EA💡” and roasting outgroups.