Rob—I guess it’s valid and unobjectionable to say we should try to stay conscious of what norms we’re creating, following, and reinforcing, with a view towards how we shape the future culture of EA.
However, it sounds like you might be alluding to a somewhat more specific claim that certain good norms are currently at risk of being compromised or corrupted, and/or certain bad norms are at risk of spreading.
I wonder if you could spell this out a bit more? I’m not sure what your intended takeaway is, or in what direction you’re trying to nudge us (e.g. towards more ‘inclusive kindness’, or towards more ‘epistemic integrity’?).
From my perspective, there’s currently a sort of tug-of-war happening between three EA orientations (which already existed, though things like the Bostrom e-mail have inspired people to put them into words and properly attend to them):
- one that’s worried EA risks losing sight of principles like “honesty” and “open discourse” (and maybe “scientific inquiry and scholarship”);
- one that’s worried EA risks losing sight of principles like “basic human compassion” and “care and respect for everyone regardless of life-circumstance” (and potentially also “scientific inquiry and scholarship”, with a different model of what the science says);
- and a more pragmatic one: “regardless of all that stuff, the PR costs and benefits are clear, and those considerations dominate the utilitarian calculus here”.
Obviously these aren’t mutually exclusive, at least at that level of abstraction. You can simultaneously be worried about PR risk, about EAs’ epistemic integrity eroding, and about EAs’ interpersonal compassion eroding. But a lot of the current disagreement seems to pass through EAs putting different relative weightings on these three classes of consideration.
On the current margin, I want to encourage more discussion coming from the first or second perspective, relative to the third. And I want the third perspective to flesh out its models more, and consider a larger option and hypothesis space — in part just so it’s easier to tell whether EAs are drawing on any explicit model of the strategy space at all. (Versus arguments based more on mental motions like “feel socially unsafe → generate post hoc arguments for why the socially-safe-feeling local action is also the best consequentialist strategy”.)
Rob—this is an excellent clarification; thank you. I agree that there are these three orientations in some tension within EA, and that it’s important to be honest about some of these trade-offs, and their implications for the future social and intellectual and ethical norms in EA.
I guess I’m mostly in the first camp.
I’m also sympathetic to taking PR issues seriously—but I think some EA organizers have sometimes been rather inept and amateurish at handling them, and aren’t working with an accurate model of how PR crises flare up and die down, or of how much to say, and what to say, when the crises are at their peak.
IMHO, we need to realize that any sufficiently large, powerful, and well-funded movement (like ours) will reliably and frequently experience PR crises, some of which arise from deliberate PR attacks from individuals or organizations hostile to our mission.
That will be the new normal. We’ll have to learn to live with it, and to have the PR crisis response teams in place to deal with it. And we’ll need the emotional resilience to accept that not everyone will like, love, or respect what we do—and that’s OK.