I want to open by saying that there are many things about this post I appreciate, and accordingly I upvoted it despite disagreeing with many particulars. Things I appreciate include, but are not limited to:
-The detailed block-by-block approach to making the case for both cancel culture’s prevalence and its potential harm to the movement.
-An attempt to offer a concrete alternative pathway to CEA and local groups that face similar decisions in future.
-Many attempts throughout the post to imagine the viewpoint of someone who might disagree, and preempt the most obvious responses.
But there’s still a piece I think is missing. I don’t fault Larks for this directly, since the post is already very long and covers a lot of ground, but it’s the area I always find myself wanting to hear more about in these discussions, whether from Larks or from others in reply to this comment. It relates to both of these quotes.
Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.
Rules and standards are very important for organising any sort of society. However, when applied inconsistently they can be used as a weapon to attack unpopular people while letting popular people off the hook.
Given that this post is titled ‘advice for CEA and local groups’, reading this made me hope it would end with some suggested ‘rules and standards’ for who we do and do not invite to speak at local events, EAG, and so on. Where do we draw the line on ‘behaving immorally’? I strongly agree that whatever rules are being applied should be applied consistently, and I think this is most likely to happen when they are discussed and laid down in a transparent and pre-agreed fashion.
While I have personal views on the Munich case, which I have laid out elsewhere, I agree with Khorton below that little purpose is served by an ongoing prosecution-and-defence of Robin’s character or work. Moreover, my commitment to consistency and transparency is far stronger than my preference for any one set of rules over another. I also expect clear rules about what we will and won’t allow at various levels to naturally insulate against cancel culture. To the extent I agree that cancel culture is an increasing problem, the priority of laying down clear rules, and relying less on ad hoc judgements of individuals, has therefore risen, and will likely continue to rise.
So, what rules should we have? What are valid reasons to choose not to invite a speaker?
Meta note: I wouldn’t normally write a comment like this. I don’t seriously consider 99.99% of charities when making my donations; why single out one? I’m writing anyway because the comments so far are not engaging with my perspective, and I hope more detail will help 80,000 Hours themselves, and others, engage with it better if they wish to. As I note at the end, they may quite reasonably not wish to.
For background, I was one of the people interviewed for this report, and from 2014 to 2018 my wife and I were among 80,000 Hours’ largest donors. In recent years it has not made my shortlist of donation options. The report’s characterisation of them (spending a huge amount while not clearly being >0 on the margin) is fairly close to my own view, though clearly I was not the only person to express it. All views expressed below are my own.
I think it is very clear that 80,000 Hours has had a tremendous influence on the EA community. I cannot recall anyone stating otherwise, so references to things like the EA survey are not very relevant. But influence is not impact. I commonly hear two accounts of why this influence may not translate into positive impact:
-80,000 Hours prioritises AI well above other cause areas. As a result, they commonly push people off paths which are high-impact under other worldviews. So if you disagree with them about AI, you’re likely to read things like their case studies and come away unimpressed. You’re also likely to have friends who have left very promising career paths because they were told they would do even more good in AI safety. This is my own position.
-80,000 Hours is likely more responsible than any other single org for the many EA-influenced people now working on AI capabilities. Many of the people who consider AI the top priority view this negatively, and thus view the org negatively as a whole. This is not my own position, but I mention it because I think it helps explain why (some) people who strongly prioritise AI may decline to fund.
I suspect this unusual convergence may be why they got singled out: pretty much every meta org has funders sceptical of it for cause-prioritisation reasons, but here many of the sceptics are broadly aligned with the org on prioritisation.
Looping back to my own position, I would offer two ‘fake’ illustrative anecdotes:
Alice read Doing Good Better and was convinced of the merits of donating a moderate fraction of her income to effective charities. Later, she came across 80,000 Hours and was convinced by their argument that her career was far more important. However, she found herself unable to land any of the recommended positions. As a result she neither donates nor works in what they would consider a high-impact role; it’s as if neither interaction had ever occurred, except that perhaps she feels a bit down about her apparent uselessness.
Bob was having an impact in a cause many EAs consider a top priority. But he is epistemically modest, and inclined to defer to the apparent EA consensus, communicated via 80,000 Hours, that AI was more important. He switched careers and did find a role with solid, but worse, personal fit. The role is well-paid and engaging day-to-day, and Bob sees little reason to reconsider the trade-off, especially since ChatGPT seems to have vindicated 80,000 Hours’ prior belief that AI was going to be a big deal. But if pressed, he would readily acknowledge that it’s not clear how his work actually improves things. In line with his broad policy on epistemics, he points out that the EA leadership is very positive on his approach; who is he to disagree?
Alice and Bob have always been possible problems from my perspective. But in recent years I’ve met far more of them than I did when I was funding 80,000 Hours. My circles could certainly be skewed here, but in the absence of good data my approach is to base my own decisions on my own observations. If my circles are skewed, other people who are seeing very little of Alice and Bob can always choose to fund.
On that last note, I want to reiterate that I cannot think of a single org, meta or otherwise, that does not have its detractors. I suspect there may be some latent belief that an org as central as 80,000 Hours enjoys solid support across most EA funders. To the best of my knowledge this is not and has never been the case, for them or for anyone else. I do not think they should aim for that outcome, and I would encourage readers to update ~0 on learning as much.