The r/SneerClub response is interesting
https://www.reddit.com/r/SneerClub/comments/10ent65/saw_this_and_thought_of_you_guys/
A.C.Skraeling
Interestingly, a huge proportion of EA’s intellectual infrastructure can be traced back to the academic climate of the USA during the Cold War, where left-wing thinkers were eradicated from (analytic) philosophy by McCarthyist purges, Robert McNamara pushed for “rationalisation” and quantification throughout the US establishment, and the RAND Corporation developed concepts like Rational Choice Theory, Operations Research, and Game Theory. Indeed, the current President and CEO of RAND, Jason Matheny, is a CSET founder and former FHI researcher. Aside from the Silicon Valley influences (from which we get the blogposts, Californian Ideology, and most of the technofetishism), EA’s intellectual heritage is largely one of philosophy and economics intentionally stripped of their ability to challenge the status quo. [emphasis mine]
Yes!
The Effective Altruism movement is, in origin and at present, on a mission to benefit the powerless using the tools of the powerful; an injection of genuine compassion into the machinery of Capitalist Modernity.
It thus has precisely the advantages and limitations that you would expect. It is a truly impressive engine for the operationalisation of altruism, distributing malaria nets and deworming drugs with astonishing efficiency and effectiveness. Yet, it cannot conceive of solving problems rather than treating their symptoms, acts with the self-assured entitlement of a colonial administrator, and can never quite escape the stony gaze of the techno-modernist Leviathan.
At least it matches our Basilisk.
The campaign team flew EA community organisers from across the world to knock on doors, and ended up paying over a thousand dollars per vote. This happened in the USA, which has a political system tailored to facilitate the purchasing of elections. It was bad.
In my experience, almost anything that significantly conflicts with the TUA.
How would you prefer people to react when someone acts in bad faith?
What aspects of this comment fall outside those bounds?
That’s good to know, but I wonder how much change you personally can make. It’ll be significant, for sure, but I think a lot of this is cultural: a sort of EA-accelerated chunk of the class-coded aspects of the Hidden Curriculum.
John, with all possible respect, that is not a theoretical framework.
I think one of your major errors in this piece (as betrayed by your methodology-as-categorisation comment above), is that you have an implicit ontology of factors as essentially separate phenomena that can perhaps have a few, likely simple relationships, which is simply not how the Earth-system or social systems work.
Thus, you think that if you’ve written a few paragraphs on each thing you deem relevant (chosen informally, liberally sprinkled with assertions, assumptions, and self-citations), you’ve covered everything.
It’s all very Cartesian.
Doubtful if you look at Gideon’s first comment and remember it was downvoted through the floor almost immediately.
Questioning orthodoxy is ok within some bounds (often technical/narrow disagreements), or when expressed in suitable terms, e.g.:
- (Significant) underconfidence, regardless of one's own expertise or the lack of expertise among those criticised
- Unreasonable assumptions of good faith, even in the face of hostility or malpractice (double standards, perhaps a lesser form of the expectation of a 'perfect victim')
- Extensive use of EA buzzwords
- Huge amounts of extra work/detail that would not be deemed necessary for non-critical writing
- Essentially making oneself as small as possible so as not to set off the Bad Tone hair-trigger
This is difficult, because knowing what you are talking about, and being lazily dismissed by people you know for a fact know far less than you about the subject, makes one somewhat frustrated.
As several EAs have noted, e.g. weeatquince, this is time-consuming and (emotionally) exhausting, and often results in dismissal anyway.
This is even harder to pull off when questioning sensitive issues like politics, funding ethics, foundational intellectual issues (e.g. the ways in which the TUA uses utterly unsuitable tools for its subject matter due to a lack of outside reading), competence of prominent figures, etc.
I actually think this forms a sort of positive feedback loop, where EAs become increasingly orthodox (and confident in that orthodoxy) due to perceived lack of substantive critiques, which makes making those critiques so frustrating, time-consuming, and low-impact that people just don’t bother. I’ve certainly done it.
To me it seems more like EA’s STEMlord-ism and roots in management consultancy, and its consequent maximiser-culture, rejection of democracy, and heavy preference for the ‘exploit’ end of the explore-exploit tradeoff.
“Number go bigger” etc. with a far lower value placed on critical reason, i.e. what the number actually is.
Orthodoxy is very efficient, you just end up pointed in the wrong direction.
Indeed, knowing what I know of some of the reviewers Halstead named, I am very curious to see what the review process was, what their comments were, and whether they recommended publishing the report as-is.
I’ve always been quite confused about attitudes to scholarly rigour in this community: if the decisions we’re making are so important, shouldn’t we have really robust ways of making sure they’re right?
I think we also need renewed discussion of how the karma system contributes to groupthink and hierarchy, things that, to put it gently, EA sometimes struggles with.
As far as I can tell, the system gives far more voting power to highly-rated users, allowing a few highly active (and thus most likely highly orthodox) forum users to unilaterally boost or tank any given post.
This is especially bad when you consider that low-karma comments are hidden, allowing prominent figures (often with high karma scores) to soft-censor their own critics.
This is especially worrying given the groupthink that emerges on internet fora, where a comment having a score of −5 makes it much more likely for people to downvote it further on reflex, and vice versa.
I am not going to go into details here beyond saying that this is the plot of the MeowMeowBeenz episode of Community.
MeowMeowBeenz does not contribute to good epistemics.
What you say is true, but is not a response to what I said.
I didn’t say Halstead was a terrible person: there is a difference between disapproving of actions and damning persons. In any case, leaving open a significant possibility space for poor intent is not in any way close to ‘assuming poor intent’, and if someone reads it as such, they are wrong.
The comment was not meant to be neutral, but again, disapproving of an action is not the same as assuming poor intent, never mind calling the actor a ‘terrible person’.
I’m starting to see the ways in which tone-policing is selectively employed in this community (well, Forum, at least) to shut down criticism.
I don’t think many of the people who do it are conscious of what they’re doing, but there does seem to be an assumption that strong criticism (i.e. what is necessary if something is very wrong or someone has acted badly) is by default aggressive and thus in violation of group norms.
Thus, all criticism must be stated in the most faux-friendly, milquetoast way possible, imposing significant effort demands on the critic and allowing them to be disregarded if they ever slip up and actually straightforwardly say that a bad thing is bad.
Naturally this is far more likely to be applied when the criticism is directed at big or semi-big figures in the community or orthodox viewpoints.
And we wonder why EA is so terminally upper-middle class...
I may be missing something here, but how is ‘either you have acted in way x or way y’ “quite close” to assuming ‘x’?
The sentence was constructed to deliberately hold open both possibilities (i.e. aware or not), and you have cut off the quote before the latter of the possibilities was spelt out.
“‘Either this animal is a cat or a dog’ is quite close to assuming that the animal is a cat.”
Strong upvote, I thought I was going crazy. Thank you!
Yes and no in my opinion haha but I see your point
META: This + additional comments below from Halstead are strongly suggestive of bad-faith engagement: lazy dismissal without substantive engagement, repeated strawman-ing, Never Play Defense-ing, and accusing his critics of secretly being sockpuppet accounts of known heretics so their views can be ignored.
On the basis of Brandolini’s Law I am going to try to keep my replies as short as I can. If they seem insubstantial, it is likely because I have already responded to the point under discussion elsewhere, or because they are responding to attempts to move the conversation away from the original points of criticism.
I have specific criticisms to make, and I would like to see them addressed rather than ignored, dismissed, or answered only on the condition that I make a whole new set of criticisms for Halstead to also not engage with.
I suggest the reader read Halstead’s response before going back to Gideon’s and my comments. It was useful for me.
Climatic tipping points, cascading risks, and systemic risk are different things and you (hopefully) know it.
If you have refuted arguments made by relevant papers, why didn’t you cite them?
I’m not sure I understand your argument here: you are not under any obligation to discuss opposing perspectives on climate risk because a different paper on climate risk did not explicitly refute an argument that you would go on to make in the future?
In any case, this is not at all what Gideon or I said. Your lazy and factually inaccurate dismissal of complex systems theory remains lazy and factually inaccurate. I am not sure where this point about Richards et al. comes into it.
You would only have needed to evaluate biorisk if the sole possible use of a methodology were ensuring a comprehensive categorisation system, which (as I hope you know) is not at all true.
I don’t really see how I have not made any arguments here. I suppose I could ask someone if it’s possible to write ‘Please cite your sources.’ in first-order logic.
I would like to hear your justification for the claim that Beard et al., Richards et al., and Kemp et al. all lean heavily on the idea of planetary boundaries, and for how, if this were true, it would be relevant.
However, I doubt this would go anywhere. I suspect this is simply yet another way of ignoring people who disagree with you without thinking too hard, and relying on the combination of your name-recognition and the average EA’s ignorance of climate change to buy you the ‘Seems like he knows what he’s talking about!’-ness you want.
That is a rather odd assumption to make given that two of the issues under discussion were X-risk methodology and EA discourse norms in response to criticism.
Also I think it’s worth noting that you have once again ignored most of the criticism presented and moved to the safer rhetorical ground of vague insinuations about people you don’t like.
‘Never Play Defense’, anyone?
For those who haven’t read the full comments section, Halstead has decided that I am Carla Zoe Cremer.
Democratising Risk is a preprint, no?
Democratising Risk is not primarily about climate change, but it is about X-risk methodology. You have written a piece about X-risk. Scholarly works generally require a methodology section, and scholars are expected to justify their methodology, especially when it is a controversial one. This is advice I would give to any undergraduate I supervised.
It is true that Kemp et al. 2022 has not been published for long, so you can be excused for not discussing it at length. It seems odd not to have mentioned it at all, though: two weeks is not a huge amount of time, but it is enough to at least mention by far the most prominent piece of climate GCR work to date.
If you discuss Beard’s and Richards’ points, why don’t you cite them? In any case, justification for the lack of substantive engagement seems like something you need to offer, rather than me.
In any case, the lack of mention of most climate-specific GCR work is not the only thing you have been criticised for: please scroll up to see Gideon’s original comment if you like.
Jehn et al. do not say that the climate science literature ignores warming of more than 3°C; they say that it is heavily under-represented.
Again, please stop lazily mischaracterising the views of your critics.
I don’t know if this repeated strawman-ing is accidental or not: if accidental, please improve your epistemics; if not, please try to engage in good faith.
This is an interesting point of view, one you should have mentioned and justified, as any student would be expected to in an essay, rather than simply pretending that criticisms do not exist.
What are you even talking about?
I am not Cremer and it seems like an odd act of ego-defence to assume that there is only one person that could disagree with you.
I have no idea what you mean about Phil Torres: he clearly needs to take a chill pill but ‘harassment’ seems strong. Perhaps I’ve missed something. ‘Frustrated his career aims’?
In any case, Torres wasn’t a co-author of Democratising Risk, though I agree that he would probably agree with a lot of it.
Even if all of your implicit points were true, why on Earth would co-authorship with someone who had defamed you be grounds to offer reams of contradictory critiques of critical works while making none of the same critiques of comparable [EA Forum comments, but whatever] written pieces that do not substantially disagree with the canon?
Hi Holden, thanks for writing this up, but would it be possible for you to say something with a little bit more substance? At present it seems rather perfunctory and potentially a little insulting.
I’ve attempted to translate the comment above into a series of plain-English bottom-lines.
I apologise if the tone is a little forthright: a trade-off with clarity and intellectual honesty.
On anonymity
“Yeah I can see why there might seem to be a problem, and I promise that I am truly very sorry that you’re facing its consequences. In any case, I promise that everything is actually completely fine and you don’t need to worry! I acknowledge that (as you have already said) my promises don’t count for much here, but… trust me anyway! No, I will not take any notice of the specific issues you describe, nor the specific solutions that you propose.”
On conflicts of interest
“Here I will briefly describe some of the original causes of the problem. I personally think that it’s no big deal, and will not engage with any of the arguments or examples you provide. I promise we’re taking it really seriously, though.”
On focusing on a couple of existential risks (which is a gross simplification of the section I presume you’re responding to?)
“I personally think everything is fine, no I will not engage with any of the arguments or examples you provide.”
On being in line with the interests of billionaires
“I understand your concerns, but most of our tech billionaire donors changed their minds to fit the techno-political culture of Silicon Valley rather than starting off that way, and thus all incentive structures and cultural factors are completely irrelevant.”
On centralization of funding
“I perfunctorily agree that there is a problem, but I’m having trouble operationalizing the operational proposals you made. I will provide no specifics. I think membership-demarcation may be a problem, and will ignore your proposals for solving it.”
“By the way, would you mind doing even more unpaid work to flesh out specific mechanistic proposals, even though I, the person with the power to implement such proposals, just completely ignored them all in the sections I responded to?”
Despite my pre-existing intellectual respect for you, Holden, I really can’t escape reading this as a somewhat-more-socially-competent version of Buck’s response:
“We bosses know what we’re doing, you’re welcome to disagree if you want, but if you want to be listened to you need to do a bunch of unpaid work that we will probably completely ignore, and we most likely won’t listen to you at all anyway.”
This is what power does to your brain: you are only able to countenance posting empty EA-ified PR-speak like this because you are accountable only to a few personal friends that basically agree with you, and can thus get away with more or less ignoring external inputs.
Writing like this really reminds me of the bit about Interpretive Labour in Dead Zones of the Imagination:
Overwhelmingly one-sided social arrangements breed stupidity: by being in a position where you’re powerful enough to ignore people with other points of view, you become extremely bad at understanding them.
Thus, the oblivious bosses (egged on by mixed teams of true sycophants and power/money-seeking yes-men) continue doing whatever they want to do, and an invisible army of exhausted, exasperated, and powerless subordinates scramble to semi-successfully translate the whims of the bosses into bureaucratic justifications for doing the things that actually need to be done.
The bosses can always cook up some justification for why them being in charge is always the best way forward, and either never hear critiques because critics fear for their careers or, as seen here, lazily dismiss them without consequence.
Speaking as someone with a little experience in similar organisations and movements to this one that slowly lost their principles as they calcified into self-serving bureaucracies:
This is what it looks like.