You may think this, but (some) people on the Forum clearly do not.
A.C.Skraeling
This is good, though offering comments on various sections of a google doc is of course a very different exercise to full and blind peer-review.
Did any of the reviewers notice that you had not mentioned (almost?) any climate-related GCR papers? If so, what was your response to them?
As per your comments about complex systems above, please do not dismissively mischaracterise the views of your critics. This is the kind of thing an average forum user would get hammered for; please do not try to get away with it just because you know you can.
If you discuss their arguments, why didn’t you cite them? If the X-risk climate corpus takes an ‘extreme’ stance by and large, is that not the kind of thing you would expect to see discussed in a >400 page report on climate change X-risk?
Even to the extent that this report is within the IPCC mainstream, notwithstanding, for instance:
The complete absence of systems perspectives (even if only to justify your rejection of them, something that, to be frank, I would expect in an undergraduate dissertation)
Lack of consideration of vulnerability, exposure, or cascading disasters
Silent disregard for Reisinger et al.’s discussion of the concept of risk
...it is well-known that the IPCC must moderate its conclusions and focus on better-case scenarios for political reasons, i.e. so as to not be written off as alarmist. You know this, because it is mentioned in Climate Endgame and discussed at length by Jehn et al.
This is another rather important issue in climate risk scholarship you would expect to see mentioned in a work this long.
I was worried about the harshness aspect but to be frank there are only so many ways to say that someone in a position of power and influence has acted with negligence.
Perhaps these could also be useful things to do (though, given the aforementioned herd-downvoting, I doubt that (a) would receive sufficient good-faith engagement to be worth writing).
(b) could be useful for facilitating small-scale discussion, but I haven’t seen any indication that there are people who want to or are trying to do that, e.g. with a comment saying ‘On point #4...’
In any case, I have seen far longer comments than mine and comments with more questions and less elaboration than Gideon’s get dozens of upvotes before.
These criticisms (and I’m discussing both your response and Karthik’s here, as well as a more general pattern) appear to be brought up only when the EA big boys are being criticised: I doubt that, had Gideon asked five complimentary questions, he would have received anything close to such a negative reaction.
This does remind me of a lot of the response to Democratising Risk: Carla and Luke were told that the paper was at once too broad and too narrow, too harsh and yet not direct enough: anything to dismiss critique while being able to rationalise it as a mere technical application of discursive norms.
To be fair a ‘comprehensive’ response would include even more questions, so I’m not confident there’s any way to win here.
Yes I am also very worried about the orthodoxy point; EA is often a closed citation loop, where a small number of people and organisations cross-cite one another and ignore outside (‘non-value aligned’) work. Most reading lists are absolutely dominated by ~5 names, sometimes a few more.
Halstead, as a semi-big name at a prominent organisation (and, for better or worse, the movement’s de facto authority on climate change), is extremely likely to have his work accepted into the canon without significant challenge from climate experts (with training in climate science and policy, rather than philosophy...).
Thus, a fresh crop of undergraduates will be told that climate is no big deal compared to sexier and more EA-friendly stuff like AI, without ever being aware of all the climate-related GCR work Halstead doesn’t engage with (or even mention). I suspect, perhaps uncharitably, that this is because most of it disagrees with him. That disagreement, in turn, arises partly because such work has to be peer-reviewed by people selected on the basis of their expertise in climate risk, rather than EA value-alignment.
This lack of internal critique is probably because EA talks down climate so much (not least due to the influence of Halstead) that there simply aren’t very many climate-focused people around, and those that are around know the kind of response they get when they speak out of turn (see above haha).
I love so much of EA but for a community so focused on epistemics we really are bad at accepting criticism, especially when it’s directed at the big boys.
I can see where you’re coming from here but I don’t think the specifics really apply in this case.
There are many questions to raise about this Google Doc, and it seems fairer to the reader to ask them all in one place rather than drip-feeding them throughout a tree of replies and reply-replies. If responding to them all would take up too much of Halstead’s time, he can say so, no?
There’s not usually very much to elaborate on when it comes to questions of omission: x is an important aspect of climate risk, and Halstead has not mentioned x.
I suppose you could add the implicit points (studies of topics should include or at least mention the important aspects of those topics, space wasn’t a constraint, Halstead knows what the terms mean, etc.) but that’s unnecessary in 99% of conversations and not a standard we expect anywhere else.
I meant ‘I’ll leave the in-depth response to Gideon’. What you say speaks for itself: if Halstead presented this at a climate science org these would be some of the first questions asked and I’m puzzled (+ a bit weirded out, to be frank) as to why they’re getting such a hostile response.
(Case in point for my comment about downvoting, community hierarchy, and groupthink, below)
What would you call it?
I suppose all I have to say is that I see very reasonable critiques downvoted through the floor without explanation worryingly often.
I haven’t theorised very much about the cause, but the phenomenon correlates suspiciously well with substantive or strong criticism of prominent figures within EA. If this perception is accurate, it does not seem like good epistemic practice.
(This one has 14 points from 3 votes? Do three strong-upvotes produce 14 overall karma? Why?)
(I have a few thoughts on this but it’s being marked as spam for some reason, possibly length. I’m going to post this as a short response and then edit in the content. Please let me know if you can see it.)
Hi John, thanks for the post!
I’ll leave an in-depth response to Gideon, but I have a few points that I think would be helpful to share. In short, your response worries me. I have tried to keep the prose below inoffensive in tone, but there is a trade-off between offensive directness and condescending obfuscation. I hope I have traced the line accurately.
You may not think significant discussion of cascading risks would change the fundamental conclusions of your report, but many researchers, often those with considerably more experience and expertise in climate risk (e.g. the IPCC), do: strongly so. Surely in a book-length report there is room for a few pages?
If you have refuted arguments, is it not academic best practice to cite the papers you respond to? In any case, if you know of and have read the papers, are we to understand that you believe many (if not most) peer-reviewed papers on Global Catastrophic and Existential climate risk are not worth mentioning anywhere in 437 pages of discussion?
This response causes me the most concern. That is simply not what complex systems theory is. Either you are aware that this characterisation is highly inaccurate and unfair, or you are not. If the former, I am disappointed by your (apparent) dismissiveness and willingness to mischaracterise. If the latter, I wonder how you could have done anything close to sufficient research into one of the foundational components of many studies of climate risks.
It is true that there are many conceptual frameworks for climate risk, and in a study of any topic you are generally expected to state, explain, and justify your conceptual framework. This is especially true when the framework you use (i.e. that of the Techno-Utopian Approach) has been strongly critiqued, for instance in Democratising Risk (Cremer and Kemp, 2021), another highly consequential paper you do not appear to have engaged with or cited. The dichotomy of ‘direct’ and ‘indirect’ risks may be exhaustive, but exhaustiveness is not the only criterion for an adequate theoretical framework. To be somewhat glib, though logically coherent: we could make the same argument for categorising phenomena according to whether their names contain an odd or even number of letters.
I also disagree with this point, especially the final sentence, but there is little to engage with: simply assertions. Let us agree to disagree.
Beard et al. and Kemp et al. are each less than 5% of the length of your piece. Of course they cover less ground. There is a difference between a 10- or 20-page paper not mentioning every single caveat in every work it cites, and a report of this length (1) failing to substantively engage with, or even cite, almost all GCR-specific climate research, (2) neither explicitly stating nor justifying its methodology in the face of strong critique, and (3) disregarding, without explanation or justification, complex systems studies: a massive component of climate risk research, of wider GCR work (e.g. Fisher and Sandberg 2022), and of studies of Earth-system dynamics in general.
Do you expect to subject this work to peer-review, and if not, why?
In all cases perhaps, but it is strange to see objections that would be super obvious top-of-the-head stuff in climate circles dismissed out of hand here.
(Also can someone who knows more about the Forum than me explain how this reply has 51 points from 13 votes? Even if strong-upvotes count as double this is extremely inflated. Are the totals extremified or something? Is it multiplicative?)
I think this strongly contributes to groupthink.
People will subconsciously adapt their views to match the majority to some extent, and assume that a post or comment has the rating it does for a reason. This is exacerbated by the [issues around hierarchy and hero-worship EA sometimes has.](https://forum.effectivealtruism.org/posts/DxfpGi9hwvwLCf5iQ/objections-to-value-alignment-between-effective-altruists)
(Edit: it seems my fears were right, lol)
Thanks for posting this Gideon, I had similar issues to you but didn’t reply because I feared it would be dismissed or ignored. It is gratifying to see that John has replied, but epistemically concerning that your entirely reasonable criticisms are being so heavily downvoted: at present you average 1 point from 13 votes.
These are critiques you would expect anyone with a background in climate risk to make and I don’t see any good reason for them to have been dismissed by so many fellow EAs. Could any of the downvoters explain their decision?
Would it be correct to say that university groups were quite strongly and repeatedly encouraged to bulk-buy the book by central EA orgs (which MacAskill either works for, is a director of, or serves on the board of)?
I must say that I am also not convinced by the argument that the only or best way of preventing supply-chain issues is to have the book bulk-ordered by affiliated organisations, but this is a weaker-held perspective.
Wow! Do we have any data on how much of this is a consequence of EA organisations mass-ordering the book to, for instance, hand out for free?
Especially, how much is a consequence of the bulk-buying by organisations that MacAskill is affiliated with, or the orgs funded by them (I’m thinking student groups)?
I presume that you are assuming I am Zoe Cremer here. I am not Zoe (Carla? Which is her actual first name?) and I have never met her, but feel free to assume only one person has issues with EA norms if you want. That post has 200 upvotes: some people must have agreed with her, even if you didn’t.
Based on Cremer’s recent statements in and around the MacAskill profile in the New Yorker she seems to be completely worn out by EA and has largely lost interest: presumably not someone who would dedicate very much time to getting into EA Forum comment wars?
This isn’t just an issue with the karma system (though artificially magnifying the ratings of somewhat popular comments, so that 7 votes can produce a rating of over 25, is definitely an odd choice); it’s a cultural issue. Why did you ignore these aspects and focus on the most technical issue?