I’m genuinely not sure why I’m being downvoted here. What did I say?
anonymousEA
My apologies, specific evidence was not presented with respect to...
...the quasi-censorship/emotional blackmail point because I think it’s up to the people involved to provide as much detail as they are personally comfortable with. All I can morally do is signal to those out of the loop that there are serious problems and hope that somebody with the right to name names does so. I can see why this may seem conspiratorial without further context. All I can suggest is that you keep an ear to the ground. I’m anonymous for a reason.
...the funding issue because either it fits the first category of “areas where I don’t have a right to name names” (cf. “...any critique of central figures in EA would result in an inability to secure funding from EA sources...” above) or because the relevant information would probably be enough to identify me and thus destroy my career.
...the reading list issue because I thought the point was self-evident. If you would like some examples, see a very brief selection below, but this criticism applies to all relevant reading lists I have seen and is an area where I’m afraid we have prior form—see https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/#Systematically_problematic_syllabi_reading_lists_citations_writings_etc . I am not accusing those involved of being “indoctrinators” or of having bad intentions, I am merely observing that they ignore much of academic existential risk work in favour of a restricted range of texts by a few EA “thought leaders” and EA Forum posts, which, to newcomers, presents an idiosyncratic and ideological view of the field as the only view.
If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP’s point for them...
Again, I’m really not sure where these downvotes are coming from. I’m engaging with criticism and presenting what information I can present as clearly as possible.
I apologise, I don’t process it that way, I was simply using it as shorthand.
I don’t want to dip into discussions that don’t directly concern the issues I created this account to discuss, but your characterisation of degrowth as having “enormous humanitarian costs” “built in” is flatly untrue in a way that is obvious to anyone who has read any degrowth literature, e.g. Kallis or Hickel.
This is not the only time you have mischaracterised democratic and ecological positions on this post, please stop.
That sounds to me like a thing only cartoon villains would say.
...oh dear
This community is entering a rough patch, I feel.
...um
It seems that you fundamentally misunderstand degrowth. For an introduction I suggest this:
https://www.annualreviews.org/doi/abs/10.1146/annurev-environ-102017-025941
I’m sorry but can you please explain how “techbro” is a slur?
I don’t see a dichotomy between “ignoring the source of an argument and their potential biases” and downvoting a multi-paragraph comment on the grounds that it used less-than-charitable language about Silicon Valley billionaires.
Based on your final line I’m not sure we disagree?
I’m not sure that there are any attempted-objective assessments of degrowth (at least, not that I’ve found) and the post I linked provides an overview of the topic as understood by most of its key proponents. If I wanted to introduce people to EA, would it be inappropriate to offer them a copy of Doing Good Better?
I didn’t make specific arguments because frankly I shouldn’t need to. Someone who has written about climate change should not be making unequivocally untrue statements about basic aspects of a core strand of environmental economics. My assumption was that, given Halstead’s experience, his mischaracterisations could not have been due to a lack of knowledge.
This will probably be dogpiled due to “tone” but to be honest I have rewritten this comment twice to move away from clear statements of my views towards more EA-friendly language to make it as charitable as possible. There just aren’t many nice ways of saying that, well...
you see the problem?
See my reply to Will above. It’s a fair point that it’s not very helpful to spectators (besides indicating that the claim referred to should perhaps not be taken at face value) but my intention was to reply to Halstead rather than the audience.
In my view, it would be condescending if I was referring to most people, but not in this case. My point is that someone who has written about climate issues more than once in the past and who is considered something of an authority on climate issues within EA can be expected to have basic background knowledge on climate topics.
If we are going to have a hierarchical culture led by “thought leaders”, I think we should at least hold them to a certain standard.
I really don’t see the link between reducing air travel and the fact that COVID killed millions of people and necessitated lockdown measures.
I’m going to disengage now. Repeatedly mischaracterising opposing views and deploying non-sequiturs for rhetorical reasons do not indicate to me that this will be a productive conversation.
Which alternatives to EV have what problems for what uses in what contexts?
Why do those problems make them worse than EV, a tool that requires the use of numerical probabilities for poorly-defined events often with no precedent or useful data?
What makes all alternatives to EV less preferable than the way in which EV is usually used in existential risk scholarship today, where subjectively-generated probabilities are asserted by “thought leaders” with no methodology and no justification, about events that are neither rigorously defined nor separable, which are then fed into idealised economic models, policy documents, and press packs?
The argument is too vague to counter: how do you disprove claims about unspecified problems with unspecified tools in unspecified contexts?
There is no snark in this comment, I am simply stating my views as clearly and unambiguously as possible.
I’d like to add that as someone whose social circle includes both EAs and non-EAs, I have never witnessed reactions as defensive and fragile as those made by some EAs in response to criticism of orthodox EA views. This kind of behaviour simply isn’t normal.
We could also on occasion say “yes we get this wrong and we still have much to learn” and not treat every critique as an attack.
Strong upvote for this if nothing else.
(the rest is also brilliant though, thank you so much for speaking up!)
I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.
“Direct” vs “indirect” x-risk is a crude categorisation, as most risks will cause hazards via a variety of pathways.
I think you switched the two by accident.
Otherwise an excellent comment even if I disagree with most of it, have an updoot
Thank you both from the bottom of my heart for writing this. I share many (but not all) of your views, but I don’t express them publicly because if I do my career will be over.
What you call the Techno-Utopian Approach is, for all intents and purposes, hegemonic within this field.
Newcomers (who are typically undergraduates not yet in their twenties) have the TUA presented to them as fact, through reading lists that aim to be educational. In fact, they are extremely philosophically, scientifically, and politically biased; when I showed a non-EA friend of mine a couple of examples, the first word out of their mouth was “indoctrination”, and I struggle to substantively disagree.
These newcomers are then presented with access to billions of dollars in EA funding, on the unspoken (and for many EAs, I suspect honestly unknown) condition that they don’t ask too many awkward questions.
I do not know everything about, ahem, recent events in multiple existential risk organisations, but it does not seem healthy. All the information I have points toward widespread emotional blackmail and quasi-censorship, and an attitude toward “unaligned” work that approaches full-on corruption.
Existential risk is too important to depend on the whims of a small handful of incredibly wealthy techbros, and the people who make this cause their mission should not have to fear what will happen to their livelihoods or personal lives if they publicly disagree with the views of the powerful.
We can’t go on like this.