It’s also not the claim being made:
...minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by [them]...
I’m genuinely not sure why I’m being downvoted here. What did I say?
If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP’s point for them...
Honestly, fair enough.
I apologise, I don’t process it that way, I was simply using it as shorthand.
I don’t see a dichotomy between “ignoring the source of an argument and their potential biases” and downvoting a multi-paragraph comment on the grounds that it used less-than-charitable language about Silicon Valley billionaires.
Based on your final line I’m not sure we disagree?
Thank you both from the bottom of my heart for writing this. I share many (but not all) of your views, but I don’t express them publicly because if I do my career will be over.
What you call the Techno-Utopian Approach is, for all intents and purposes, hegemonic within this field.
Newcomers (who are typically undergraduates not yet in their twenties) have the TUA presented to them as fact, through reading lists that aim to be educational. In fact, they are extremely philosophically, scientifically, and politically biased; when I showed a non-EA friend of mine a couple of examples, the first word out of their mouth was “indoctrination”, and I struggle to substantively disagree.
These newcomers are then presented with access to billions of dollars in EA funding, on the unspoken (and for many EAs, I suspect honestly unknown) condition that they don’t ask too many awkward questions.
I do not know everything about, ahem, recent events in multiple existential risk organisations, but it does not seem healthy. All the information I have points toward widespread emotional blackmail and quasi-censorship, and an attitude toward “unaligned” work that approaches full-on corruption.
Existential risk is too important to depend on the whims of a small handful of incredibly wealthy techbros, and the people who make this cause their mission should not have to fear what will happen to their livelihoods or personal lives if they publicly disagree with the views of the powerful.
We can’t go on like this.
“Direct” vs “indirect” x-risk is a crude categorization, as most risks will cause hazards via a variety of pathways.
I think you switched the two by accident
Otherwise an excellent comment even if I disagree with most of it, have an updoot
We could also on occasion say “yes we get this wrong and we still have much to learn” and not treat every critique as an attack.
Strong upvote for this if nothing else.
(the rest is also brilliant though, thank you so much for speaking up!)
Again, I’m really not sure where these downvotes are coming from. I’m engaging with criticism and presenting what information I can present as clearly as possible.
It is morally tenable under some moral codes but not others. That’s the point.
I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.
I really don’t see the link between reducing air travel and the fact that COVID killed millions of people and necessitated lockdown measures.
I’m going to disengage now. Repeatedly mischaracterizing opposing views and deploying non-sequiturs for rhetorical reasons do not indicate to me that this will be a productive conversation.
Which alternatives to EV have what problems for what uses in what contexts?
Why do those problems make them worse than EV, a tool that requires the use of numerical probabilities for poorly-defined events often with no precedent or useful data?
What makes all alternatives to EV less preferable to the way in which EV is usually used in existential risk scholarship today, where subjectively-generated probabilities are asserted by “thought leaders” with no methodology and no justification, about events that are not rigorously defined nor separable, which are then fed into idealized economic models, policy documents, and press packs?
See my reply to Will above. It’s a fair point that it’s not very helpful to spectators (besides indicating that the claim referred to should perhaps not be taken at face value) but my intention was to reply to Halstead rather than the audience.
In my view, it would be condescending if I were referring to most people, but not in this case. My point is that someone who has written about climate issues more than once in the past, and who is considered something of an authority on climate issues within EA, can be expected to have basic background knowledge on climate topics.
If we are going to have a hierarchical culture led by “thought leaders”, I think we should at least hold them to a certain standard.
It seems that you fundamentally misunderstand degrowth. For an introduction I suggest this:
https://www.annualreviews.org/doi/abs/10.1146/annurev-environ-102017-025941
I’m not sure that there are any attempted-objective assessments of degrowth (at least, none that I’ve found), and the article I linked provides an overview of the topic as understood by most of its key proponents. If I wanted to introduce people to EA, would it be inappropriate to offer them a copy of Doing Good Better?
I didn’t make specific arguments because frankly I shouldn’t need to. Someone who has written about climate change should not be making unequivocally untrue statements about basic aspects of a core strand of environmental economics. My assumption was that, given Halstead’s experience, his mischaracterizations could not have been due to a lack of knowledge.
This will probably be dogpiled due to “tone” but to be honest I have rewritten this comment twice to move away from clear statements of my views towards more EA-friendly language to make it as charitable as possible. There just aren’t many nice ways of saying that, well...
you see the problem?
...um
But it is basic background knowledge, and that point needs to be made clear to those less familiar with the topic! This isn’t an issue of understanding and disagreeing, as demonstrated by his non-sequitur about COVID if nothing else.
If, for instance, someone who has written about AI more than once argues that the Chinese government funds AI research for solely humanitarian reasons, you have two choices: they are being honest but ignorant (which is unlikely, embarrassing for them, and worrying for any community that treats them as an authority) or they are being dishonest (which is bad for everyone). There is no “charitable” position here.
I understand and agree with the discourse norms here, but if someone is demonstrably, repeatedly, unequivocally acting in bad faith then others must be able to call that out.
My apologies, specific evidence was not presented with respect to...
...the quasi-censorship/emotional blackmail point because I think it’s up to the people involved to provide as much detail as they are personally comfortable with. All I can morally do is signal to those out of the loop that there are serious problems and hope that somebody with the right to name names does so. I can see why this may seem conspiratorial without further context. All I can suggest is that you keep an ear to the ground. I’m anonymous for a reason.
...the funding issue because either it fits the first category of “areas where I don’t have a right to name names” (cf. “...any critique of central figures in EA would result in an inability to secure funding from EA sources...” above) or because the relevant information would probably be enough to identify me and thus destroy my career.
...the reading list issue because I thought the point was self-evident. If you would like some examples, see a very brief selection below, but this criticism applies to all relevant reading lists I have seen, and it is an area where I’m afraid we have prior form (see https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/#Systematically_problematic_syllabi_reading_lists_citations_writings_etc). I am not accusing those involved of being “indoctrinators” or of having bad intentions; I am merely observing that they ignore much of academic existential risk work in favour of a restricted range of texts by a few EA “thought leaders” and EA Forum posts, which, to newcomers, presents an idiosyncratic and ideological view of the field as the only view.
https://forum.effectivealtruism.org/posts/u58HNBMBdKPbvpKqH/ea-reading-list-longtermism-and-existential-risks
http://www.global-catastrophic-risks.com/reading.html
https://forum.effectivealtruism.org/posts/wmAQavcKjWc393NXP/example-syllabus-existential-risks