Thank you both from the bottom of my heart for writing this. I share many (but not all) of your views, but I don’t express them publicly because if I do my career will be over.
What you call the Techno-Utopian Approach is, for all intents and purposes, hegemonic within this field.
Newcomers (who are typically undergraduates not yet in their twenties) have the TUA presented to them as fact, through reading lists that aim to be educational. In fact, those lists are extremely philosophically, scientifically, and politically biased; when I showed a non-EA friend of mine a couple of examples, the first word out of their mouth was “indoctrination”, and I struggle to substantively disagree.
These newcomers are then presented with access to billions of dollars in EA funding, on the unspoken (and for many EAs, I suspect honestly unknown) condition that they don’t ask too many awkward questions.
I do not know everything about, ahem, recent events in multiple existential risk organisations, but what I do know does not seem healthy. All the information I have points toward widespread emotional blackmail and quasi-censorship, and an attitude toward “unaligned” work that approaches full-on corruption.
Existential risk is too important to depend on the whims of a small handful of incredibly wealthy techbros, and the people who make this cause their mission should not have to fear what will happen to their livelihoods or personal lives if they publicly disagree with the views of the powerful.
We can’t go on like this.
I’m genuinely not sure why I’m being downvoted here. What did I say?
I think it’s because you’re making strong claims without presenting any supporting evidence. I don’t know what reading lists you’re referring to; I have doubts about not asking questions being an ‘unspoken condition’ of getting access to funding; and I have no idea what you’re conspiratorially alluding to regarding ‘quasi-censorship’ and ‘emotional blackmail’.
I also feel that the comment doesn’t engage much with the perspective it criticizes (in terms of trying to see things from that point of view). (I didn’t downvote the OP myself.)
When you criticize a group/movement for giving money to those who seem aligned with their mission, it seems relevant to acknowledge that it wouldn’t make sense to ignore this sort of alignment entirely. There’s an inevitable, tricky tradeoff between movement/aim dilution and too much insularity. It would be fair to claim that EA longtermism sits too far toward the insular end of that spectrum. But it seems unfair to play up the bad connotations of actions that contribute to insularity, implying that there’s something sinister about having selection criteria at all, without acknowledging that taking at least some such actions is part of the only sensible strategy.
I feel similarly about the remark about “techbros.” If you’re able to work with rich people, wouldn’t it be wasteful not to do it? It would be fair if you wanted to claim that the rich people in EA use their influence in ways that… but what is even the claim here? That their idiosyncrasies end up having an outsized effect? That’s probably going to happen in every situation where a rich person is passionate about (and hands-on involved in) a cause, and it doesn’t mean that the movement around that cause therefore becomes morally problematic. Alternatively, if your claim is that rich people in EA engage in practices that are bad, that could be a fair thing to point out, but I’d want to learn about the specifics of the claim and why you think it’s the case.
I’m also not a fan of most EA reading lists, but EA longtermism addresses topics that until recently haven’t gotten much coverage, so the direct critiques are usually by people who know very little about longtermism. And “indirect critiques” don’t exist as a crisp category.

If you wanted to write a reading list section to balance out the epistemic insularity effects in EA, you’d have to do a lot of difficult work: unearthing what those biases are and then seeking out the exact alternative points of view that usefully counterbalance them. It’s not as easy as adding a bunch of texts by other political movements – that would be too random. Texts written by proponents of other intellectual movements contain important insights, but they’re usually not directly applicable to EA. Someone has to do the difficult work first of figuring out where exactly EA longtermism benefits from insights from other fields. This isn’t an impossible task, but it’s not easy, as any field’s intellectual maturation takes time (it’s an iterative process). Reading lists don’t start out perfectly balanced.

To summarize, it seems relevant to mention (again) that there are inherent challenges to writing balanced reading lists for young fields. The downvoted comment skips over that and dishes out a blanket criticism that one could probably level against any reading list of a young field.
If that will happen whenever a rich person is passionate about a cause, then opting to work with rich people can cause more harm than good. Opting out certainly doesn’t have to be “wasteful”.
My initial thinking was that “idiosyncrasies” can sometimes be neutral or even incidentally good.
But I think you’re right that this isn’t the norm, and that things can quickly get worse when someone has a lot of influence only because they have money, rather than because their peers value them for being unusually thoughtful.
(FWIW, I think the richest individuals within EA often defer to the judgment of EA researchers, as opposed to setting priorities directly themselves?)
I’m not saying I know anything to the contrary, but I’d like to point out that we have no way of knowing. This is a major disadvantage of philanthropy: whereas governments are required to be transparent about how they allocate funds, individual donors are given privacy and undisclosed control over who receives their donations and what the recipient organisations are allowed to use them for.
My apologies; specific evidence was not presented with respect to...
...the quasi-censorship/emotional blackmail point because I think it’s up to the people involved to provide as much detail as they are personally comfortable with. All I can morally do is signal to those out of the loop that there are serious problems and hope that somebody with the right to name names does so. I can see why this may seem conspiratorial without further context. All I can suggest is that you keep an ear to the ground. I’m anonymous for a reason.
...the funding issue because either it fits the first category of “areas where I don’t have a right to name names” (cf. ”...any critique of central figures in EA would result in an inability to secure funding from EA sources...” above) or because the relevant information would probably be enough to identify me and thus destroy my career.
...the reading list issue because I thought the point was self-evident. If you would like some examples, see the very brief selection below, but this criticism applies to all relevant reading lists I have seen, and it is an area where I’m afraid we have prior form; see https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/#Systematically_problematic_syllabi_reading_lists_citations_writings_etc . I am not accusing those involved of being “indoctrinators” or of having bad intentions; I am merely observing that they ignore much of academic existential risk work in favour of a restricted range of texts by a few EA “thought leaders” and EA Forum posts, which, to newcomers, presents an idiosyncratic and ideological view of the field as the only view.
https://forum.effectivealtruism.org/posts/u58HNBMBdKPbvpKqH/ea-reading-list-longtermism-and-existential-risks
http://www.global-catastrophic-risks.com/reading.html
https://forum.effectivealtruism.org/posts/wmAQavcKjWc393NXP/example-syllabus-existential-risks
Again, I’m really not sure where these downvotes are coming from. I’m engaging with criticism and presenting what information I can present as clearly as possible.
<Comment deleted>
I disagree with much of the original comment, but I’m baffled that you think this is appropriate content for the EA Forum. I strong-downvoted and reported this comment.
While this comment was deleted, the moderators discussed it in its original form (which included multiple serious insults to another user) and decided to issue a two-week ban to Charles, starting today. We don’t tolerate personal insults on the Forum.
Hi Charles. Please consider revising or retracting this comment; unlike your other comments in this thread, it’s unkind and not adding to the conversation.
Per your personal request, I have deleted my comment.
...um
Personally, I more or less agreed with you, and I don’t think you were as insensitive as people suggested. I work in machine learning, yet I feel that shining a light on the biases and outsized control of people in the tech industry is warranted and important.
the word “techbros” signals you have a kind of information diet and worldview that I think people have bad priors about
IMO we should seek out and listen to the most persuasive advocates for a lot of different worldviews. It doesn’t seem epistemically justified to penalize a worldview because it gets a lot of obtuse advocacy.
If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP’s point for them...
I think that the dismissive and insulting language is at best unhelpful—and signaling your affiliations by being insulting to people you see as the outgroup seems like a bad strategy for engaging in conversation.
I apologise, I don’t process it that way, I was simply using it as shorthand.
The “content” here is that you refer to the funders you dislike with slurs like “techbro”. It’s reasonable to update negatively in response to that evidence.
I’m sorry but can you please explain how “techbro” is a slur?
It’s straightforwardly a slur – to quote Google’s dictionary, it is “a derogatory or insulting term applied to particular group of people”.
It’s not a term anyone would ever use to neutrally describe a group of people, or a term anyone would use to describe themselves (I have yet to see anyone “reclaim” “techbro”). Its primary conversational value is as an insult.
I’m also surprised by how strongly people feel about this term! I’ve always thought techbro was a mildly insulting caricature of a certain type of Silicon Valley guy.
Even if it’s only a “mildly insulting caricature”, it’s still a way to claim that certain people are unintelligent or unserious without actually presenting an argument.
Compare:
“A small handful of incredibly wealthy techbros”
“A small handful of incredibly wealthy people with similar backgrounds in technology, which could lead to biases X and Y”
The first of these feels like it’s trying to do the same thing as the second, without actually backing up its claim.
When I read the second, I feel like someone is trying to make me think. When I read the first, I feel like someone is trying to make me stop thinking.
Priors should matter! For example, early rationalists were (rightfully) criticized for being too open to arguments from white nationalists, believing they should only look at the argument itself rather than the source. It isn’t good epistemics to ignore the source of an argument and their potential biases (though it isn’t good epistemics to dismiss them out of hand either based on that, of course).
I don’t see a dichotomy between “ignoring the source of an argument and their potential biases” and downvoting a multi-paragraph comment on the grounds that it used less-than-charitable language about Silicon Valley billionaires.
Based on your final line I’m not sure we disagree?
I think it’s plausible that it’s hard to notice this issue if your personal aesthetic preferences happen to be aligned with the TUA. I tried to write a little here questioning how important aesthetic preferences may be. I think it’s plausible that people can unite around negative goals even if positive goals would divide them, for instance, but I’m not convinced.