Wei Dai
Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:
Inspired/funded by EA
Taking big risks with other people’s lives/money
Large employee exodus due to safety/ethics/governance concerns
Lack of public details of concerns due in part to non-disparagement agreements
Questioning the Foundations of EA
Social justice in relation to effective altruism
I’ve been thinking a lot about this recently too. Unfortunately I didn’t see this AMA until now but hopefully it’s not too late to chime in. My biggest worry about SJ in relation to EA is that the political correctness / cancel culture / censorship that seems endemic in SJ (i.e., there are certain beliefs that you have to signal complete certainty in, or face accusations of various “isms” or “phobias”, or worse, get demoted/fired/deplatformed) will come to affect EA as well.
I can see at least two ways this could happen to EA:
Whatever social dynamic is responsible for this happening within SJ applies to EA as well, and EA will become like SJ in this regard for purely internal reasons. (In this case EA will probably come to have a different set of politically correct beliefs from SJ that one must profess faith in.)
SJ comes to control even more of the cultural/intellectual “high grounds” (journalism, academia, K-12 education, tech industry, departments within EA organizations, etc.) than it already does, and EA will be forced to play by SJ’s rules. (See second link above for one specific scenario that worries me.)
From your answers so far it seems like you’re not particularly worried about this. If you have good reasons to not worry about this, please share them so I can move on to other problems myself.
(I think SJ is already actively doing harm because it pursues actions/policies based on these politically correct beliefs, many of which are likely wrong but can’t be argued about. But I’m more worried about EA potentially doing this in the future because EAs tend to pursue more consequential actions/policies that will be much more disastrous (in terms of benefits foregone if nothing else) if they are wrong.)
Taking the question literally, searching the term ‘social justice’ in the EA Forum reveals only 12 mentions: six within blog posts and six comments. One full blog post supports it, three items even question its value, and the remainder are neutral or unclear on value.
That can’t be right. I think what may have happened is that when you do a search, the results page initially shows you only 6 each of posts and comments, and you have to click on “next” to see the rest. If I keep clicking next until I get to the last pages of posts and comments, I can count 86 blog posts and 158 comments that mention “social justice”, as of now.
BTW I find it interesting that you used the phrase “even question its value”, since “even” is “used to emphasize something surprising or extreme”. I would consider questioning the values of things to be pretty much the core of the EA philosophy...
Can you explain more about this part of ACE’s public statement about withdrawing from the conference:
We took the initiative to contact CARE’s organizers to discuss our concern, exchanging many thoughtful messages and making significant attempts to find a compromise.
If ACE was not trying to deplatform the speaker in question, what were these messages about and what kind of compromise were you trying to reach with CARE?
Do you have any thoughts on this earlier comment of mine? In short, are you worried about EA developing a full-scale cancel culture similar to other places where SJ values currently predominate, like academia or MSM / (formerly) liberal journalism? (By that I mean a culture where many important policy-relevant issues either cannot be discussed, or the discussions must follow the prevailing “party line” in order for the speakers to not face serious negative consequences like career termination.) If you are worried, are you aware of any efforts to prevent this from happening? Or at least discussions around this among EA leaders?
I realize that EA Munich and other EA organizations face difficult trade-offs and believe that they are making the best choices possible given their values and the information they have access to, but people in places like academia must have thought the same when they started what would later turn out to be their first steps towards cancel culture. Do you think EA can avoid sharing the same eventual fate?
AI doing philosophy = AI generating hands?
I happen to believe this is misguided, but first I want to point out the irony in believing that politicization makes a movement less effective and yet fearing the awesome power of the social justice warriors.
Something can be “less effective” and “powerful” at the same time, if the power is misapplied. I find it very surprising and dispiriting that this needs to be explicitly pointed out, in a place like this.
I also stand by my previous comments, which are now hidden on EA Forum, but can still be viewed at ea.greaterwrong.com/posts/68TmDK6MrjfJgvA7p/introducing-landslide-coalition-an-ea-themed-giving-circle.
(I’m occupied with some things so I’ll just address this point and maybe come back to others later.)
It seems like the balance of opinion is very firmly anti-CC.
That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:
I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of “EA politics”. If there’s a danger or trend of EA going in a CC direction, I should be among the last to know.
Until recently I have had very little interest in politics or even socializing. (I once wrote “And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.”) Again it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
I’m probably well within the top percentile of all EAs in terms of “cancel proofness”, because I have both an independent source of income and a non-zero amount of “intersectional currency” (e.g., I’m a POC and first-generation immigrant). I also have no official EA affiliations (which I deliberately maintained in part to be a more unbiased voice, but I had no idea that it would come in handy for this) and I don’t like to do talks/presentations, so there’s pretty much nothing about me that can be canceled.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to the development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (cf. “preference falsification”). That seems to already be the situation today.
Indeed, I also have direct evidence in the form of EAs contacting me privately (after seeing my earlier comments) to say that they’re worried about EA developing/joining CC, and telling me what they’ve seen to make them worried, and saying that they can’t talk publicly about it.
I urge those who are concerned about cancel culture to think more strategically. For instance, why has cancel culture taken over almost all intellectual and cultural institutions? What can EA do to fight it that those other institutions couldn’t do, or didn’t think of? Although I upvoted this post for trying to fight the good fight, I really doubt that what it suggests is going to be enough in the long run.
Although the post includes a section titled “The Nature of Cancel Culture”, it seems silent on the social/political dynamics driving cancel culture’s quick and widespread adoption. To make an analogy, it’s like trying to defend a group of people against an infectious disease that has already become a pandemic among the wider society, without understanding its mechanism of infection, and hoping to make do with just common sense hygiene.
In one particularly striking example, I came across this article about a former head of the ACLU. It talks about how the ACLU has been retreating from its free speech principles, and includes this sentence:
But the ACLU has also waded into partisan political issues, at precisely the same time as it was retreating on First Amendment issues.
Does it not seem like EA is going down the same path, and probably for similar reasons? If even the ACLU couldn’t resist the pull of contemporary leftist ideology and its attendant abandonment of free speech, why do you think EA could, absent some truly creative and strategic thinking?
(To be clear, I don’t have much confidence that sufficiently effective strategic ideas for defending EA against cancel culture actually exist or can be found by ordinary human minds in time to make a difference. But I see even less hope if no one tries.)
which could mean that invading Taiwan would give China a substantial advantage in any sort of AI-driven war.
My assessment is that the opposite is actually true. Invading (and even successfully conquering) Taiwan would cause China to fall behind in any potential AI race. The reason is that absent a war, China can hope to achieve parity with the West (by which I mean the countries allied with the US, including South Korea and Japan) on the hardware side by buying chips from Taiwan like everyone else, but if a war happened, the semiconductor foundries in Taiwan would likely be destroyed (to prevent them from falling to the Chinese government), and China lacks the technology to rebuild them without Western help. Even if the factories were not destroyed, critical supplies (such as specialty chemicals) would be cut off and the factories would become useless. Almost all of the machines and supplies that go into a semiconductor foundry are made outside Taiwan, in the West, and while China is trying to develop its own domestic semiconductor supply chains, it’s something like 10 years behind the state of the art in most areas, and not catching up, because the enormous amount of R&D going into the industry across the entire Western supply chain is not something China can match on its own.
So my conclusion is that if China invades Taiwan, it would lose access to the most advanced semiconductor processes, while the West can rebuild the lost Taiwan foundries without too much trouble. (My knowledge about all of this came from listening to a bunch of different podcasts, but IIRC, Jon Y (Asianometry) on Semiconductor Tech and U.S.-China Competition should cover most of it.)
I want to push back against this, from one of your slides:
If we’ve failed to notice important issues with classic arguments until recently, we should also worry about our ability to assess new arguments
I feel like the LW community did notice many important issues with the classic arguments. Personally, I was/am pessimistic about AI risk, but thought my reasons were not fully or mostly captured by those arguments, and I saw various issues/caveats with them that I talked about on LW. I’m going to just cite my own posts/comments because they’re the easiest to find, but I’m sure there were lots of criticisms from others too. 1 2 3 4
Of course I’m glad that you thought about and critiqued those arguments in a more systematic and prominent way, but it seems wrong to say or imply that nobody noticed their issues until now.
I upvoted Dale’s comment instead, because if one reason “this has been a difficult week for Asian Americans” is a wrong or overly-confident belief that the Atlanta shooting was mainly motivated by anti-Asian bias or hatred, then pointing that out can be better than merely expressing sympathy (which others have already done), and certainly doesn’t constitute an “unsympathetic” response. Of course doing this may not be a good idea in all circumstances and for all audiences, but if bringing up alternative hypotheses and evidence to support them can’t even be done on EAF without risking social disapproval, I think something has gone seriously wrong.
And suppose we did make introductory spaces “safe” for people who believe that certain types of speech are very harmful, but somehow managed to keep norms of open discussion in other more “advanced” spaces. How would those people feel when they find out that they can’t participate in the more advanced spaces without the risk of paying a high subjective cost (i.e., encountering speech that they find intolerable)? Won’t many of them think that the EA community has performed a bait-and-switch on them and potentially become hostile to EA? Have people who have proposed this type of solution actually thought things through?
I think it’s important to make EA as welcoming as possible to all people, but not by compromising in the direction of safetyism, as I don’t see any way that doesn’t end up causing more harm than good in the long run.
[Question] How should large donors coordinate with small donors?
one of the ones I find most concerning are the University of California diversity statements
I’m not sure I understand what you mean here. Do you think other universities are not requiring diversity statements from job applicants, or that the University of California is especially “concerning” in how it uses them? If it’s the latter, what do you think the University of California is doing that others aren’t? If the former, see this article from two years ago, which states:
Many more institutions are asking her to submit a statement with her application about how her work would advance diversity, equity, and inclusion.
The requests have appeared on advertisements for jobs at all kinds of colleges, from the largest research institutions to small teaching-focused campuses
(And it seems a safe bet that the trend has continued. See this search result for a quick sense of what universities currently have formal rubrics for evaluating diversity statements. I also checked a random open position (for a chemistry professor) at a university that didn’t show up in these results and found that it also requires a diversity statement: “Applicants should state in their cover letter how their teaching, research, service and/or life experiences have prepared them to advance Dartmouth’s commitment to diversity, equity and inclusion.”)
Another reason I think academia has been taken over by cancel culture is that I’ve read many news stories, blog posts, and the like about cancel culture in academia, and often scan their comment sections for contrary opinions, and have yet to see anyone chime in to say that they’re an academic and cancel culture doesn’t exist at their institution (which I’d expect to see if it weren’t actually widespread), aside from some saying that it doesn’t exist as a way of defending it (i.e., that what’s happening is just people facing reasonable consequences for their speech acts and doesn’t count as cancel culture). I also tried to Google “cancel culture isn’t widespread in academia” in case someone wrote an article arguing that, but all the top relevant results are articles arguing that cancel culture is widespread in academia.
Curious if you have any evidence to the contrary, or just thought that I was making too strong a claim without backing it up myself.
Made a transcript with Microsoft Word.
I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.
I find it interesting that you thought “diversity” is a good shorthand for “social justice”, whereas other EAs naturally interpreted it as “intellectual diversity” or at least thought there’s significant ambiguity in that direction. Seems to say a lot about the current moment in EA...
Getting the right balance seems difficult.
Well, maybe not, if some of the apparent options aren’t real options. For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope. (Or to quickly run back to level ground while you still have some chance, as I’m starting to suspect that EA has taken quite a few steps down the slope already.)
It may be that in the end EA can’t fight (i.e., can’t win against) SJ-like dynamics, and therefore EA joining cancel culture is more “effective” than it getting canceled as a whole. If EA leaders have made an informed and well-considered decision about this, then fine, tell me and I’ll defer to them. (If that’s the case, I can appreciate that it would be politically impossible to publicly lay out all of their reasoning.) It scares me though that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about “getting the right balance” without even mentioning the obvious possibility of a slippery slope.
Founder effects and strong communal norms towards open discussion in the EA community to which I think most newcomers get pretty heavily inculcated.
This does not reassure me very much, because academia used to have strong openness norms but is quickly losing them or has already lost them almost everywhere, and it seems easy for founders to lose their influence (i.e., be pushed out or aside) these days, especially if they do not belong to one of the SJ-recognized marginalized/oppressed groups (and I think founders of EA mostly do not?).
Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people to be unfairly “canceled” are quite small from an EA perspective.
One could say that seeking knowledge and maximizing profits are somewhat incongruous with these things, but that hasn’t stopped academia and corporations from adopting harmful SJ practices.
Heavy influence of and connection to philosophy selects for openness norms as well.
Again it doesn’t seem like openness norms offer enough protection against whatever social dynamic is operating.
Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.
Surely people in academia and business also had the motivation to avoid the most harmful practices, but perhaps didn’t have the ability? Why do you think that EA has the ability? I don’t see any evidence, at least from the perspective of someone not privy to private or internal discussions, that any EA person has a good understanding of the social dynamics driving adoption of the harmful practices, or (aside from you and a few others I know who don’t seem to be close to the centers of EA) is even thinking about this topic at all.
I’m a POC, and I’ve been recruited by multiple AI-focused longtermist organizations (in both leadership and research capacities) but did not join for personal reasons. I’ve participated in online longtermist discussions since the 1990s, and AFAICT participants in those discussions have always skewed white. Specifically I don’t know anyone else of Asian descent (like myself) who was a frequent participant in longtermist discussions even as of 10 years ago. This has not been a problem or issue for me personally – I guess different groups participate at different rates because they tend to have different philosophies and interests, and I’ve never faced any racism or discrimination in longtermist spaces or had my ideas taken less seriously for not being white. I’m actually more worried about organizations setting hiring goals for themselves that assume that everyone does have the same philosophies and interests, potentially leading to pathological policies down the line.