"Will basically threatened Tara,"
I would VERY much like to get more information on this (though I understand if Naia feels she can't say more). This sounds really, really bad, but also like a lot turns on exactly how far "basically threatened" is from "threatened" without qualifier.
David Mathers
As a person with an autism (at the time "Asperger's") diagnosis from childhood, I think this is very tricky territory. I agree that autistics are almost certainly more likely to make innocent-but-harmful mistakes in this context. But I'm a bit worried about overcorrection for that, for a few reasons:
Firstly, men in general (and presumably women to some degree also), autistic or otherwise, are already incredibly good at self-deception about the actions they take to get sex (source: basic common sense). So giving a particular subset of us more of an excuse to think "I didn't realize I would upset her", when the actual facts are more "I did know there was a significant risk, but I couldn't resist because I really wanted to have sex with her", seems a bit fraught. I think this is different from the sort of predatory, unrepentant narcissism that Jonas Vollmer says we shouldn't ascribe to Owen: it's a kind of self-deception perfectly compatible with genuine guilt at your own bad behavior, and certainly with being a kind and nice person overall. I actually think the feminism-associated* meme that sexual bad behavior is always really about misogyny or dominance can sometimes obscure this for people a bit.
Secondly, I worry that people who are both autistic (or at least autistic-coded) and predatory can take advantage of a perception that their bad behavior is always a mistake and not deliberate. I strongly suspect SBF, though he is not a diagnosed autistic, deliberately exploited a perception that "nerds" are not socially savvy enough to engage in deliberate deception.
Thirdly, I'm worried about being patronized.
Fourthly, I'm worried that if the association between "autistic" and (even accidental) "sexual misconduct risk" becomes too strong in people's heads, this will actually lead to overcorrection in the other direction, with people becoming too reluctant to hire autistics. (Probably not an issue in EA to the degree it would be in less autistic communities, though.) We don't actually know how much more likely autistics are to behave badly, or in which particular ways.
Alas, 4 and 1 kind of point in opposite directions.
*My guess is that feminists who've actually written carefully and at length about sexual bad behaviour have more nuanced views than this, and often when they cite "misogyny" as an explanation, they mean something structural, not something in the psychology of people who behave badly.
I appreciate the spirit of this post, as I am not a Yudkowsky fan, think he is crazily overconfident about AI, am not very keen on rationalism in general, and think the EA community sometimes gets overconfident in the views of its "star" members. But some of the philosophy stuff here seems not quite right to me, though none of it is egregiously wrong, and on each topic I agree that Yudkowsky is way, way overconfident. (Many professional philosophers are way overconfident too!)
As a philosophy of consciousness PhD: the view that animals lack consciousness is definitely an extreme minority view in the field, but it's not a view that no serious experts hold. Daniel Dennett has denied animal consciousness for roughly Yudkowsky-like reasons, I think. (EDIT: Actually maybe not: see my discussion with Michael St. Jules below. Dennett is hard to interpret on this, and also seems to have changed his mind to fairly definitively accept animal consciousness more recently. But his earlier stuff on this was at the very least opposed to confident assertions that we just know animals are conscious and that any theory that says otherwise is crazy.) And more definitely, Peter Carruthers (https://scholar.google.com/citations?user=2JF8VWYAAAAJ&hl=en&oi=ao) used to defend the view that animals lack consciousness because they lack a capacity for higher-order thought. (He changed his mind in the last few years, but I personally didn't find his explanation as to why made much sense.) Likewise, it's far from obvious that higher-order thought views imply that any animals other than humans are conscious, and still less obvious that they imply all mammals are conscious.* Indeed, a standard objection to HOT views, mentioned in the Stanford Encyclopedia of Philosophy page on them last time I checked, is that they are incompatible with animal consciousness. Though that does of course illustrate that you are right that most experts take it as obvious that mammals are conscious.
As for the zombies stuff: you are right that Yudkowsky is mistaken, and mistaken for the reasons you give, but it's not a "no undergraduate would make this" error. Trust me: I have marked undergrads a little, though I've never been a prof, and far worse confusion is common. It's not even a case of "if an undergrad made this error in 2nd year, I'd assume they didn't have what it takes to become a prof". Philosophy is really hard and the error is quite subtle; plus, many philosophers of mind do think you can get from the possibility of zombies to epiphenomenalism given plausible further assumptions, so when Yudkowsky read into the topic he probably encountered lots of people assuming that accepting the possibility of zombies commits you to epiphenomenalism. But yes, the general lesson of "Dave Chalmers, not an idiot" is obviously correct.
As for functional decision theory: I read Wolfgang Schwarz's critique when it came out, and for me the major news in it was that a philosopher as qualified as Wolfgang thought it was potentially publishable given revisions. It is incredibly hard to publish in good philosophy journals; at the very top end they have rejection rates of over 95%, and I have literally never heard of a non-academic doing so without even an academic coauthor. I'd classify it as a genuinely exceptional achievement to write something Wolfgang gave a revise-and-resubmit verdict to with no formal training in philosophy. I say this not because I think it means anyone should defer to Yudkowsky and Soares (again, I think their confidence on AI doom is genuinely crazy), but because it feels a bit unfair to me to see what was actually an impressive achievement denigrated.
*My own view is that IF animals are not capable of higher-order thought, there isn't even a fact of the matter about whether they are conscious, but that this only justifies downweighting their interests to a less than overwhelming degree, and so doesn't really damage arguments for veganism. Though it would affect how much you should prioritise animals vs. humans.
Without in any sense wanting to take away from the personal responsibility of the people who actually did the unethical, and probably illegal, trading, I think there might be a couple of general lessons here:
1) An attitude of "I take huge financial risks because I'm trading for others, not myself, and money has approximately zero diminishing marginal utility for altruism, plus I'm so ethical I don't mind losing my shirt" might sound like a clever idea. But crucially, it is MUCH easier psychologically to think you'll just eat the loss, and the attendant humiliation and loss of status, before you are actually facing losing vast sums of money for real. Assuming (as seems likely to me) that SBF started out with genuine good intentions, my guess is this was hard to anticipate because a self-conception as "genuinely altruistic" blocked him from the idea that he might do wrong. The same thing probably stopped others who heard about SBF taking on huge risks, which of course he was open* about, from realizing this danger.
2) On reflection, the following is a failure mode for us as a movement combining a lot of utilitarians (and more generally, people who understand that it is *sometimes, in principle* okay to do morally dodgy things when the stakes are really, really high, e.g. Schindler making arms for the Nazis) with an encouragement to earn to give: most people take to heart the standard advice of "don't do conventionally immoral things in order to maximize; it will almost always go wrong by utilitarian standards themselves, plus there is moral uncertainty, etc." But the people who actually make major money are the least risk-averse, because of the trade-off between risk and return in finance. Those people are probably disproportionately likely to ignore the cautious warnings about doing evil for good effects, because there is very likely a connection between this and being less risk-averse. (I am not saying this is what happened here: the motivating factor for SBF in appropriating the customer funds might well have really mostly been simple fear of being publicly embarrassed by his losses, and have had nothing to do with "I have an obligation to make the money back to help save the world". There have been plenty of cases of traders doing this sort of thing before who had never heard of utilitarianism. But I think the current disaster has nonetheless brought this risk to light.)
*(I'm talking about the apparently legit trading that got him into financial trouble, not the unethical speculation with customer funds that came after.)
Please, people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims he has moderated his views, he is still very racist as far as I can tell.
Hanania called for trying to get rid of all non-white immigrants in the US and the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even post his reform was writing about the need for the state to harass and imprison Black people specifically ("a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people" https://en.wikipedia.org/wiki/Richard_Hanania). Yet in the face of this, and after he made an incredibly grudging apology about his most extreme stuff (after journalists dug it up), he's been invited to Manifold's events and put on Richard Yetter Chappell's blogroll.
DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote "decoupling" factual beliefs from policy recommendations, which can be useful, do not duck and dive around the fact that virtually every major promoter of scientific racism ever, including allegedly mainstream figures like Jensen, worked with or published with actual literal Nazis (https://www.splcenter.org/fighting-hate/extremist-files/individual/arthur-jensen).
I love most of the people I have met through EA, and I know that, despite what some people say on Twitter, we are not actually a secret crypto-fascist movement (nor is longtermism specifically, which, whether you like it or not, is mostly about what its EA proponents say it is about). But there is in my view a disturbing degree of tolerance for this stuff in the community, mostly centered around the Bay specifically. And to be clear, I am complaining about tolerance for people with far-right and fascist ("reactionary" or whatever) political views, not people with any particular personal opinion on the genetics of intelligence. A desire for authoritarian government enforcing the "natural" racial hierarchy does not become okay just because you met the person with the desire at a house party and they seemed kind of normal and chill, or super-smart and nerdy.
I usually take a way more measured tone on the forum than this, but here I think real information is given by getting shouty.
*Anyone who thinks it is automatically far-right to think about any kind of genetic enhancement at all should go read some Culture novels, and note the implied politics (or indeed, look up the author's actual die-hard libertarian socialist views). I am not claiming that far-left politics is innocent, just that it is not racist.
I think that the evidence you cite for "careening towards Venezuela" being a significant risk comes nowhere near to showing that, and that as someone with a lot of sway in the community you're being epistemically irresponsible in suggesting otherwise.
Of the links you cite as evidence:
The first is about the rate of advance slowing, which is not a collapse or regression scenario. At most it could contribute to such a scenario if we had reason to think one was otherwise likely.
The second is describing an already-existing phenomenon of cost disease which, while concerning, has been compatible with high rates of growth and progress over the past 200 years.
The third is just a blog post about how some definitions of "democratic" are theoretically totalitarian in principle, and contains no argument (even a bad one) that totalitarianism risk is high, or rising, or will become high.
The fourth is mostly just a piece that takes for granted that some powerful American liberals and some fraction of American liberals like to shut down dissenting opinion, and then discusses inconclusively how much this will continue and what can be done about it. But this seems obviously insufficient to cause the collapse of society, given that, as you admit, periods of liberalism where you could mostly say what you like without being cancelled have been the exception not the rule over the past 200 years, and yet growth and progress have occurred. Not to mention that they have also occurred in places, like the Soviet Union or China from the early 1980s onward, that have been pretty intolerant of ideological dissent.
The fifth is a highly abstract and inconclusive discussion of the possibility that having a bunch of governments that grow/shrink in power as their policies are successful/unsuccessful might produce better policies than an (assumed) status quo where this doesn't happen*, combined with a discussion of the connection of this idea to an obscure far-right Bay Area movement of at most a few thousand people. It doesn't actually argue for the idea that dangerous popular ideas will eventually cause civilizational regression at all; it's mostly about what would follow if popular ideas tended to be bad in some general sense, and you could get better ideas by having a "free market for governments" where only successful governments survived.
The last link, on dysgenics and fertility collapse, largely consists of you arguing that these are not as threatening as some people believe(!). In particular, you argue that world population will still be slightly growing by 2100 and it's just really hard to project current trends beyond then. And you argue that dysgenic trends are real but will only cause a very small reduction in average IQ, even absent a further Flynn effect (and "absent a further Flynn effect" strikes me as unlikely if we are talking about world IQ, and not US). Nowhere does it argue these things will be bad enough to send progress into reverse.
This is an incredibly slender basis to be worrying about the idea that the general trend towards growth and progress of the last 200 years will reverse absent one particular transformative technology.
*It plausibly does happen to some degree. The US won the Cold War partly because it had better economic policies than the Soviet Union.
Also, I feel mean for pressing the point against someone who is clearly finding this stressful and is no more responsible for it than anyone else in the know, but I really want someone to properly explain what the warning signs the leadership saw were, who saw them, and what was said internally in response to them. I don't even know how much that will help with anything, to be honest, so much as I just want to know. But at least in theory, anyone who behaved really badly should be removed from positions of power. (And I do mean just that: positions where they run big orgs. I'm not saying they should be shunned or that they can't be allowed to contribute to the community intellectually any more.) If Rebecca won't do this, someone else should. But also, depending on how bad the behavior of leaders actually was, by NOT saying more, people with inside knowledge are probably either a) helping people escape responsibility for really bad behavior, or b) making what were reasonably sympathetic mistakes that many people might have made in the same position sound much worse than they were through vagueness, leading to unfair reputational damage. (EDIT: I should say that, sadly, I think a) is much the more likely possibility.) Not to mention that right now it is not clear which leaders are the responsible ones, which is unfair on anyone who actually didn't do anything wrong. That could include not just people with no knowledge of the warning signs, but people who knew about them, complained internally, were ignored, and then didn't take things public for defensible reasons.
In my view, Phil Torres's stuff, whilst not entirely fair, and quite nasty rhetorically, is far from the worst this could get. He actually is familiar with what some people within EA think in detail, reports that information fairly accurately, even if he misleads by omission somewhat*, and makes criticisms of controversial philosophical assumptions of some leading EAs that have some genuine bite, and might be endorsed by many moral philosophers. His stuff actually falls into the dangerous sweet spot where legitimate ideas, like "is adding happy people actually good anyway?", get associated with less fair criticism, like "Nick Beckstead did white supremacy when he briefly talked about different flow-through effects of saving lives in different places", potentially biasing us against the legit stuff in a dangerous way.
But there could, again in my view, easily be a wave of criticism coming from people who share Torres's political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven't really taken the time to understand EA/longtermist/AI safety ideas in the first place. I've already seen one decently well-known anti-"tech" figure on Twitter retweet a tweet that in its entirety consisted of "long-termism is eugenics!". People should prepare emotionally (I have already mildly lost my temper on Twitter in a way I shouldn't have, but at least I'm not anyone important!) for keeping their cool in the face of criticism that is:
-Poorly argued
-Very rhetorically forceful
-Based on straightforward misunderstandings
-Full of infuriatingly confident statements of highly contestable philosophical and empirical assumptions.
-Reliant on guilt-by-association tactics of an obviously unreasonable sort**: e.g. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel.
-Focused on attacking motives, not just ideas.
-Gendered in a way that will play directly to the personal insecurities of some male EAs.
Alas, stuff can be all of those things and also identify some genuine errors we're making. It's important we remain open to that, and also don't get too polarized politically by this kind of stuff ourselves.
* (i.e. he leaves out reasons to be longtermist that don't depend on total utilitarianism or on adding happy people being good, doesn't discuss why you might reject person-affecting population ethics, etc.)
** I say "of an unreasonable sort" because in principle people's associations can be legitimately criticized if they have bad effects, just like anything else.
I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which, whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if anyone in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside". That's a signal someone is a bad leader in my view, which is useful knowledge going forward. (I'm not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
Your discussion of the "good" in the book doesn't mention a part of Amia's foreword that I think is a fairly powerful critique (though far from establishing "effective altruism is bad as currently practiced" or anything that strong):
"These [above] are some of the questions raised when the story of Effective Altruism's success is told not by its proponents, but by those engaged in liberation struggles and justice movements that operate outside Effective Altruism's terms. These struggles, it must be said, long predate Effective Altruism, and it is striking that Effective Altruism has not found anything very worthwhile in them: in the historically deep and ongoing movements for the rights of working-class people, nonhuman animals, people of color, Indigenous people, women, incarcerated people, disabled people, and people living under colonial and authoritarian rule. For most Effective Altruists, these movements are, at best, examples of ineffective attempts to do good; negative examples from which to prescind or correct, not political formations from which to learn, with which to create coalition, or to join."
(Got the quote from David Thorstad's blog: https://ineffectivealtruismblog.com/2023/02/25/the-good-it-promises-the-harm-it-does-part-1-introduction/)
Now, we can debate the extent to which this is true (most EAs are actually pretty sympathetic to animal rights activism, I suspect; Open Phil gave money to criminal justice reform, etc.). But insofar as it is true, I take it the challenge is something like: "what's more likely, that all those movements were in fact ineffective, or that you're biased demographically against them?" And I think the bite of the challenge comes from something like this: most EAs are *liberal/centre-left* in political orientation, so they probably believe that historically these movements have been of very high value and have produced important insights about the world and social reality. (Even if we also think some other things have been high value too, including perhaps some that many people in these movements disliked or opposed.) So how come they act like those movements probably aren't still doing that? What changed?
I think there are lots of good responses that can be made to this, but it's still a challenge very much worth thinking about. More worth thinking about than gloating or getting angry over the dumbest or most annoying things the book says. (And to be clear, I do find most of the passages Richard quotes in his review pretty annoying.)
Also, I don't know if Spencer Greenberg's podcast with Will has been recorded yet, but if it hasn't, I think he absolutely should ask Will what he thinks the phrase about "extensive and significant mistakes" here actually refers to. EDIT: Having listened (vaguely, while working) to most of the Sam Harris interview with Will, as far as I can tell Harris entirely failed to ask anything about this, which is a huge omission. Another question Spencer could ask Will is: did you specify this topic was off-limits to Harris?
I mostly agree with this, and upvoted strongly, but I don't think the scare quotes around "criticism" are warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful; it's just a different good and useful thing.
Any claim that advising people to earn to give is inherently really bad needs to either defend the view that "start a business or take another high-paying job" is inherently immoral advice, or explain why it becomes immoral when you add "and give the money to charity" or when it's aimed at EAs specifically. It's possible that can be done, but I think it's quite a high bar. (Which is not to say EtG advice couldn't be improved in ways that make future scandals less likely.)
This is very important if true, because it suggests that with due diligence, EA leaders could have known that it was morally dodgy to be associated with FTX, even before the current blow-up. In comparison, if the story is "previously reasonably-ethical-by-finance-standards trader steals to cover losses in a panic", then while you can say there is always some risk of something like that, it's not really the kind of thing where you can blame people for associating with someone beforehand. I think it'd be good if some EA orgs had a proper look into which of these narratives is more correct when they do a post-mortem on this whole disaster.
I think two things are being conflated here into a third position no one holds. These two claims:
-Some people don't like the big-R community very much.
-Some people don't think improving the world's small-r rationality/epistemics should be a leading EA cause area.
are getting conflated into:
-People don't think it's important to try hard at being small-r rational.

I agree that some people might be running together the first two claims, and that is bad, since they are independent, and it could easily be high-impact to work on improving collective epistemics in the outside world even if the big-R rationalist community were bad in various ways. But holding the first two claims (which I think I do, moderately) doesn't imply the third. I think the rationalists are often not that rational in practice, and are too open to racism and sexism. And I also (weakly) think that we don't currently know enough about "improving epistemics" for it to be a tractable cause area. But obviously I still want us to make decisions rationally, in the small-r sense, internally. Who wouldn't? Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand.
Daniel's behavior here is genuinely heroic, and I say that as someone who is pretty skeptical of AI takeover being a significant risk*.
*(I still think the departure of safety people is bad news though.)
Wasn't the OpenAI thing basically the opposite of the mistake with FTX, though? With FTX, people ignored what appears to have been a fair amount of evidence that a powerful, allegedly ethical businessperson was in fact shady. At OpenAI, people seem to have got evidence (or what they perceived as evidence; we've no strong reason to think they were wrong) that a powerful, allegedly ethically motivated businessperson was in fact shady, so they learnt the lessons of FTX and tried to do something about it (and failed).
Beware popular discussions of AI "sentience"
It's pretty damning of an event, in my view, if people are saying things beyond "some races are worse than others and don't deserve respect". (Or indeed, if they are literally saying just that.)
Many people in the rationalist community who are not themselves bigoted seem to really hate the idea that HBD people are covering up bad intentions with a veneer of just being interested in scientific questions about the genetics of intelligence, because they pattern-match it to accusations of "dog-whistling" on Twitter and correctly note that such accusations are epistemically dodgy, because they are so hard to disprove even in cases where they are false. (And also, the rationalists themselves, I think, often are interested in scientific racist ideas simply because they want to know whether scary taboo things are true.) But these rationalists should, in my view, remember that:
A) It IS possible for people to "hide their power level", so to speak (https://knowyourmeme.com/memes/hide-your-power-level), and people on the far right (amongst others) do do that. (Unsurprisingly, as they have strong incentives to do so.) Part of the reason this sometimes works is because most people understand that accusations that someone is doing this are sometimes made frivolously, because they are so hard to disprove.
B) There are people who hate Black people (and in the context of US HBD it usually is about Black people, even if literal Nazis care more about antisemitism), and who enjoy participating in groups that are hostile to them. (These people can easily be Jewish or Asian, so "but they're not actually a white supremacist" is not much of a defense here.)
C) For extremely obvious reasons, scientific racism is extremely attractive to people who genuinely hate Black people.
D) Scientific racism is extremely unpopular in the wider world of people who don't hate Black people.
Together, A)-D) make it, I suspect, very easy to attract the kind of people who say things more extreme than "some races are worse than others and don't deserve respect" if you signal openness to HBD/scientific racism by attracting speakers associated with it. They also mean that some (in my view, probably most, but I can't prove that) scientists who believe in scientific racism but claim a lack of personal prejudice are just lying about it, and actually are hostile to Black people.
"- Alice has accused the majority of her previous employers, and 28 people (that we know of) of abuse. She accused people of: not paying her, being culty, persecuting/oppressing her, controlling her romantic life, hiring stalkers, threatening to kill her, and even, literally, murder."
The section of the doc linked to here does not in fact provide any evidence whatsoever of Alice making wild accusations against anyone else, beyond plain assertions (i.e. there are no links to other people saying this).