“Will basically threatened Tara,”
I would VERY much like to get more information on this (though I understand if Naia feels she can’t say more). This sounds really, really bad, but also like a lot turns on exactly how far ‘basically threatened’ is from ‘threatened’ without the qualifier.
David Mathers
As a person with an autism (at the time “Asperger’s”) diagnosis from childhood, I think this is very tricky territory. I agree that autistics are almost certainly more likely to make innocent-but-harmful mistakes in this context. But I’m a bit worried about overcorrection for that, for a few reasons:
Firstly, men in general (and presumably women to some degree also), autistic or otherwise, are already incredibly good at self-deception about the actions they take to get sex (source: basic common sense). So giving a particular subset of us more of an excuse to think “I didn’t realize I would upset her”, when the actual facts are more “I did know there was a significant risk, but I couldn’t resist because I really wanted to have sex with her”, seems a bit fraught. I think this is different from the sort of predatory, unrepentant narcissism that Jonas Vollmer says we shouldn’t ascribe to Owen: it’s a kind of self-deception perfectly compatible with genuine guilt at your own bad behavior, and certainly with being a kind and nice person overall. I actually think the feminism-associated* meme that sexual bad behavior is always really about misogyny or dominance can sometimes obscure this for people a bit.
Secondly, I worry that people who are both autistic or at least autistic-coded and predatory can take advantage of a perception that their bad behavior is always a mistake and not deliberate. I strongly suspect SBF, though he is not a diagnosed autistic, deliberately exploited a perception that “nerds” are not socially savvy enough to engage in deliberate deception.
Thirdly, I’m worried about being patronized.
Fourthly, I’m worried that if the association between “autistic” and (even accidental) “sexual misconduct risk” becomes too strong in people’s heads, this will actually lead to overcorrection in the other direction, with people becoming too reluctant to hire autistics. (Probably not an issue in EA to the degree it would be in less autistic communities, though.) We don’t actually know how much more likely autistics are to behave badly in which particular ways.
Alas 4 and 1 kind of point in opposite directions.
*My guess is that feminists who’ve actually written carefully and at length about sexual bad behaviour have more nuanced views than this, and often when they cite “misogyny” as an explanation, they mean something structural, not something in the psychology of the people who behave badly.
I appreciate the spirit of this post, as I am not a Yudkowsky fan, think he is crazy overconfident about AI, am not very keen on rationalism in general, and think the EA community sometimes gets overconfident in the views of its “star” members. But some of the philosophy stuff here seems not quite right to me, though none of it is egregiously wrong, and on each topic I agree that Yudkowsky is way, way overconfident. (Many professional philosophers are way overconfident too!)
As a philosophy of consciousness PhD: the view that animals lack consciousness is definitely an extreme minority view in the field, but it’s not a view that no serious experts hold. Daniel Dennett has denied animal consciousness for roughly Yudkowsky-like reasons, I think. (EDIT: Actually maybe not: see my discussion with Michael St. Jules below. Dennett is hard to interpret on this, and also seems to have changed his mind to fairly definitively accept animal consciousness more recently. But his earlier stuff on this was at the very least opposed to confident assertions that we just know animals are conscious and that any theory that says otherwise is crazy.) And more definitely, Peter Carruthers (https://scholar.google.com/citations?user=2JF8VWYAAAAJ&hl=en&oi=ao) used to defend the view that animals lack consciousness because they lack a capacity for higher-order thought. (He changed his mind in the last few years, but I personally didn’t find his explanation as to why made much sense.) Likewise, it’s far from obvious that higher-order thought views imply that any animals other than humans are conscious, and still less obvious that they imply all mammals are conscious.* Indeed, a standard objection to HOT views, mentioned in the Stanford Encyclopedia of Philosophy page on them last time I checked, is that they are incompatible with animal consciousness. Though that does of course illustrate that you are right that most experts take it as obvious that mammals are conscious.
As for the zombies stuff: you are right that Yudkowsky is mistaken, and mistaken for the reasons you give, but it’s not a “no undergraduate would make this” error. Trust me: I have marked undergrads a little, though I’ve never been a prof, and far worse confusion is common. It’s not even a case of “if an undergrad made this error in 2nd year, I’d assume they didn’t have what it takes to become a prof”. Philosophy is really hard and the error is quite subtle; plus, many philosophers of mind do think you can get from the possibility of zombies to epiphenomenalism given plausible further assumptions, so when Yudkowsky read into the topic he probably encountered lots of people assuming that accepting the possibility of zombies commits you to epiphenomenalism. But yes, the general lesson of “Dave Chalmers, not an idiot” is obviously correct.
As for functional decision theory: I read Wolfgang Schwarz’s critique when it came out, and for me the major news in it was that a philosopher as qualified as Wolfgang thought it was potentially publishable given revisions. It is incredibly hard to publish in good philosophy journals; at the very top end they have rejection rates of >95%. I have literally never heard of a non-academic doing so without even an academic coauthor. I’d classify it as a genuinely exceptional achievement to write something Wolfgang gave a revise-and-resubmit verdict to with no formal training in philosophy. I say this not because I think it means anyone should defer to Yudkowsky and Soares (again, I think their confidence on AI doom is genuinely crazy), but just because it feels a bit unfair to me to see what was actually an impressive achievement denigrated.
*My own view is that IF animals are not capable of higher-order thought there isn’t even a fact of the matter about whether they are conscious, but that only justifies downweighting their interests to a less than overwhelming degree, and so doesn’t really damage arguments for veganism. Though it would affect how much you should prioritise animals v. humans.
Without in any sense wanting to take away from the personal responsibility of the people who actually did the unethical, and probably illegal, trading, I think there might be a couple of general lessons here:
1) An attitude of ‘I take huge financial risks because I’m trading for others, not myself, and money has approx. 0 diminishing marginal utility for altruism, plus I’m so ethical I don’t mind losing my shirt’ might sound like a clever idea. But crucially, it is MUCH easier psychologically to think you’ll just eat the loss and the attendant humiliation and loss of status before you are actually facing losing vast sums of money for real. Assuming (as seems likely to me) that SBF started out with genuine good intentions, my guess is this was hard to anticipate because a self-conception as “genuinely altruistic” blocked him from the idea he might do wrong. The same thing probably stopped others hearing about SBF taking on huge risks, which of course he was open* about, from realizing this danger.
2) On reflection, the following is a failure mode for us as a movement combining a lot of utilitarians (and more generally, people who understand that it is *sometimes, in principle* okay to do morally dodgy things when the stakes are really, really high; e.g. Schindler made arms for the Nazis) with an encouragement to earn to give: most people take to heart the standard advice of ‘don’t do conventionally immoral things in order to maximize; it will almost always go wrong by utilitarian standards themselves, plus there is moral uncertainty etc.’ But the people who actually make major money are the least risk averse, because of the trade-off between risk and return in finance. Those people are probably disproportionately likely to ignore the cautious warnings about doing evil for good effects, because there is very likely a connection between this and being less risk averse. (I am not saying this is what happened here: the motivating factor for SBF in appropriating the customer funds might well have really mostly been simple fear of being publicly embarrassed by his losses, and have nothing to do with ‘I have an obligation to make the money back to help save the world’. There have been plenty of cases of traders doing this sort of thing before who had never heard of utilitarianism. But I think the current disaster has nonetheless brought this risk to light.)
*(I’m talking about the apparently legit trading that got him into financial trouble, not the unethical speculation with customer funds that came after)
Please people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims he moderated his views, he is still very racist as far as I can tell.
Hanania called for trying to get rid of all non-white immigrants in the US and the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even post his reform was writing about the need for the state to harass and imprison Black people specifically (‘a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people’ https://en.wikipedia.org/wiki/Richard_Hanania). Yet in the face of this, and after he made an incredibly grudging apology about his most extreme stuff (after journalists dug it up), he’s been invited to Manifold’s events and put on Richard Yetter Chappell’s blogroll.
DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote “decoupling” factual beliefs from policy recommendations, which can be useful, do not duck and dive around the fact that virtually every major promoter of scientific racism ever, including allegedly mainstream figures like Jensen, worked with or published with actual literal Nazis (https://www.splcenter.org/fighting-hate/extremist-files/individual/arthur-jensen).
I love most of the people I have met through EA, and I know that, despite what some people say on twitter, we are not actually a secret crypto-fascist movement (nor is longtermism specifically, which, whether you like it or not, is mostly about what its EA proponents say it is about). But there is in my view a disturbing degree of tolerance for this stuff in the community, mostly centered around the Bay specifically. And to be clear, I am complaining about tolerance for people with far-right and fascist (“reactionary” or whatever) political views, not people with any particular personal opinion on the genetics of intelligence. A desire for authoritarian government enforcing the “natural” racial hierarchy does not become okay just because you met the person with the desire at a house party and they seemed kind of normal and chill, or super-smart and nerdy.
I usually take a way more measured tone on the forum than this, but here I think real information is given by getting shouty.
*Anyone who thinks it is automatically far-right to think about any kind of genetic enhancement at all should go read some Culture novels, and note the implied politics (or indeed, look up the author’s actual die-hard libertarian socialist views.) I am not claiming that far-left politics is innocent, just that it is not racist.
I think that the evidence you cite for “careening towards Venezuela” being a significant risk comes nowhere near to showing that, and that as someone with a lot of sway in the community you’re being epistemically irresponsible in suggesting otherwise.
Of the links you cite as evidence:
The first is about the rate of advance slowing, which is not a collapse or regression scenario. At most it could contribute to such a scenario if we had reason to think one was otherwise likely.
The second describes an already-existing phenomenon of cost disease which, while concerning, has been compatible with high rates of growth and progress over the past 200 years.
The third is just a blog post about how some definitions of “democratic” are theoretically totalitarian in principle, and contains 0 argument (even a bad one) that totalitarianism risk is high, is rising, or will become high.
The fourth is mostly just a piece that takes for granted that some powerful American liberals, and some fraction of American liberals generally, like to shut down dissenting opinion, and then discusses inconclusively how much this will continue and what can be done about it. But this seems obviously insufficient to cause the collapse of society, given that, as you admit, periods of liberalism where you could mostly say what you like without being cancelled have been the exception not the rule over the past 200 years, and yet growth and progress have occurred. Not to mention that they have also occurred in places like the Soviet Union, or China from the early 1980s onward, that have been pretty intolerant of ideological dissent.
The fifth is a highly abstract and inconclusive discussion of the possibility that having a bunch of governments that grow/shrink in power as their policies are successful/unsuccessful might produce better policies than an (assumed) status quo where this doesn’t happen*, combined with a discussion of the connection of this idea to an obscure far-right Bay Area movement of at most a few thousand people. It doesn’t actually argue for the idea that dangerous popular ideas will eventually cause civilizational regression at all; it’s mostly about what would follow if popular ideas tended to be bad in some general sense and you could get better ideas by having a “free market for governments” where only successful govs survived.
The last link, on dysgenics and fertility collapse, largely consists of you arguing that these are not as threatening as some people believe(!). In particular, you argue that world population will still be slightly growing by 2100 and that it’s just really hard to project current trends beyond then. And you argue that dysgenic trends are real but will only cause a very small reduction in average IQ, even absent a further Flynn effect (and “absent a further Flynn effect” strikes me as unlikely if we are talking about world IQ, not US). Nowhere does it argue these things will be bad enough to send progress into reverse.
This is an incredibly slender basis to be worrying about the idea that the general trend towards growth and progress of the last 200 years will reverse absent one particular transformative technology.
*It plausibly does happen to some degree. The US won the Cold War partly because it had better economic policies than the Soviet Union.
In my view, Phil Torres’ stuff, whilst not entirely fair and quite nasty rhetorically, is far from the worst this could get. He actually is familiar with what some people within EA think in detail, reports that information fairly accurately, even if he misleads by omission somewhat*, and makes criticisms of controversial philosophical assumptions of some leading EAs that have some genuine bite, and might be endorsed by many moral philosophers. His stuff actually falls into the dangerous sweet spot where legitimate ideas, like ‘is adding happy people actually good anyway’, get associated with less fair criticism (“Nick Beckstead did white supremacy when he briefly talked about different flow-through effects of saving lives in different places”), potentially biasing us against the legit stuff in a dangerous way.
But there could, again in my view, easily be a wave of criticism coming from people who share Torres’ political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven’t really taken the time to understand EA/longtermist/AI safety ideas in the first place. I’ve already seen one decently well-known anti-“tech” figure on twitter re-tweet a tweet that in its entirety consisted of “long-termism is eugenics!”. People should prepare emotionally (I have already mildly lost my temper on twitter in a way I shouldn’t have, but at least I’m not anyone important!) for keeping their cool in the face of criticism that is:
-Poorly argued
-Very rhetorically forceful
-Based on straightforward misunderstandings
-Full of infuriatingly confident statements of highly contestable philosophical and empirical assumptions
-Reliant on guilt-by-association tactics of an obviously unreasonable sort**: i.e. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel
-An attack on motives, not just ideas
-Gendered in a way that will play directly to the personal insecurities of some male EAs.
Alas, stuff can be all those things and also identify some genuine errors we’re making. It’s important we remain open to that, and also don’t get too polarized politically by this kind of stuff ourselves.
* (i.e. he leaves out reasons to be longtermist that don’t depend on total utilitarianism or adding happy people being good, doesn’t discuss why you might reject person-affecting population ethics etc.)
** I say “of an unreasonable sort” because in principle people’s associations can be legitimately criticized if they have bad effects, just like anything else.
Your discussion of the ‘good’ in the book doesn’t mention a part of Amia’s foreword that I think is a fairly powerful critique (though far from establishing “effective altruism is bad as currently practiced” or anything that strong):
‘These [above] are some of the questions raised when the story of Effective Altruism’s success is told not by its proponents, but by those engaged in liberation struggles and justice movements that operate outside Effective Altruism’s terms. These struggles, it must be said, long predate Effective Altruism, and it is striking that Effective Altruism has not found anything very worthwhile in them: in the historically deep and ongoing movements for the rights of working-class people, nonhuman animals, people of color, Indigenous people, women, incarcerated people, disabled people, and people living under colonial and authoritarian rule. For most Effective Altruists, these movements are, at best, examples of ineffective attempts to do good; negative examples from which to prescind or correct, not political formations from which to learn, with which to create coalition, or to join.’
(Got the quote from David Thorstad’s blog: https://ineffectivealtruismblog.com/2023/02/25/the-good-it-promises-the-harm-it-does-part-1-introduction/)
Now, we can debate the extent to which this is true (most EAs are actually pretty sympathetic to animal rights activism, I suspect; Open Phil gave money to criminal justice reform, etc.). But insofar as it is true, I take it the challenge is something like: ‘what’s more likely, that all those movements were in fact ineffective, or that you’re biased demographically against them?’ And I think the bite of the challenge comes from something like this. Most EAs are *liberal/centre-left* in political orientation. So they probably believe that historically these movements have been of very high value and have produced important insights about the world and social reality. (Even if we also think some other things have been high value too, including perhaps some things many people in these movements disliked or opposed.) So how come they act like those movements probably aren’t still doing that? What changed?
I think there are lots of good responses that can be made to this, but it’s still a challenge very much worth thinking about. More worth thinking about than gloating/getting angry over the dumbest or most annoying things the book says. (And to be clear, I do find most of the passages Richard quotes in his review pretty annoying.)
Also, I feel mean for pressing the point against someone who is clearly finding this stressful and is no more responsible for it than anyone else in the know, but I really want someone to properly explain what the warning signs the leadership saw were, who saw them, and what was said internally in response to them. I don’t even know how much that will help with anything, to be honest, so much as I just want to know. But at least in theory, anyone who behaved really badly should be removed from positions of power. (And I do mean just that: positions where they run big orgs. I’m not saying they should be shunned or that they can’t be allowed to contribute to the community intellectually any more.) If Rebecca won’t do this, someone else should. But also, depending on how bad the behavior of leaders actually was, in NOT saying more, people with inside knowledge are probably either a) helping people escape responsibility for really bad behavior, or b) making what were reasonably sympathetic mistakes that many people might have made in the same position sound much worse than they were through vagueness, leading to unfair reputational damage. (EDIT: I should say that sadly, I think a) is much the more likely possibility.) Not to mention that right now it is not clear which leaders are the responsible ones, which is unfair on anyone who actually didn’t do anything wrong. That could include not just people with no knowledge of the warning signs, but people who knew about them, complained internally, were ignored, and then didn’t take things public for defensible reasons.
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if any one in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward. (I’m not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I mostly agree with this, and upvoted strongly, but I don’t think the scare quotes around “criticism” is warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful, it’s just a different good and useful thing.
Also, I don’t know if Spencer Greenberg’s podcast with Will has been recorded yet, but if it hasn’t been, I think he absolutely should ask Will what he thinks the phrase about “extensive and significant mistakes” here actually refers to. EDIT: Having listened (vaguely, while working) to most of the Sam Harris interview with Will, as far as I can tell Harris entirely failed to ask anything about this, which is a huge omission. Another question Spencer could ask Will is: did you specify this topic was off-limits to Harris?
Any claim that advising people to earn to give is inherently really bad needs to either defend the view that “start a business or take another high paying job” is inherently immoral advice, or explain why it becomes immoral when you add “and give the money to charity” or when it’s aimed at EAs specifically. It’s possible that can be done, but I think it’s quite a high bar. (Which is not to say EtG advice couldn’t be improved in ways that make future scandals less likely.)
This is very important if true, because it suggests that, with due diligence, EA leaders could have known it was morally dodgy to be associated with FTX, even before the current blow-up. In comparison, if the story is “previously reasonably ethical by finance standards trader steals to cover losses in a panic”, then while you can say there is always some risk of something like that, it’s not really the kind of thing where you can blame people for associating with someone beforehand. I think it’d be good if some EA orgs had a proper look into which of these narratives is more correct when they do a post-mortem on this whole disaster.
Wasn’t the OpenAI thing basically the opposite of the mistake with FTX though? With FTX people ignored what appears to have been a fair amount of evidence that a powerful, allegedly ethical businessperson was in fact shady. At OpenAI, people seem to have got (what they perceived as, but we’ve no strong evidence they were wrong) evidence, that a powerful, allegedly ethically motivated businessperson was in fact shady, so they learnt the lessons of FTX and tried to do something about it (and failed.)
Beware popular discussions of AI “sentience”
’e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, Will MacAskill’s career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR’
Warning, this is coming from quite a tribal place, since I was an Oxford philosopher back when GWWC was first getting started, so consider me biased but:
Obviously FTX was very bad, and the only provably very harmful thing that the community has done so far, but I still want to push back here. CEA and Will have been heavily involved with the bits of EA that seem to me to have obviously worked fairly well: global development stuff and farm animal welfare campaigning. Many lives have been saved by donations to AMF. Meanwhile, by your own lights, you think it is more likely than not that the most important effect of the Bay Area Rationalist cluster and the FHI has been to speed AI capabilities research that you yourselves think of as a near-term extinction risk. It seems like, by your own lights, Will’s career as a public intellectual (as opposed to his and CEA’s involvement in setting up Alameda) has been harmful to the exact extent that it has promoted ideas about working on AI risk that he got from FHI/MIRI/CFAR people, whilst it has been good otherwise (i.e. when he has been promoting ideas that are closer to the very beginnings of Oxford/GiveWell EA: at least if you agree that global development/animal welfare EA are good in themselves).
I think this is too pessimistic: why did one of Biden’s cabinet ask for Christiano in one of the top positions at the US gov’s AI safety org if the government will reliably prioritize the sort of factors you cite here to the exclusion of safety?: https://www.nist.gov/news-events/news/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety
I also think that whether or not the government regulates private AI has little to do with whether it militarizes AI. It’s not like there is one dial with “amount of government” and it just gets turned up or down. Government can do very little to restrict what Open AI/DeepMind/Anthropic do, but then also spend lots and lots of money on military AI projects. So worries about militarization are not really a reason not to want the government to restrict Open AI/DeepMind/Anthropic.
Not to mention that insofar as the basic science here is getting done for commercial reasons, any regulations which slow down the commercial development of frontier models will actually slow down the progress of AI for military applications too, whether or not that is what the US gov intends, and regardless of whether those regulations are intended to reduce X-risk or to protect the jobs of voice actors in cartoons facing AI replacement.
I’m not sure it is a full misreading, sadly. I don’t think it’s a fair characterization of Ord, Greaves, and MacAskill (though I am kind of biased because of my pride in having been an Oxford philosophy DPhil). It would be easy to give a radical deliberative democracy spin on Will and Toby’s “long reflection” ideas in particular. But all the “pivotal act” stuff coming out of certain people in the Bay sure sounds like an attempt to temporarily seize control of the future without worrying too much about actual consent. Of course, the idea (or at least Yudkowsky’s original vision for “coherent extrapolated volition”) is that eventually the governing AIs will just implement what we all collectively want. And that could happen! But remember Lenin thought that the state would eventually “wither away” as Marx predicted, once the dictatorship of the proletariat had taken care of building industrial socialism...
Not to mention there are, shall we say, longtermism adjacent rich people like Musk and Thiel who seem pretty plausibly power-seeking, even if they are not really proper longtermists (or at least, they are not EAs).
(Despite all this, I should say that I think the in-principle philosophical case for longtermism is very strong. Alas, ideas can be both correct and dangerous.)
‘- Alice has accused the majority of her previous employers, and 28 people—that we know of—of abuse. She accused people of: not paying her, being culty, persecuting/oppressing her, controlling her romantic life, hiring stalkers, threatening to kill her, and even, literally, murder.’
The section of the doc linked to here does not in fact provide any evidence whatsoever of Alice making wild accusations against anyone else, beyond plain assertions (i.e. there are no links to other people saying this).