I heard someone from Kevin Esvelt’s lab talking about this + pain-free lab mice once
I upvoted this because I like the passion, and I too feel a desire to passionately defend EA and the disempowered beneficiaries EAs seek to protect, who are indirectly harmed by this kind of sloppy coverage. I do hope people respond, and I think EAs err towards being too passive about media coverage.
But I think important parts of this take are quite wrong.
Most people just aren’t basically sympathetic to EA, let alone EAs-waiting-to-happen; they have a tangle of different moral intuitions and aren’t very well-informed or thoughtful about it. Sure, they’ll say they want more effective charity, but they also want to give back to their local community and follow fads and do what makes them feel good and support things that helped them in particular and keep the money for themselves and all kindsa stuff. So, I don’t think this is surprising, and I think it’s important for EAs to be clear-eyed about how they’re different from other people.
I don’t think that means EAs could never be a dominant force in philanthropy or whatever; most people throughout history didn’t care about anti-racism or democracy but they’re popular now; caring about what your ancestors thought has declined a lot; things can change, I just don’t think it’s inevitable or foregone (or couldn’t reverse).
If someone wrote an article about a minority group and described them with a few nasty racist stereotypes, there would be massive protests, retractions, apologies and a real effort to ensure that people were well informed about the reality.
People would do this for some kinds of minorities (racial or sex/gender minorities), and for racist stereotypes. I don’t think they would for people with unusual hobbies or lifestyle choices or belief sets, with stereotypes related to those things. “Not being racist”, or not discriminating against some kinds of minorities, is a sacred value for much of liberal elite society, but many kinds of minorities aren’t covered by that.
Crappy stereotypes are always bad, but I don’t think being a minority means you shouldn’t potentially be subject to serious criticism (though of course, unfortunately, this particular criticism isn’t intellectually serious).
Sounds like the reversal test
I don’t think I saw the 80k thing in particular at the time
I agree with some of the thrust of this question, but want to flag that I think these sources and this post kind of conflate FTX being extravagant and SBF personally being so. E.g. if you click through, the restaurant tabs were about DoorDash orders for FTX, not SBF personally. I think it’s totally consistent to believe it’s worth spending a lot on employee food (especially given they were trying to retain top talent in a difficult location in a high-paying field) while being personally more abstemious.
As an EA at the time (let’s say mid-2022), I knew there were aspects of the FTX situation that were very plush. I still believed it was part of SBF’s efforts to make as much money as possible for good causes, and had heard SBF say things communicating that he thought it was worth spending a lot in the course of optimizing intensely for having the best shot of making a ton of money in the long run, and was generally skeptical of the impact of aiming at frugality. My impression at the time was indeed that the Corolla was a bit of a gimmick (and that the beanbag was about working longer, not saving money), but that SBF was genuinely very altruistic and giving his wealth away extremely quickly by the standards of new billionaires.
Yeah re the export controls, I was trying to say “I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so” (though I used the word “ambiguous” because my impression was that some relevant people saw it as a point in favor of that work that it also mostly didn’t directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is some of those people were “trying to establish overwhelming dominance over China” but not by “investing heavily in AI”.
I largely agree with this post, and think this is a big problem in general. There’s also a lot of adverse selection that can’t be called out because it’s too petty and/or would require revealing private information. In a reasonable fraction of cases where I know the details, the loudest critic of a person or project is someone who has a pretty substantial negative COI that isn’t being disclosed, like that the project fired them or defunded them or the person used to date them and broke up with them or something. As with positive COIs, there’s a problem where being closely involved with something both gives you more information you could use to form a valid criticism (or make a good hire or grant) that others might miss, and is correlated with factors that could bias your judgment.
But with hiring and grantmaking there are generally internal processes for flagging these, whereas when people are making random public criticisms, there generally isn’t such a process
This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC (and that EAs were involved in this, because they thought it was true and important, and because they thought the then-current false fears in the greater natsec community were enhancing arms race risks) (and this was when Jason was leading CSET, and OP was supporting its founding). Some of the same people were also supportive of export controls, which are more ambiguous-sign here.
yeah, on second thought I think you’re right that at least the arg “For a fixed valuation, potential is inversely correlated with probability of success” probably got a lot less attention than it should have, at least in the relevant conversations I remember
I’m a bit confused about how the first part of this post connects to the final major section… I recall people saying many of the things you say you wish you had said… do you think people were unaware FTX, a recent startup in a tumultuous new industry, might fail? Or weren’t thinking about it enough?
I agree strongly with your last paragraph, but I think most people I know who bounced from EA were probably just more gold-digging, fad-following, or sensitive to public opinion, and less willing to do what’s hard when circumstances become less comfortable (but of course they won’t come out and say it and plausibly don’t admit it to themselves). Of the rest, it seems like they were bothered by a combination of the fraud and how EAs responded to the collapse, and updated towards the dangers of more utilitarian-style reasoning and the people it attracts.
Another meta thing about the visuals is that I don’t like the +[number] feature that makes it so you can’t tell, at a glance, that the voting is becoming very tilted towards the right side
I was also convinced by this and other things to write a letter, and am commenting now to make the idea stay salient to people on the Forum.
The scientific proposition is “are there racial genetic differences related to intelligence” right, not “is racism [morally] right”?
I find it odd how much such things seem to be conflated; if I learned that Jews have an average IQ 5 points lower than non-Jews, I would… still think the Holocaust and violence towards and harassment of Jews was abhorrent and horrible? I don’t think I’d update much/at all towards thinking it was less horrible. Or if you could visually identify people whose mothers had drunk alcohol during pregnancy, and they were statistically a bit less intelligent (as I understand them to be), enslaving them, genociding them, or subjecting them to Jim Crow-style laws would seem approximately as bad as doing so to some group that’s slightly more intelligent on average.
I agree with
if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
and
if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies because correct strategies for improving the world tend to require an accurate world model including being accurate about things that are controversial.
and
many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
A crux seems to be that I think AI alignment research is a fairly narrow domain, more akin to bacteriology than e.g. “finding EA cause X” or “thinking about if newly invented systems of government will work well”. This seems more true if I imagine for my AI alignment researcher someone trying to run experiments on sparse autoencoders, and less true if I imagine someone trying to have an end-to-end game plan for how to make transformative AI as good as possible for the lightcone, which is obviously a more interdisciplinary topic more likely to require correct contrarianism in a variety of domains. But I think most AI alignment researchers are more in the former category, and will be increasingly so.
Two points:
(1) I don’t think “we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits” or “all drugs should be legal and ideally available for free from the state” are the most popular political positions in the US, nor close to them, even for D-voters.
(2) your original question was about supporting things (e.g. Lysenkoism), and publicly associating with things, not about what they “genuinely believe”
But yes, per my earlier point, if you told me for example “there are three new researchers with PhDs from the same prestigious university in [field unrelated to any of the above positions, let’s say bacteriology], the only difference I will let you know about them is that one (A) holds all of the above beliefs, one (B) holds some of the above beliefs, and one (C) holds none of the above beliefs; predict which one will improve the odds of their lab making a bacteriology-related breakthrough the most”, I would say the difference between them is small, i.e. these differences are only weakly correlated with the odds of their lab making a breakthrough and don’t have much explanatory power. And, assuming you meant “support” not “genuinely believe”, and cutting the two bullets I claim aren’t even majority positions among, for example, D-voters, I’d say B>A>C, but barely
[not trying to take a position on the whole issue at hand in this post here] I think I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical seeming one who didn’t. And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists. Claims without justification, mostly because I find it helpful to articulate my beliefs aloud for myself:
I don’t think people generally having correct beliefs on irrelevant social issues is very correlated with having correct beliefs on their area of expertise
I think in most cases, having unpopular and unconventional beliefs is wrong (most contrarians are not correct contrarians)
A bunch of unpopular and unconventional things are true, so to be maximally correct you have to be a correct contrarian
Some people aren’t really able to entertain unpopular and unconventional ideas at all, which is very anticorrelated with the ability to have important insights and make huge contributions to a field
But lots of people have very domain-specific ability to have unpopular and unconventional ideas while not having/not trusting/not saying those ideas in other domains.
A large subset of the above are both top-tier in terms of ground-breaking insights in their domain of expertise, and put off by groups that are maximally open to unpopular and unconventional beliefs (which are often shitty and costly to associate with)
I think people who are top-tier in terms of ability to have ground-breaking insights in their domain disproportionately like discussing unpopular and unconventional beliefs from many different domains, but I don’t know if, among people who are top-tier in terms of ground-breaking insights in a given domain, the majority prefer to be in more or less domain-agnostically-edgy crowds.
(1) I agree if your timelines are super short, like <2yrs, it’s probably not worth it. I have a bunch of probability mass on longer timelines, though some on really short ones
Re (2), my sense is some employees already have had some of this effect (and many don’t. But some do). I think board members are terrible candidates for changing org culture; they have unrelated full-time jobs, they don’t work from the office, they have different backgrounds, most people don’t have cause to interact with them much. People who are full-time, work together with people all day every day, know the context, etc., seem more likely to be effective at this (and indeed, I think they have been, to some extent in some cases)
Re (3), seems like a bunch of OAI people have blown the whistle on bad behavior already, so the track record is pretty great, and I think them doing that has been super valuable. And one whistleblower seems to do much more good than several converts do harm. I agree it can be terrible for mental health for some people, and people should take care of themselves.
Re (4), um, this is the EA Forum, we care about how good the money is. Besides crypto, I don’t think there are many ways for many of the relevant people to make similar amounts of money on similar timeframes. Actually I think working at a lab early was an effective way to make money. A bunch of safety-concerned people for example have equity worth several millions to tens of millions, more than I think they could have easily earned elsewhere, and some are now billionaires on paper. And if AI has the transformative impact on the economy we expect, that could be worth way more (and it being worth more is correlated with it being needed more, so extra valuable); we are talking about the most valuable/powerful industry the world has ever known here, hard to beat that for making money. I don’t think that makes it okay to lead large AI labs, but for joining early, especially doing some capabilities work that doesn’t push the most risky capabilities along much, I don’t think it’s obvious.
I agree that there are various risks related to staying too long, rationalizing, being greedy, etc., and in most cases I wouldn’t advise a safety-concerned person to do capabilities. But I think you’re being substantially too intense about the risk of speeding up AI relative to the benefits of seeing what’s happening on the inside, which seem like they’ve already been very substantial
Yes. I think most people working on capabilities at leading labs are confused or callous (or something similar, like greedy or delusional), but definitely not all. And personally, I very much hope there are many safety-concerned people working on capabilities at big labs, and am concerned about the most safety-concerned people feeling the most pressure to leave, leading to evaporative cooling.
Reasons to work on capabilities at a large lab:
To build career capital of the kind that will allow you to have a positive impact later. E.g. to be offered relevant positions in government
To positively influence the culture of capabilities teams or leadership at labs.
To be willing and able to whistleblow bad situations (e.g. seeing emerging dangerous capabilities in new models, the non-disparagement stuff).
[maybe] to earn to give (especially if you don’t think you’re contributing to core capabilities)
To be clear, I expect achieving the above to be infeasible for most people, and it’s important for people to not delude themselves into thinking they’re having a positive impact to keep enjoying a lucrative, exciting job. But I definitely think there are people for whom the above is feasible and extremely important.
Another way to phrase the question is “is it good for all safety-concerned people to shun capabilities teams, given (as seems to be the case) that those teams will continue to exist and make progress by default?” And for me the strong answer is “no”. Which is totally consistent with wanting labs to pause and thinking that just contributing to capabilities (on frontier models) in expectation is extremely destructive.
I’m confused by this post. Sam Altman isn’t an EA, afaik, and hasn’t claimed to be, afaik, and afaik no relatively in-the-know EAs thought he was, or even in recent years thought he was particularly trustworthy, though I’d agree that many have updated negative over the last year or two.
But a substantial number of EAs spent the next couple of weeks or months making excuses not to call a spade a spade, or an amoral serial liar an amoral serial liar. This continued even after we knew he’d A) committed massive fraud, B) used that money to buy himself a $222 million house, and C) referred to ethics as a “dumb reputation game” in an interview with Kelsey Piper.
This wasn’t because they thought the fraud was good; everyone was clear that SBF was very bad. It’s because a surprisingly big number of people can’t identify a psychopath. I’d like to offer a lesson on how to tell. If someone walks up to you and says “I’m a psychopath”, they’re probably a psychopath.
Very few EAs that I know did that (I’d like to see stats; of the dozens of EAs I know, none did such a thing publicly/to my knowledge, except, if I remember right, Austin Chen in an article I now can’t find). And for people who did defend Sam, I don’t know why you’d assume that the issue is them not being able to identify psychopaths, as opposed to being confused about the crimes SBF committed and believing they were the result of a misunderstanding or something like that
I think (1) is just very false for people who might seriously consider entering government, and irresponsible advice. I’ve spoken to people who currently work in government, who concur that the Trump administration is illegally checking on people’s track record of support for Democrats. And it seems plausible to me that that kind of thing will intensify. I think that there’s quite a lot of evidence that Trump is very interested in loyalty and rooting out figures who are not loyal to him, and doing background checks, of certain kinds at least, is literally the legal responsibility of people doing hiring in various parts of government (though checking donations to political candidates is not supposed to be part of that).
I’ll also say that I am personally a person who has looked up where individuals have donated (not in a hiring context), and so am an existence proof of that kind of behavior. It’s a matter of public record, and I think it is often interesting to know which political candidates different powerful figures in the spaces I care about are supporting.
If you haven’t already, you might want to take a look at this post: https://forum.effectivealtruism.org/posts/6o7B3Fxj55gbcmNQN/considerations-around-career-costs-of-political-donations