I think the basic argument that there’s a good chance that OpenAI creates an ASI, and so it’s important that they have a good safety team, remains very strong. I think for a long time the case for working at OpenAI has not been that we ought to trust the company or agree with most of its policy decisions. It’s that they might create the most powerful entity that has ever existed, and making that go better is high EV.
I see that you’ve now made an edit noting that your comment was specifically about small organizations. While I agree that the concerns about asking out colleagues and flirting with them are larger in a smaller organization than in a huge organization, I still definitely don’t think it’s obvious that such behavior should be disallowed (in fact, I believe it should generally be allowed, as long as people aren’t in one another’s reporting lines).
I disagree-voted with your claim. I wouldn’t say it’s actively professional, but I don’t think it’s inherently unprofessional either to flirt with, hit on, or ask out a colleague if neither of you is in the other’s reporting chain and your company has no policy against it. My sense is that this is accepted as a matter of course in most of the large organizations I know of and not considered bad behavior, nor should it be. Of course, given that you work together on an ongoing basis, it’s probably prudent to be especially receptive to any signs that it’s making them uncomfortable, and to stop right away if so.
I also thought the tone of your comment was snide and unpleasant, and also just overconfident: most large companies I know of don’t have a policy against their employees asking each other out (e.g. here’s an old discussion of Google and Facebook’s policies), so I don’t know why you would think they would, or consider it so obvious.
This is completely separate from the matter Frances is discussing, about having a document discussing her rape shared among her colleagues, which sounds exceedingly distressing and which I have a hard time thinking of a reasonable justification for.
I think (1) is just very false for people who might seriously consider entering government, and irresponsible advice. I’ve spoken to people who currently work in government, who concur that the Trump administration is illegally checking on people’s track record of support for Democrats. And it seems plausible to me that that kind of thing will intensify. I think that there’s quite a lot of evidence that Trump is very interested in loyalty and rooting out figures who are not loyal to him, and doing background checks, of certain kinds at least, is literally the legal responsibility of people doing hiring in various parts of government (though checking donations to political candidates is not supposed to be part of that).
I’ll also say that I am personally a person who has looked up where individuals have donated (not in a hiring context), and so am an existence proof of that kind of behavior. It’s a matter of public record, and I think it is often interesting to know which political candidates different powerful figures in the spaces I care about are supporting.
If you haven’t already, you might want to take a look at this post: https://forum.effectivealtruism.org/posts/6o7B3Fxj55gbcmNQN/considerations-around-career-costs-of-political-donations
I heard someone from Kevin Esvelt’s lab talking about this + pain-free lab mice once
I upvoted this because I like the passion, and I too feel a desire to passionately defend EA and the disempowered beneficiaries EAs seek to protect, who are indirectly harmed by this kind of sloppy coverage. I do hope people respond, and I think EAs err towards being too passive about media coverage.
But I think important parts of this take are quite wrong.
Most people just aren’t basically sympathetic to EA, let alone EAs-waiting-to-happen; they have a tangle of different moral intuitions and aren’t very well-informed or thoughtful about it. Sure, they’ll say they want more effective charity, but they also want to give back to their local community and follow fads and do what makes them feel good and support things that helped them in particular and keep the money for themselves and all kindsa stuff. So, I don’t think this is surprising, and I think it’s important for EAs to be clear-eyed about how they’re different from other people.
I don’t think that means EAs could never be a dominant force in philanthropy or whatever; most people throughout history didn’t care about anti-racism or democracy, but those are popular now; caring about what your ancestors thought has declined a lot; things can change. I just don’t think it’s inevitable or foregone (or couldn’t reverse).
If someone wrote an article about a minority group and described them with a few nasty racist stereotypes, there would be massive protests, retractions, apologies and a real effort to ensure that people were well informed about the reality.
People would do this for some kinds of minorities (racial or sex/gender minorities), and for racist stereotypes. I don’t think they would for people with unusual hobbies or lifestyle choices or belief sets, with stereotypes related to those things. “Not being racist”, or not discriminating against certain kinds of minorities, is a sacred value for much of liberal elite society, but many kinds of minorities aren’t covered by it.
Crappy stereotypes are always bad, but I don’t think being a minority means you should be exempt from serious criticism (though unfortunately, this particular criticism isn’t intellectually serious).
Sounds like the reversal test
I don’t think I saw the 80k thing in particular at the time
I agree with some of the thrust of this question, but want to flag that I think these sources and this post kind of conflate FTX being extravagant with SBF personally being so. E.g. if you click through, the restaurant tabs were about DoorDash orders for FTX, not SBF personally. I think it’s totally consistent to believe it’s worth spending a lot on employee food (especially given they were trying to retain top talent in a difficult location in a high-paying field) while being personally more abstemious.
As an EA at the time (let’s say mid-2022), I knew there were aspects of the FTX situation that were very plush. I still believed it was part of SBF’s efforts to make as much money as possible for good causes, and had heard SBF say things communicating that he thought it was worth spending a lot in the course of optimizing intensely for the best shot of making a ton of money in the long run, and that he was generally skeptical of the impact of aiming at frugality. My impression at the time was indeed that the Corolla was a bit of a gimmick (and that the beanbag was about working longer, not saving money), but that SBF was genuinely very altruistic and giving his wealth away extremely quickly by the standards of new billionaires.
Yeah re the export controls, I was trying to say “I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so” (though I used the word “ambiguous” because my impression was that some relevant people saw a pro of that work that it also mostly didn’t directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is some of those people were “trying to establish overwhelming dominance over China” but not by “investing heavily in AI”.
I largely agree with this post, and think this is a big problem in general. There’s also a lot of adverse selection that can’t be called out because it’s too petty and/or would require revealing private information. In a reasonable fraction of cases where I know the details, the loudest critic of a person or project is someone with a pretty substantial negative COI that isn’t being disclosed, like the project having fired or defunded them, or the person having dated them and broken up with them. As with positive COIs, there’s a problem where being closely involved with something both gives you more information that others might miss, which you could use to form a valid criticism (or make a good hire or grant), and is correlated with factors that could bias your judgment.
But with hiring and grantmaking there are generally internal processes for flagging these, whereas when people are making random public criticisms, there generally isn’t such a process
This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC (and EAs were involved in this because they thought it was true and important, and because they thought the prevailing false fears in the greater natsec community were enhancing arms-race risks) (and this was when Jason was leading CSET, and OP supported its founding). Some of the same people were also supportive of export controls, which are more ambiguous in sign here.
yeah, on second thought I think you’re right that at least the arg “For a fixed valuation, potential is inversely correlated with probability of success” probably got a lot less attention than it should have, at least in the relevant conversations I remember
I’m a bit confused about how the first part of this post connects to the final major section… I recall people saying many of the things you say you wish you had said… do you think people were unaware FTX, a recent startup in a tumultuous new industry, might fail? Or weren’t thinking about it enough?
I agree strongly with your last paragraph, but I think most people I know who bounced from EA were probably just more gold-digging, fad-following, or sensitive to public opinion, and less willing to do what’s hard when circumstances become less comfortable (though of course they won’t come out and say it, and plausibly don’t admit it to themselves). Of the rest, it seems like they were bothered by some combination of the fraud and how EAs responded to the collapse, and updated towards the dangers of more utilitarian-style reasoning and the people it attracts.
Another meta thing about the visuals is that I don’t like the +[number] feature, which makes it so you can’t tell, at a glance, that the voting is becoming very tilted towards the right side.
I was also convinced by this and other things to write a letter, and am commenting now to make the idea stay salient to people on the Forum.
The scientific proposition is “are there racial genetic differences related to intelligence”, right? Not “is racism [morally] right”?
I find it odd how much these things seem to be conflated; if I learned that Jews have an IQ an average of 5 points lower than non-Jews, I would… still think the Holocaust and violence towards and harassment of Jews was abhorrent and horrible? I don’t think I’d update much, or at all, towards thinking it was less horrible. Or if you could visually identify people whose mothers had drunk alcohol during pregnancy, and they were statistically a bit less intelligent (as I understand them to be), enslaving them, genociding them, or subjecting them to Jim Crow-style laws would seem approximately as bad as doing so to some group that’s slightly more intelligent on average.
I agree with
if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
and
if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies because correct strategies for improving the world tend to require an accurate world model including being accurate about things that are controversial.
and
many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
A crux seems to be that I think AI alignment research is a fairly narrow domain, more akin to bacteriology than e.g. “finding EA cause X” or “thinking about whether newly invented systems of government will work well”. This seems more true if I imagine as my AI alignment researcher someone trying to run experiments on sparse autoencoders, and less true if I imagine someone trying to have an end-to-end game plan for how to make transformative AI as good as possible for the lightcone, which is obviously a more interdisciplinary topic more likely to require correct contrarianism in a variety of domains. But I think most AI alignment researchers are more in the former category, and will be increasingly so.
Two points:
(1) I don’t think “we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits” or “all drugs should be legal and ideally available for free from the state” are the most popular political positions in the US, nor close to them, even for D-voters.
(2) your original question was about supporting things (e.g. Lysenkoism), and publicly associating with things, not about what they “genuinely believe”
But yes, per my earlier point, if you told me, for example, “there are three new researchers with PhDs from the same prestigious university in [a field unrelated to any of the above positions, let’s say virology]; the only difference I will let you know about them is that one (A) holds all of the above beliefs, one (B) holds some of the above beliefs, and one (C) holds none of the above beliefs; predict which one will improve the odds of their lab making a bacteriology-related breakthrough the most,” I would say the difference between them is small, i.e. these differences are only weakly correlated with the odds of their lab making a breakthrough and don’t have much explanatory power. And, assuming you meant “support” rather than “genuinely believe”, and cutting the two bullets I claim aren’t even majority positions among, for example, D-voters: B>A>C, but barely.
I want to separate what people have a legally protected right to do and what I feel like they have a commonsense right to do. I have approximately no information other than what’s here about what actually went down between Riley, CEA, and Fran. But there are versions of “speculating about a coworker’s mental health” that I think are commonsense-reasonable, and to which I’m morally sympathetic, even if they might be imprudent and legally forbidden.
For example, I think that noticing that a co-worker is underperforming, and speculating that they might be struggling and need extra support because you heard their dog died recently and they seemed upset about it, can be the result of normal human sympathy and a desire to raise relevant information. It’s the kind of thing that I think a normal and well-intentioned person might raise if they noticed a friend of a friend or a cousin struggling.
I’m not saying that you should treat co-workers exactly the same, or that there are no risks to doing so. Obviously there are. And I’m not saying anything about Riley’s intentions in particular here. I just think that having this kind of condemnatory attitude towards the entire class of behavior that involves ever raising information about potential causes of a colleague’s struggles in the workplace reflects a really harsh and cold view of workplace relationships, and I don’t like it.