Given your position, I am concerned about the arms-race accelerationism messaging in this post. Substantively, the major claims of this post are “Chinese AI progress poses a serious threat that we must overcome via our own AI progress (that is, we are in an arms race)” and “society may regulate AI such that projects that don’t meet a very high standard of safety will not be deployable”. The argument is that pursuing safety follows from these premises, mostly the latter.
This can be interpreted in a number of ways, charitably or uncharitably. Independent of that, I do not think it is really a good idea to talk this way about AI re: geopolitics. This framing has a very bad track record with other technologies such as nukes, and I’m not sure who the intended audience is (are capabilities CEOs China hawks who can only be convinced to slow down if it’s framed in terms of beating China? big if true)
Hmm. I think if I had been in an abusive situation such as the ones OP describes, and I (privately) went to the Community Health team about it, and the only outcomes were what you just listed, I would have considered it a waste of my time and emotional energy.
Edit: waste of my time relative to “going public”, that is.
We were familiar with many (but not all) of the concerns raised in Ben’s post based on our own investigation.
What happened as a result of this, before Ben posted?
Thanks for writing, I hope things change.
PS: I think the name “Ratrick Bayesman” will live in my head for at least 5 years
Yeah. (as a note I am also a fan of the animal welfare stuff).
This is a good suggestion.
I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums.
Another reason to focus on community interaction is that it is both much more fun and much more useful to help with creative problem solving. But forum posts tend to report the results of problem solving / report news. I would rather be engaging with people before that step, but I don’t know of a place where one could go to participate in that aside from employment. In contrast, I do have a sense of where one could go to participate in this kind of group or community re: AI safety.
From private convos I am pretty sure that the tweet about Mike Vassar is in reference to this: https://forum.effectivealtruism.org/posts/7b9ZDTAYQY9k6FZHS/abuse-in-lesswrong-and-rationalist-communities-in-bloomberg?commentId=FCcEMhiwtkmr7wS84 (which is about Mike Vassar, not Jacy).
There may or may not be other things informing it, but it’s not about Jacy.
“It doesn’t exist” is too strong for sure. I consider GiveWell central to the randomista part, and it was my entry point into EA at large. Founder’s Pledge was also pretty randomista back when I was applying for a job there in college. I don’t know anything about HLI.
There may be a thriving community around GiveWell etc. that I am ignorant of. Or maybe if I tried to filter out non-randomista stuff from my mind, then I would naturally focus more on randomista stuff when engaging with EA feeds.
The reality is that I find stuff like “people just doing AI capabilities work and calling themselves EA” to be quite emotionally triggering, and when I’m exposed to it that’s what my attention goes to (if I’m not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.
All of that said, I do think that community building in EA looks completely different from how it would look if it were the GiveWell movement.
[Question] Is this quote from SBF aligned with EA?
17. I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I’ve had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn’t seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good idea. In all of these cases the person asking me to moderate did not leave any comment of their own trying to argue for their position, before asking me to censor the content. I find this pretty stressful, and also like, most of the relevant ideas feel like stuff that people would have just felt comfortable discussing openly on LW 7 years ago or so (not like, everyone, but there wouldn’t have been so much of a chilling effect so that nobody brings up these topics).
First of all, yikes.
Second of all, I think I could always sense that things were like this (broadly speaking), but simultaneously worried I was just paranoid and deranged. I think that this dynamic has been quite bad for my mental health.
I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very “randomista” flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I feel like I joined EA for this “randomista” flavored version of the movement. I don’t really feel like the version of EA I thought I was joining exists even though, as you describe here, it gets a lot of lip service (because it’s uncontroversially good and inspiring!!!!). I found it validating for you to point this out.
If it does exist, it hasn’t recruited me despite my pretty concentrated efforts over several years. And I’m not sure why it wouldn’t.
I don’t have a problem with longtermist principles. As far as I’m concerned maybe the best way to promote longterm good really is to take huge risks at the expense of community health / downside risks / integreity ala SBF (among others). But I don’t want to spend my life participating in some scheme to ruthlessly attain power and convert it into good, and I sure as hell don’t want to spend my life participating in that as a pawn. I liked the randomista + earn to give version of the movement because I could just do things that were definitely good to do in the company of others doing the same. I feel like that movement has been starved out by this other thing wearing it as a mask.
My critique seems resilient to this consideration. The fact that managers do not publicly criticize employees is not evidence of discomfort or awkwardness. Under the very obvious model of “how would a manager get what they want re: an employee”, public criticism is not a sensible lever to want to use.
There would still be zero benefit to publicly criticizing in the case you are describing.
Relatedly, there’s far more public criticism from Google employees about their management than there is from their management about their employees. This plays out on a lot of levels.
The nature of A having power over B is that A doesn’t need to coordinate with others in order to get what A wants with respect to B. It would be really bizarre for management to publicly criticize employees whom they can just fire. There is simply no benefit. This explains much more of the variance than anything to do with awkwardness or “punching down”.
Nice try—I like your on-the-nose username
As somebody in the industry, I have to say Alameda/FTX pushing MAPS was surreal and cannot be explained as good-faith investing by a competent team.
As far as I can tell there is no reason to condemn the fraud, but not the gambles SBF openly endorsed, except that the fraud actually happened and hit the “bad” outcome.
From https://conversationswithtyler.com/episodes/sam-bankman-fried/
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
One of my friends literally withdrew everything from FTX after seeing this originally, haha. Pretty sure the EV on whatever scheme occurred was higher than 51/49, so it follows...
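To spell out the arithmetic behind Cowen’s question (a quick sketch, assuming independent double-or-nothing rounds, taking the 51/49 odds at face value, and normalizing the current world’s value to 1):

$$P(\text{anything survives after } n \text{ rounds}) = 0.51^n \xrightarrow{\,n \to \infty\,} 0$$

$$\mathbb{E}[\text{value after } n \text{ rounds}] = (0.51 \cdot 2)^n = 1.02^n \xrightarrow{\,n \to \infty\,} \infty$$

Each round is positive-EV, so a naive expected-value maximizer keeps playing, even though the probability that anything is left goes to zero. That is the St. Petersburg tension Cowen is pointing at.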
I have to say I didn’t expect “all remaining assets across ftx empire ‘hacked’ and apps updated to have malware” as an outcome.
(as an aside it also seems quite unusual to apply this impartiality to the finances of EAs. If EAs were going to be financially impartial, it seems like we would not really encourage trying to earn money in competitive, financially zero-sum ways such as a quant finance career or crypto trading)
Seriously, imagine dedicating your life to EA and then finding out you lost your life savings because one group of EAs defrauded you and the other top EAs decided you shouldn’t be alerted about it for as long as possible, specifically because it might lead to you reaching safety. Of course none of the in-the-know people decided to put up their own money to defend against a bank run; they just decided it would be best if you kept doing so.
In that situation I have to say I would just go and never look back.
Minor note: “based” is a part of current Gen Z parlance and “fag” is a part of current queer Gen Z parlance.