from private convos I am pretty sure that the tweet about Mike Vassar is in reference to this https://forum.effectivealtruism.org/posts/7b9ZDTAYQY9k6FZHS/abuse-in-lesswrong-and-rationalist-communities-in-bloomberg?commentId=FCcEMhiwtkmr7wS84 (which is about Mike Vassar, not Jacy)
there may or may not be other things informing it, but it’s not about Jacy.
Agrippa
“It doesn’t exist” is too strong for sure. I consider GiveWell central to the randomista part and it was my entry point into EA at large. Founder’s Pledge was also pretty randomista back when I was applying for a job there in college. I don’t know anything about HLI.
There may be a thriving community around GiveWell etc. that I am ignorant of. Or maybe if I tried to filter out non-randomista stuff from my mind then I would naturally focus more on randomista stuff when engaging with EA feeds.
The reality is that I find stuff like “people just doing AI capabilities work and calling themselves EA” quite emotionally triggering, and when I’m exposed to it, that’s where my attention goes (if I’m not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.
All of that said, I do think that community building in EA looks completely different from how it would look if it were the GiveWell movement.
[Question] Is this quote from SBF aligned with EA?
17. I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I’ve had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn’t seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good idea. In all of these cases the person asking me to moderate did not leave any comment of their own trying to argue for their position, before asking me to censor the content. I find this pretty stressful, and also like, most of the relevant ideas feel like stuff that people would have just felt comfortable discussing openly on LW 7 years ago or so (not like, everyone, but there wouldn’t have been so much of a chilling effect so that nobody brings up these topics).
First of all, yikes.
Second of all, I think I could always sense that things were like this (broadly speaking), but simultaneously worried I was just paranoid and deranged. I think that this dynamic has been quite bad for my mental health.
I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very “randomista” flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I feel like I joined EA for this “randomista” flavored version of the movement. I don’t really feel like the version of EA I thought I was joining exists even though, as you describe here, it gets a lot of lip service (because it’s uncontroversially good and inspiring!!!!). I found it validating for you to point this out.
If it does exist, it hasn’t recruited me despite my pretty concentrated efforts over several years. And I’m not sure why it wouldn’t.
I don’t have a problem with longtermist principles. As far as I’m concerned, maybe the best way to promote long-term good really is to take huge risks at the expense of community health / downside risks / integrity, à la SBF (among others). But I don’t want to spend my life participating in some scheme to ruthlessly attain power and convert it into good, and I sure as hell don’t want to spend my life participating in that as a pawn. I liked the randomista + earn-to-give version of the movement because I could just do things that were definitely good to do in the company of others doing the same. I feel like that movement has been starved out by this other thing wearing it as a mask.
My critique seems resilient to this consideration. The fact that managers do not publicly criticize employees is not evidence of discomfort or awkwardness. Under the very obvious model of “how would a manager get what they want re: an employee”, public criticism is not a sensible lever to reach for.
There would still be zero benefit to public criticism in the case you are describing.
Relatedly, there’s far more public criticism from Google employees about their management than there is from their management about their employees. This plays out on a lot of levels.
The nature of A having power over B is that A doesn’t need to coordinate with others in order to get what A wants with respect to B. It would be really bizarre for management to publicly criticize employees whom they can just fire. There is simply no benefit. This explains much more of the variance than anything to do with awkwardness or “punching down”.
Nice try—I like your on-the-nose username
As somebody in the industry I have to say Alameda/FTX pushing MAPS was surreal and cannot be explained as good faith investing by a competent team.
As far as I can tell there is no reason to condemn the fraud, but not the stuff SBF openly endorsed, except that the fraud actually happened and hit the “bad” outcome.
From https://conversationswithtyler.com/episodes/sam-bankman-fried/
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
One of my friends literally withdrew everything from FTX after seeing this originally, haha. Pretty sure the EV on whatever scheme occurred was higher than 51/49, so it follows...
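To spell out the arithmetic, here is a quick sketch of my own (not part of the original exchange) of the repeated 51/49 double-or-nothing game: the expected value compounds upward every round, but actually keeping anything requires winning every single flip.

```python
# Repeated 51/49 double-or-nothing, per the Cowen/SBF exchange above.
# Each round multiplies EV by 2 * 0.51 = 1.02, but survival requires
# having won every round so far.
p_win = 0.51

for rounds in (1, 10, 50, 100):
    survival = p_win ** rounds            # P(we haven't hit the 49% wipeout yet)
    ev_multiple = (2 * p_win) ** rounds   # expected multiple of the starting stake
    print(f"{rounds:3d} rounds: P(anything left) = {survival:.2e}, EV = {ev_multiple:.2f}x")
```

After 100 rounds the EV is about 7x the starting stake, but the probability that anything is left is on the order of 10^-29.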
I have to say I didn’t expect “all remaining assets across the FTX empire ‘hacked’ and apps updated to have malware” as an outcome.
(As an aside, it also seems quite unusual to apply this impartiality to the finances of EAs. If EAs were going to be financially impartial, it seems like we would not really encourage trying to earn money in competitive, financially zero-sum ways such as a quant finance career or crypto trading.)
Seriously, imagine dedicating your life to EA and then finding out you lost your life savings because one group of EAs defrauded you and the other top EAs decided you shouldn’t be alerted about it for as long as possible, specifically because it might lead to you reaching safety. Of course none of the in-the-know people decided to put up their own money to defend against the bank run; they just decided it would be best if you kept yours in.
In that situation I have to say I would just go and never look back.
Aspiring to be impartially altruistic doesn’t mean we should shank each other. The so-impartial-we-will-harvest-your-organs-and-steal-your-money version of EA has no future as a grassroots movement, or even room to grow, as far as I can tell.
This community-norm strategy works if you determine that retaining socioeconomically normal people doesn’t actually matter and you just want to incubate billionaires, but I guess we have to hope the next billionaire is not so (allegedly) impartial towards their users’ welfare.
I would like to be involved in the version of EA where we look after each other’s basic wellness, even if that’s bad for FTX or for other FTX depositors. I think people will find this version of EA more emotionally safe and inspiring.
To me there is just no normative difference between trying to suppress information and actively telling people they should go deposit on FTX when the distress occurred (without communicating any risks involved), knowing that there was a good chance they’d get totally boned if they did so. Under your model this would be no net detriment, but it would also just be sociopathic.
Yes, the version of EA where people suppress this information, rather than actively promoting deposits, is safer. But both are quite cruel and not something I could earnestly suggest to a friend that they devote their lives to.
What I think: I think that FTX was insolvent such that even if the FTT price had held steady, user funds were not fully backed. That is, they literally bet the money on speculative investments and lost it, and this caused a multibillion-dollar financial hole. It is also possible that some or all of the gap between assets and liabilities was caused by a hack that happened months ago and that they did not report.
As far as I can tell, you don’t think this. Well, if you really don’t think that, and it turns out you were wrong, then I’d like you to update. I think probabilities are a good way to enforce that; that is my actual good-faith belief. Of course I’m also always looking for profitable trades.
Is there any bet you’d take, that doesn’t rely on a legal system (which I agree adds a lot of confounders, not to mention delay), on the above claim? Could we bet on “By April 2023, evidence arises that FTX user funds were not even 95% backed before Binance’s FTT selloff?” Or maybe we could bet on Nuno’s belief on the backing?
BTW your chart is USDD not USDC. Idk what USDD is.
Also I’ve now spent like wayyy too much time chatting about this on here. Making a bet would involve further chatting. So FYI the most likely outcome is that I wake up tomorrow and pretend it was all just a dream. Sorry to disappoint and thanks for indulging me a bit in the end.
You’re Agrippa! The guy with very short timelines, who is Berkeley-adjacent and knows that cool DxE person.
No, I do care about you! I respect you quite a bit. I was wrong and I retract what I said before in at least a few comments, and I apologize for my behavior. Also, I’ll be happy to take any negative repercussions.
😳 That’s nice of you, thanks.
I’m actually not a guy though I don’t take any offense to the assumption, given my username.
Maybe Nuno would escrow for us.
I’m probably down for $500; I would need to talk to my partner about going much higher anyway. If you are in the US we might not need escrow, since suing each other is an option; if we went >$5k that would be worth it.
Re SBF vs FTX/Alameda paying: Yeah I meant SBF personally. I agree it’s a big difference. Jan 1st is the date but I also don’t know how fast this stuff ever goes and researching it sounds annoying.
Given that you think it’s likely FTX “gambled” user funds I am really not sure we disagree on anything interesting to begin with :-[
Maybe you think it’s only 70% likely and I think it’s a lot more than that?
Also, thanks for taking a position on both. We are on the same side of 50/50 for the “gambled deposits” question, though. I wish we could come up with something we disagree on that might also resolve sooner; I’ll think on it…
Maybe we disagree on just how big FTX’s financial hole is? Could we bet on “as of today, FTX liabilities minus FTX assets >= $4bn”? I’d go positive on that one.
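Tangentially, the reason a bet should be findable whenever we genuinely disagree is just the standard arithmetic below. This is my own toy sketch with hypothetical numbers: the 70% is from my question above, my ~95% is a stand-in for “a lot more than that”, and the stake is the $500 I floated.

```python
# Toy illustration (hypothetical numbers): if you put 70% on "FTX gambled
# user deposits" and I put 95% on it, any betting odds strictly between
# those two probabilities give both of us positive EV by our own estimates.
p_me, p_you = 0.95, 0.70
stake = 500.0  # the $500 figure floated above

price = (p_me + p_you) / 2  # settle at the midpoint, 0.825

# If the claim resolves true, you pay me stake * (1 - price);
# if it resolves false, I pay you stake * price.
my_ev = p_me * stake * (1 - price) - (1 - p_me) * stake * price
your_ev = (1 - p_you) * stake * price - p_you * stake * (1 - price)
print(f"my EV: ${my_ev:+.2f}, your EV: ${your_ev:+.2f}")  # both come out positive
```

At those numbers each side expects +$62.50; the wider the gap between our probabilities, the wider the range of mutually agreeable odds.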
Dunno… Really can’t tell what you believe. You commented that folks are being too negative yet seem to also think that FTX “gambled” user deposits, which sounds pretty negative to me (though we can disagree about whether it was good to have done this). Oh wellz.
Yeah. (as a note I am also a fan of the animal welfare stuff).
This is a good suggestion.
I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that is engaging yet systematically directs my attention more often to things I care about. This happens naturally if I am around people who are interesting and fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums.
Another reason to focus on community interaction is that it is both much more fun and much more useful to help with creative problem-solving. Forum posts tend to report the results of problem-solving, or report news; I would rather be engaging with people before that step, but I don’t know of a place where one could go to participate in that, aside from employment. In contrast, I do have a sense of where one could go to participate in this kind of group or community re: AI safety.