I’m an artist, writer, and human being.
To be a little more precise: I make video games, edit Wikipedia, and write here and on LessWrong!
oof
This was a really fun read; thanks for helping put it together!!
+1 from me.
I was talking about the whole situation with my parents, and they mentioned that their local synagogue experienced a very similar catastrophe: the community’s largest funder turned out to be a con man. Everybody impacted had a lot of soul-searching to do, but in retrospect there was really nothing they could or should have done differently. It was a black-swan event that hasn’t repeated in the quarter-century or so since, and there were no obvious red flags until it was too late. Yes, we can always find details to agonize over, but I doubt it will be very productive to overhaul our whole modus operandi to prevent this particular black swan from repeating (with a few notable exceptions).
Same here; this is really helping me understand the (at least perceived) narrative flow of events.
Thank you for sharing; I can understand why you might be feeling burnt out!! I’ve been in a workplace environment that reminds me of this, and especially if you care about the people and projects there...it’s painful.
Strongly agree that moving forward we should steer away from such organizational structures; it’s much better for something bad to be aired publicly before it has a chance to become malignant.
This seems probable to me; thanks for sharing a good-faith explanation.
+1 to this; I can attest I’ve done the same and immediately regretted it lol
Curious what you think about screenshots like this one, which I’ve now seen in a few different places.
This is a fair critique imo; I’m updating against SBF having used EA for sociopathic reasons. That being said, I’m only slightly updating towards him having used EA ideology as his main motivation to commit fraud, as that may still very well not be the case.
I’ll be honest: I’ve been putting judgement based on his (apparent) lifestyle on hold, as I’ve seen some anecdotes/memes floating around Twitter suggesting that he may not have been honest about his veganism and other lifestyle choices. I don’t know enough about the situation to discern the actual truth of the matter, so it’s possible I’ve been subject to misinformation there (also, I scrolled past it quickly on Twitter, and it’s possible it was some meta-ironic meme or something). If there is legitimate evidence he was actually faking it, that would of course make me update strongly in the other direction.
I once got a free textbook (one that cost some insane amount of money on Amazon) from a professor when I asked if he could share a copy of his work for reference in a video game I was making at the time. I don’t know if that counts, but it seems worth mentioning.
It certainly isn’t a good outcome for EA either way, and I don’t want us prematurely absolving ourselves of any responsibility we may end up holding. I just want to be as clear-thinking about this as possible, so we can best mend ourselves moving forward.
Thanks for this; it’s a nicely compact summary of a really messy situation that I can quickly share if necessary.
I have a sticker by my bed reading ‘What Would SBF Do?’ (from EAG SF 2022). I should probably remove that.
Maybe don’t remove it—this seems emblematic of a category of mistakes worth remembering, if only so we don’t repeat them.
+1 on this. It is painfully clear that we need to radically improve our due-diligence practices moving forward.
This is really interesting—thanks for sharing!
This has created a potentially dangerous mismatch in public perception between what the more serious AI safety researchers think they’re doing (e.g. reducing X risk from AGI), and what the public thinks AI safety is doing (e.g. developing methods to automate partisan censorship, to embed woke values into AI systems, and to create new methods for mass-customized propaganda).
This is the crux of the problem, yes. I don’t think this stems from a “conservative vs. liberal” political rift, though; the left is just as frustrated by, say, censorship of sex education or queer topics as the right is by censorship of “non-woke” discussion. What matters is that people’s particular triggers for what is and isn’t appropriate to censor vary enormously, both across populations and across time. I don’t think it’s necessary to bring politics into this as an explanatory factor (though it may of course exacerbate existing tension).
That’s a fair point; I’m reconsidering my original take.