This has included Will MacAskill and other thought leaders, for the grave sin of not magically predicting that someone whose every external action suggested he wanted to work with us to make the world a better place would YOLO it and go Bernie Madoff.
A contingent of EAs (e.g., Oliver Habryka and the early Alameda exodus) seems to have had strongly negative views of SBF well in advance of the FTX fraud coming to light. So I think it’s worthwhile for some EAs to do a postmortem on why some people were super worried and others were (apparently) not worried at all.
Otherwise, I agree with you that folks have seemed to overreact more than underreact, and that there have been a lot of rushed overconfident claims, and a lot of hindsight-bias-y claims.
The way to avoid hindsight bias here, I think, is to ask which other similar contingents hold similarly negative views, and about whom.
Will we be flooded with 100 targets that people have negative views about? If so, maybe this kind of signal is usually a false positive.
Will there only be 2 others? If so, maybe we should deal with those 2.
I have maybe 1-2 other people/organizations that I feel as doomy about as I did for SBF.
That said, there are definitely lots of other parts of EA that I feel unhappy about, and I do think there are some deeper things going wrong (like generally too much private scheming and empire building, and not enough blurting out of opinions and open curiosity and deep commitment to honesty).
Your contention: Too much planning and building, not enough blurting out “deeply honest” (and very negative!) opinions.
My contention: Stop burning each other online, EA is not Mean Girls. Shut up, play nice, go into the real world, build, and get shit done.
This actually matters!
Yep, definitely matters! I also think the current situation is some more evidence of my contention over your contention, though not like hugely so (at least without digging into the details).
At least in this case though we can agree that something terrible happened, so let’s start with an analysis of what things could have maybe prevented that. And maybe we will learn that it was really hard to prevent, or the prevention would have been too costly.
And I do agree and want to apologize at least a bit for pushing towards my contention in a way that I think was a bit too much driven by a specific inside view in which this did seem quite straightforwardly caused by a bunch of stuff that I have been concerned about for a while, but I don’t think I’ve explicated those models in sufficient detail to be compelling to many people.
Ollie has a very high false positive rate. I can’t imagine a person or project he wouldn’t get off on kicking the legs out from under (https://getyarn.io/yarn-clip/1767181b-4b9d-4f7f-95fa-be4cae266511). If I join in on this “re-drawing lines around the community” exercise, he can stand with Kerry Vaughan, Emile Torres, and Robin Hanson on the other side of my big chalk circle.
No one in the Alameda exodus expected fraud. They just thought Sam was terrible on other dimensions, they’ve said as much. Heck, Lantern lost money they were holding on FTX because they trusted it there.
This feels kind of strawmanny.
I have said many times that neither I nor others predicted the scale of the fraud and explosion at FTX.
I do think it was clear that Lantern was disassociating itself from Sam and had stopped giving him resources, and that is the primary thing I think we should have done too, based on roughly the same info that Lantern and other past Alameda employees had.
About false positives: I agree that false positives are a key thing to pay attention to. I do think I have concerns about a bunch of things in EA here, though I think it’s really far from everyone, and also, I don’t think I have a historically bad track record (like, I did say that CEA was really corrupt and broken during 2015-2017, and I think that is accurate, and I think current CEA would agree with that. I also think my concerns about Leverage are well-warranted. I was also one of the people most involved with kicking Diego out of the community, and in the Bay Area I warned a lot of people about Brent earlier than others. I also think I was too pessimistic about CEA after 2018, and I was wrong about the future of EA Funds when I was frustrated with it at various points.)
We can make a concrete list of organizations and public intellectuals if you want, and then people can judge on their own if I have a huge false-positive rate.
(For some random examples: I think the 80k podcast is great. I think SSC/ACX is great. I think Open Phil’s research team does a lot of good work despite me deeply disagreeing with them a lot. I think MIRI has done good work but also produced a depression machine that made everyone there depressed. I think FHI was really great before it scaled a lot; now it is a sad husk of its former self, as the university has been smothering it. I think Nicole Ross’s team is great and they do valuable work, though I think they lack ambition. I think Julia Wise’s stuff is high-variance and sometimes has bad consequences, but I’ve come around over the years to thinking that it’s probably good for the world, though I am still hesitant. I think Redwood Research is genuinely well-intentioned and doing good work, and I trust Buck a lot, though almost all the work they do doesn’t seem to be on the critical path towards safe AI (but it’s still my favorite prosaic alignment place to send people to). I think Paul fucked up hard by endorsing OpenAI as much as he did, and I think something kind of bad is happening with his writing style, but he is a genuinely great thinker for AI Alignment and I’ve learned a lot from him about how to think about AI Alignment, and I want him to have the resources to pursue his research even if it seems doomed to me. I think CFAR runs some great workshops, though it also had some pretty fucked-up dynamics, and Anna continues to have a not-great track record at identifying who will end up kind of crazy and causing a bunch of harm. I think Robin Hanson’s research and writing is great, though I heard he has some more dubious in-person behavior. I think the Atlas Fellowship seems pretty cool and I support it, though I think the presentation is too… I don’t know, fake-ambitious/EAish. In general, the Stanford EAs seem like they are doing cool stuff with a bunch of the events and workshops they are running.)
I’d be interested to hear what you think is going wrong with Paul’s writing style, if you want to share.
Guys, please have a go at people on another person’s post. God knows there are enough of them… This is exactly what I’m talking about and I will literally have a coronary. Lol.
Appreciate you ❤️
Oof. What did Gideon Futerman do? Break the wagon circle to air his loss of confidence in EA leadership? I think it’s pretty reasonable to have said confidence shaken, especially when no one is talking (seemingly putting their own reputations, and PR, ahead of what’s best for the future of the EA community).
Honestly not sure why I seem to have become NeoMohist’s enemy! Like, I just posted some questions; sure, they were questioning and not the most sympathetic to the leadership, but it’s hardly enough, I think, to warrant this. On the other hand, I am sure NeoMohist is going through a difficult time like many of us, so I sort of get jumping to attack me (I’m sure I have been similarly unreasonable at times in the past).
In case some readers need more context: the “Some EAs thought Sam was bad” comments seem to originate with Tara Mac Aulay and some of her colleagues in 2018. Some of this group then started Lantern.
What are the false positives? Some of the big things I recall Ollie critiquing in the past are Leverage Research and “mainstream EA playing fast and loose with honesty”; those critiques seem to have aged well.
The claim isn’t “lots of people predicted fraud, therefore we should do a postmortem on why EA’s FTX boosters didn’t predict it”. Rather, the claim is that SBF had a bunch of red flags that plausibly would have sufficed for at least engaging with SBF with a lot more caution, as opposed to whole-heartedly embracing him in the way that a lot of EA did.
It might also have increased the probability of more of FTX’s shady and shoddy business practices coming to light, but I agree that this is more uncertain. The main question is just whether there were process or norm failures in terms of how we reacted to lesser warning signs. Lesser warning signs won’t always let you catch disasters in advance, but it does matter how we react to those signs—in expectation, whether or not it would have helped in this case.
As a moderator, I think a previous version of this comment was rude and clearly violated many Forum norms. Another comment also violates Forum norms.
While I appreciate the edit, this is a warning: if you leave another comment like these, you will probably receive a ban.
Habryka’s comments have been pretty strongly upvoted on the forum these days, indicating that folks have found them helpful. It might be useful for you to provide receipts or anything else that supports claims like “Ollie has the highest false positive rate” and “Ollie is terrible for EA”; otherwise this is just a fairly baseless accusation.
See the heart-related moderation request above. Berglund (a.k.a. my new fav) has an excellent and, I am told, extra spicy new short form for this.
Seriously mate, I get you’re upset, but maybe take a break from the forum for a while? Like, this isn’t healthy. I get that my critiques of the leadership have upset you, and maybe they are incorrect, but I don’t quite understand why you want me out of EA. Also, if you do want to find out who I am before wanting to remove me from your community, please check out my profile so you can know who I am 🙂