I have said many times that neither I nor others predicted the scale of the fraud and the explosion at FTX.
I do think it was clear that Lantern was dissociating itself from Sam and had stopped giving him resources, and that is the primary thing I think we should have done too, based on roughly the same information that Lantern and other past Alameda employees had.
About false positives: I agree that false positives are a key thing to pay attention to. I do have concerns about a bunch of things in EA here, though really far from everyone, and I don’t think I have a historically bad track record. (For example: I did say that CEA was really corrupt and broken during 2015–2017, which I think is accurate, and I think current CEA would agree with that. I also think my concerns about Leverage were well-warranted. I was one of the people most involved in kicking Diego out of the community, and in the Bay Area I warned a lot of people about Brent earlier than others did. On the other hand, I was too pessimistic about CEA after 2018, and I was wrong about the future of EA Funds when I was frustrated with it at various points.)
We can make a concrete list of organizations and public intellectuals if you want, and then people can judge on their own if I have a huge false-positive rate.
(For some random examples: I think the 80k podcast is great. I think SSC/ACX is great. I think Open Phil’s research team does a lot of good work, despite my deeply disagreeing with them on a lot. I think MIRI has done good work, but it also produced a depression machine that made everyone there depressed. I think FHI was really great before it scaled a lot; now it is a sad husk of its former self, as the university has been smothering it. I think Nicole Ross’s team is great and does valuable work, though I think they lack ambition. I think Julia Wise’s stuff is high-variance and sometimes has bad consequences, but I’ve come around over the years to thinking that it’s probably good for the world, though I am still hesitant. I think Redwood Research is genuinely well-intentioned and doing good work, and I trust Buck a lot, though almost all the work they do doesn’t seem to be on the critical path toward safe AI (but it’s still my favorite prosaic alignment place to send people to). I think Paul fucked up hard by endorsing OpenAI as much as he did, and I think something kind of bad is happening with his writing style, but he is a genuinely great thinker on AI Alignment; I’ve learned a lot from him about how to think about it, and I want him to have the resources to pursue his research even if it seems doomed to me. I think CFAR runs some great workshops, though it also had some pretty fucked-up dynamics, and Anna continues to have a not-great track record at identifying who will end up kind of crazy and cause a bunch of harm. I think Robin Hanson’s research and writing are great, though I’ve heard he has some more dubious in-person behavior. I think the Atlas Fellowship seems pretty cool and I support it, though I think the presentation is too… I don’t know, fake-ambitious/EAish. In general, the Stanford EAs seem to be doing cool stuff with the events and workshops they are running.)
Guys, please have a go at people on another person’s post. God knows there are enough of them… This is exactly what I’m talking about and I will literally have a coronary. Lol.
This feels kind of strawmanny.
I’d be interested to hear what you think is going wrong with Paul’s writing style, if you want to share.
Appreciate you ❤️