How to Reform Effective Altruism after SBF: Vox interview with Holden Karnofsky, 1/23/2023

Link post

Interview with Holden Karnofsky on SBF and EA. If you prefer listening, a 2x-speed version is available here. Nothing really new, but I really resonate with the last five minutes and the idea of being ambitious about helping people, tempered by moderation and pluralism in perspectives.

Parts not in the transcript that I liked:

“You don’t have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs… you can do a huge amount of good.”

“‘Do the most good possible’ I think is a good idea in moderation. But it’s similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time. I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too.”

Parts from the transcript I liked:

“...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. … I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.”

“Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change. One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.

“So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.

“But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.”