Will MacAskill: So take great power war or something. I'm like, great power wars are really important, we should be concerned about them. People normally say, "Oh, but what would we do?" And I'm kinda like, I don't know. I mean, policy around hypersonic missiles is one thing, but really I don't know. We should be looking into it. And then people are like, "Well, I just don't really know," and so they don't feel excited about it. But I think that's evidence of why diminishing marginal returns isn't exactly the right model. It's actually an S-curve. I think if there'd never been any investment and discussion about AI, and now suddenly we're like, "Oh my God, AI's this big thing," we wouldn't know what on earth to do about it. So there's an initial period where you're getting increasing returns, where you're just actually figuring out where you can contribute. And that's interesting, because if you get that increasing-returns dynamic, it means you don't want to spread out too much, even if it's the case that–
Robert Wiblin: It’s a reason to group a little bit more.
Will MacAskill: Exactly. Yeah. And so I mean, a couple of reasons favor AI work over these other things that maybe I think are just as important in the grand scheme of things. First, we've already paid the sunk cost of building the infrastructure to have an impact there. And then secondly, there's the fact that, just entirely objectively, it's boom time in AI. So if there's any time to focus on it, it's when there are vast increases in inputs. And so perhaps my conclusion is that I'm just as worried about war or genetic enhancement or something, but now that we've made the bet, we should follow through with it. But overall I still actually would be pretty pro people doing some significant research into other potential top causes, and then figuring out what the next thing we focus quite heavily on should be.
Robert Wiblin: I guess especially people who haven't already committed to working on some other area, who are still very flexible. For example, maybe you should go and think about great power conflict if you're still an undergraduate student.
Will MacAskill: Yeah, for sure, and then especially different kinds of causes. One issue we've found is that we talk so much about biorisk and AI risk, and they're quite weird, small causes that can't necessarily absorb large numbers of people, perhaps who don't have… Like, I couldn't contribute to biorisk work, nor do I have a machine learning background and so on, whereas some other causes like climate change and great power war can potentially absorb much larger numbers of people, and that could be a strong reason for looking into them more too.
I didn't mention it in the post because I wanted to keep it short, but there was a related discussion on a recent 80,000 Hours podcast with Will MacAskill with some good points: