Questions for Nick Beckstead’s fireside chat at EAGxAPAC this weekend

Hey everyone, I’m hosting a 25-minute fireside chat with Nick Beckstead at the EAGxAPAC conference happening this weekend, and I’d like to get your thoughts and suggestions on what questions I should ask him.

The fireside chat is happening at 12:00 PM GMT+8 (my local time zone) on Sunday, Nov. 22. If you haven’t applied for the conference yet and would like to, you can do so until 11:59 PM PST on Wednesday, Nov. 18. You can view the schedule and list of speakers here.

About Nick

Nick is a program officer for the Open Philanthropy Project. He oversees a substantial part of Open Philanthropy’s research and grantmaking related to global catastrophic risk reduction.

About the fireside chat

This fireside chat will focus on questions people want to ask him about a 1-hour online talk he recently gave, “Existential risks: fundamentals, overview, and intervention points”. If you’re not yet familiar with the topic of existential risks, this talk should serve as a good, thorough overview.

I’ve watched parts of the talk, and I think there’s some new material here even for people already quite familiar with existential risks (e.g. those who have read “The Precipice”). I particularly like the section on three scenarios of how an existential catastrophe could arise from AI, which runs from 34:47 to 54:25.

Questions I’ve come up with

Here are some questions I’ve come up with that I’m thinking of asking him, roughly in this order. I may also come up with a few follow-up questions on the spot:

  1. What are you specifically working on at OpenPhil currently? How do you divide your time across different buckets of activities at OpenPhil?

  2. When it comes to existential risks, what have you most changed your mind on or updated your beliefs about within the last year?

  3. How does OpenPhil decide what x-risk-related research to do itself, such as its “worldview investigations”? For example, why did OpenPhil decide to undertake “How much computational power it takes to match the human brain” by Joseph Carlsmith and “Forecasting transformative AI with biological anchors” by Ajeya Cotra?

  4. How likely do you think it is that we will solve the problem of AI alignment before an existential catastrophe, and why?

  5. If we don’t solve the problem of AI alignment in time, is it almost certain (say, a 90%+ chance) that we will face an existential catastrophe?

  6. How or when would we confidently say, if it’s even possible, that we have “solved” the problem of AI alignment?

  7. In your talk, you describe three scenarios of how an existential catastrophe could arise from AI. Which of these do you think is most plausible?

  8. Of the three scenarios, stable authoritarianism and slow takeoff sound like they wouldn’t lead to human extinction. Would you say that humanity is less likely to go extinct because of AI, and more likely to end up in a dystopian scenario instead?

  9. Is it possible that stable authoritarianism arising from AI might not count as an existential catastrophe? It might be a trajectory change leading to worse outcomes, but perhaps not a “drastic curtailment” of our potential?

  10. What are your thoughts on Ajeya Cotra’s work on when transformative AI might be achieved? I know a public report hasn’t been released yet, but have you updated your personal forecasts of when transformative AI will be achieved based on it?

  11. Some effective altruists subscribe to patient longtermism: the view that, instead of focusing on reducing specific existential risks this century, we should expect the crucial moment to act to lie in the future, and that our main task now is to prepare for that time. What are your thoughts on this view, and do you think more effective altruists should take an interest in it?

  12. What underrepresented skills and backgrounds do you think the x-risk community needs more of?

  13. Most roles in the fields of AI policy, AI alignment, and biosecurity are in the U.S. or U.K. How do you think people outside these regions, especially those in the Asia-Pacific, could still contribute remotely? Is any one of these fields easier to contribute to remotely than the others?

  14. What projects or organizations in the field of existential risks would you like to see started in the Asia-Pacific region? How willing or excited would OpenPhil be to fund work in this region?

  15. What would you like to see happen in the effective altruism community in the Asia-Pacific region within the next 3 years?

Also, I want to ask a question that drills down into Nick’s response to population ethics as an objection to working on x-risk, which he discusses at 58:40. But I’m not familiar enough with population ethics to frame a question on it myself, so if someone could suggest one for him, let me know! Maybe I should also ask him a question about biorisk, but none come to mind currently. I hope someone can suggest one below.

Upvote or downvote these below

I’ve pasted these questions as comments below, and you can upvote the ones you want me to ask, or downvote the ones you think I shouldn’t ask. If you know the answer to any of the questions above, such as questions #5-6, feel free to comment with your answer too.

If there are other topics or questions you think I should consider asking him about, comment them below. It would also help to include some rationale for why you want him to answer that question. Thanks!