AMA: Toby Ord @ EA Global: Reconnect
Update: Here’s the video of Toby’s session.
As he did last year, Toby Ord will be answering questions submitted by community members during one of the main sessions at EA Global: Reconnect.
Submit your questions here by 11:59 pm PDT on Thursday, March 18, or vote for the questions you most want Toby to answer.
About Toby
Toby Ord is a moral philosopher at Oxford University’s Future of Humanity Institute and the author of The Precipice.
Toby’s work focuses on the big-picture questions facing humanity: What are the most important issues of our time? And how can we best address them? His current research is on avoiding the threat of human extinction, which he considers to be among the most pressing and neglected issues we face.
His earlier work explored the ethics of global health and global poverty, which led him to create Giving What We Can, whose members have pledged hundreds of millions of pounds to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement.
He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science. His work has been featured more than a hundred times in the national and international media.
You’ve previously spoken about the need to reach “existential security”—in order to believe the future is long and large, we need to believe that existential risk per year will eventually drop very close to zero. What are the best reasons for believing this can happen, and how convincing do they seem to you? Do you think that working on existential risk reduction or longtermist ideas would still be worthwhile for someone who believed existential security was very unlikely?
+1, very interested in this. I didn’t find the reasons given in The Precipice compelling or detailed enough, so I’d be curious to hear more.
It’s been almost a year since the release of The Precipice. How do you feel about its reception? What positive impact do you think the book has already had?
Personally, I really liked the audiobook, and I think it’s great that the book has been read or is being read by a lot of people in EA, especially those taking up Intro EA fellowships. I think the book is a great resource for educating people about longtermism and existential risk, but I’m curious to hear what you think about its reception and impact!
How do your thoughts on career advice differ from those of 80,000 Hours? If you could offer only generic advice in a paragraph or three, what would you say?
How were you able to advise top organizations such as the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science?
How can the EA community have a bigger positive influence on these top organizations?
One might think that most of the best opportunities to reduce existential risk over this century could be sufficiently justified solely on the grounds of reducing catastrophic risk to people who will live during this century. What do you think about that?
What are some central examples of practical overlap between the goals of reducing existential risk and reducing catastrophic risk this century? What are some central examples of practical divergence?
Do you see existential risks being mitigated without (1) strong governmental policy on those issue areas and (2) the ability for those policies to be sustained over a long time scale?
Follow-ups if yes:
1. How urgent is having a system where those governmental policies can reliably take hold?
2. Which country or countries should be prioritized?
Follow-up if no:
1. What would you recommend we focus on alongside or instead of governmental policy changes?
What does your personal investment portfolio look like? Are there any unusual steps you’ve taken due to your study of the future? What aspect of your approach to personal investment do you think readers might be wise to consider?
What are some of the central feedback loops by which people who are hoping to positively influence the long run future can evaluate their efforts? What are some feedback sources that seem underrated, or at least worth further consideration?
Do you think the emergence of COVID-19 has increased or decreased our level of existential risk this century, and why?
You recently shared a rather sweet anecdote about your daughter volunteering to be the youngest person in the world to take part in a COVID vaccine trial. This got me wondering: how do you think about parenting in relation to your career and commitments as an effective altruist? What crucial considerations (if any) do you think EAs should take into account when thinking about whether or not to become parents?
My understanding is that there remain a few important points of (friendly) disagreement between you and Will MacAskill, e.g. on the influentialness of the present and the probability of x-risk this century.
Are you interested in discussing these topics with Will further to potentially reach agreement, or to more accurately identify the specific points of contention?
Alternatively are you happy to just agree to disagree?
Hey Toby, according to your Wikipedia page you originally started off studying computer science and then moved into ethics thinking that this would help you make a large positive difference in the world.
Two questions:
1. Are you happy you made this move?
2. If you could give career advice to your teenage self, what would you say?
What do you think about the claim that the world will need to develop much more effective global surveillance and policing capabilities to achieve stability in the face of continued technological development?
Are you aware of promising research or practical proposals for how such systems might be implemented without grave risk of abuse? From the outside, serious discussion of this topic seems sparse (e.g. 80,000 Hours only has this). Do you think the topic is actually very neglected, or is most discussion taking place in private for some reason?
Are you interested in writing another book within the next 5 years? If so, what topics might you write about?
Are “existential risk / security factors” what you’d see as the current frontier in longtermist intervention research?
In The Precipice, you shared a personal estimate of the total risk of existential disaster in the next 100 years at ⅙.
What odds would you put on a catastrophe that leads us to record more than 500 million human deaths in a 12 month period before 2120?
Context: Our World in Data suggests that from 1950 to the present, 50-60 million people have died each year. They estimate the number will be in the 60-120 million range up to 2100.