How do your thoughts on career advice differ from those of 80,000 Hours? If you could offer only generic advice in a paragraph or three, what would you say?
anonymous_banana
How has serving on the OpenAI board changed your thinking about AI?
Don’t tell me what you think, tell me what you have in your portfolio.
—Nassim Taleb

What does your personal investment portfolio look like? Are there any unusual steps you’ve taken due to your study of the future? What aspect of your approach to personal investment do you think readers might be wise to consider?
What are some of the central feedback loops by which people who are hoping to positively influence the long run future can evaluate their efforts? What are some feedback sources that seem underrated, or at least worth further consideration?
What do you think about the claim that the world will need to develop much more effective global surveillance and policing capabilities to achieve stability in the face of continued technological development?
Are you aware of promising research or practical proposals for how such systems might be implemented without grave risk of abuse? From the outside, serious discussion of this topic seems sparse (e.g. 80,000 Hours only has this). Do you think the topic is actually very neglected, or is most discussion taking place in private for some reason?
One might think that most of the best opportunities to reduce existential risk over this century could be sufficiently justified solely on the grounds of reducing catastrophic risk to people who will live during this century. What do you think about that?
What are some central examples of practical overlap between the goals of reducing existential risk and reducing catastrophic risk this century? What are some central examples of practical divergence?
In The Precipice, you shared a personal estimate of the total risk of existential disaster in the next 100 years at ⅙.
What odds would you put on a catastrophe that leads us to record more than 500 million human deaths in a 12-month period before 2120?
Context: Our World in Data suggests that from 1950 to the present, 50–60 million people have died each year. They estimate the annual number will be in the 60–120 million range up to 2100.
I had a similar experience in spring 2023 with an application to EAIF. The fundamental issue was the very slow process from application to decision, made worse by poor communication.