Hey Nate, congratulations! I think we briefly met in the office in February when I asked Luke about his plans; now it turns out I should have been quizzing you instead!
I have a huge list of questions; basically the same list I asked Seth Baum, actually. Feel free to answer as many or as few as you want. Apologies if you’ve already written on the subject elsewhere; feel free to just link if so.
What is your current marginal project (or projects)? How much will it cost, and what's the expected output (if it gets funded)?
What is the biggest mistake you’ve made?
What is the biggest mistake you think others make?
What is the biggest thing you’ve changed your mind about recently? (say past year)
How do you balance the likelihood/risks of
FAI supergood
Everything continues much as now
UFAI
e.g. for what p would you prefer a p chance of FAI and a 1-p chance of UFAI over a guarantee of mankind continuing in an AGI-less fashion? (Does this make sense in your current ontology?)
What’s your probability distribution for AGI timescale?
Do you have any major disagreements with Eliezer or Luke about 1) expectations for the future 2) strategy?
What do you think about the costs and benefits of publishing in journals as strategy?
Do you think the world has become better or worse over time? How? Why?
Do you think the world has become more or less at risk over time? How? Why?
What do you think about Value Drift?
What do you think will be the impact of the Elon Musk money?
How do you think about weighing future value vs current value?
Personal question, feel free to disregard, but this is an AMA:
How has concern about AI affected your personal life, beyond the obvious? Has it affected your retirement savings? Do you plan to have / already have children?
Hey Larks, that’s a huge set of questions. It might be helpful to take some themed bundles of questions from here and split them off into their own comments, so that others can upvote and read the questions according to their interest.