Excited to see this team expand! A few [optional] questions:
- What do you think were some of your best and worst grants in the last 6 months?
- What are your views on the value of “prosaic alignment” relative to “non-prosaic alignment”? To what extent do you think the most valuable technical research will look fairly similar to “standard ML research”, “pure theory research”, or other kinds of research?
- What kinds of technical research proposals do you think are most difficult to evaluate, and why?
- What are your favorite examples of technical alignment research from the past 6-12 months?
- What, if anything, do you think you’ve learned in the last year? What advice would you have for a Young Ajeya who was about to start in your role?