Hello! I really enjoyed your 80,000 Hours interview, and thanks for answering questions!
1 - Do you have any thoughts about the prudential/personal/non-altruistic implications of transformative AI in our lifetimes?
2 - I find fairness agreements between worldviews unintuitive but also intriguing. Are there any references you’d suggest on fairness agreements besides the OpenPhil cause prioritization update?
Thanks, I’m glad you enjoyed it!
I haven’t put a lot of energy into thinking about personal implications, and don’t have very worked-out views right now.
I don’t have a citation off the top of my head for fairness agreements specifically, but they’re closely related to “variance normalization” approaches to moral uncertainty, which are described here (that blog post links to a few papers).