Sorry for the delayed reply! Didn’t notice this until now.
Sure, I’d be happy to see your slides, thanks! Looking at your post on FAI and valence, reasons no. 3, 4, 5, and 9 seem somewhat plausible to me. I also agree that there might be philosophical path-dependencies in AI development, and that doing some of the initial work ourselves might help to discover them. But I feel like QRI isn’t aimed at this directly and could achieve it much better if it were; if it happens, it will be a side effect of QRI’s research.
For your flipped criticism:
--I think bolstering the EA and AI risk communities is a good idea.
--I think “blue sky” research on global priorities, ethics, metaphilosophy, etc. is also a good idea, if people seem likely to make progress on it.
--Obviously I think AI safety, AI governance, etc. are valuable.
--There are various other things that seem valuable because they support those things, e.g. trying to forecast the decline of collective epistemology and/or prevent it.
--There are various other things that don’t impact AI safety but independently have a decently strong case that they are similarly important, e.g. ALLFED or pandemic preparedness.
--I’m probably missing a few things.
--My metaphysical uncertainty… If you mean how uncertain I am about various philosophical questions like what is happiness, what is consciousness, etc., then the answer is “very uncertain.” But I think the best thing to do is not to tackle those questions directly now, but rather to try to stabilize the world and get to the Long Reflection, so we can think about them longer and better later.