Hi Daniel,
Thanks for the reply! I am a bit surprised at this:
The quippy version is that, if we’re EAs trying to maximize utility, and we don’t have a good understanding of what utility is, more clarity on such concepts seems obviously insanely high-leverage. I’ve written about specifics relevant to FAI here: https://opentheory.net/2015/09/fai_and_valence/
Relevance to building a better QALY here: https://opentheory.net/2015/06/effective-altruism-and-building-a-better-qaly/
And I discuss object-level considerations on how a better understanding of emotional valence could lead to novel therapies for well-being here: https://opentheory.net/2018/08/a-future-for-neuroscience/ and https://opentheory.net/2019/11/neural-annealing-toward-a-neural-theory-of-everything/
On mobile, pardon the formatting.
Your points about sufficiently advanced AIs obsoleting human philosophers are well-taken, though I’d return to my concern that we won’t have much clarity on philosophical path-dependencies in AI development without doing some of the initial work ourselves, and these questions could end up being incredibly significant for our long-term trajectory. I gave a talk about this for MCS that I’ll try to get transcribed (in the meantime, I can share my slides if you’re interested). I’d also like to flip your criticism and ask for a positive model for directing EA donations: is the implication that there are no good places to donate, or that narrow-sense AI safety is the only useful target? What do you think the highest-leverage questions to work on are? How big are your ‘metaphysical uncertainty error bars’, and what sorts of work would shrink them?
Sorry for the delayed reply! Didn’t notice this until now.
Sure, I’d be happy to see your slides, thanks! Looking at your post on FAI and valence, reasons 3, 4, 5, and 9 seem somewhat plausible to me. I also agree that there might be philosophical path-dependencies in AI development and that doing some of the initial work ourselves might help to discover them, but I feel like QRI isn’t aimed at this directly and could achieve it much better if it were; as things stand, any such discovery would be a side-effect of QRI’s research.
For your flipped criticism:
--I think bolstering the EA community and AI risk communities is a good idea
--I think “blue sky” research on global priorities, ethics, metaphilosophy, etc. is also a good idea if people seem likely to make progress on it
--Obviously I think AI safety, AI governance, etc. are valuable
--There are various other things that seem valuable because they support those things, e.g. trying to forecast and/or prevent the decline of collective epistemology.
--There are various other things that don’t impact AI safety but independently have a decently strong case that they are similarly important, e.g. ALLFED or pandemic preparedness.
--I’m probably missing a few things
--My metaphysical uncertainty… If you mean how uncertain I am about various philosophical questions like what is happiness, what is consciousness, etc., then the answer is “very uncertain.” But I think the best thing to do is not to try to think about these questions directly now, but rather to stabilize the world and get to the Long Reflection, so we can think about them longer and better later.