Improving our understanding of consciousness / conscious experience
Improving our neuroimaging capabilities
Without a solid theory of consciousness, our views about what matters will keep being based on moral intuitions, and it will be hard to make progress on disputes.
[Unstructured, quickly written collection of reactions]
I agree that those two things would be valuable, largely for the reason you mention. Improving our neuroimaging capabilities could also be useful for some interventions to reduce long-term risks from malevolence.
Though there could also be some downsides to each of those things; e.g., better neuroimaging could perhaps be used for purposes that make totalitarianism or dystopias more likely/worse in expectation. (See “Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those political changes have on the long-term future?”)
---
I think the main reason I didn’t already include a question directly about consciousness is what’s captured here:
This post can be seen as collecting questions relevant to the “strategy” level.
One could imagine a version of this post that “zooms out” to discuss crucial questions on the “values” level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption.
Though I acknowledge that this division is somewhat arbitrary, and also that consciousness is at least arguably/largely/somewhat an empirical rather than “values”/”moral” matter. (One reason I’m implicitly putting it partly in the “moral” bucket is that we might be most interested in something like “consciousness of a morally relevant sort”, such that our moral views influence which features we’re interested in investigating.)
---
After reading your comment, I skimmed through the list of questions again to see which of the questions I already had were closest, and where those points might “fit”. Here are the questions I saw that seemed related (though they don’t directly address our understanding of consciousness):
What is the possible quality of the human-influenced future?
How does the “difficulty” or “cost” of creating pleasure vs. pain compare?
Can and will we expand into space? In what ways, and to what extent? What are the implications?
Will we populate colonies with (some) nonhuman animals, e.g. through terraforming? [it’s the implications of terraforming that make this relevant]
Can and will we create sentient digital beings? To what extent? What are the implications?
Would their experiences matter morally?
Will some be created accidentally?
[...]
How close to the appropriate size should we expect influential agents’ moral circles to be “by default”?
Some related material in this blog post: How understanding valence could help make future AIs safer
Thanks for this!
fwiw I would definitely bucket consciousness research and neuroimaging under “strategy”, though agree that the bucketing is somewhat arbitrary.