[Unstructured, quickly written collection of reactions]
I agree that those two things would be valuable, largely for the reason you mention. Improving our neuroimaging capabilities could also be useful for some interventions to reduce long-term risks from malevolence.
Though there could also be some downsides to each of those things; e.g., better neuroimaging could perhaps be used for purposes that make totalitarianism or dystopias more likely/worse in expectation. (See "Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those political changes have on the long-term future?")
---
I think the main reason I didn't already include a question directly about consciousness is what's captured here:
This post can be seen as collecting questions relevant to the "strategy" level.
One could imagine a version of this post that "zooms out" to discuss crucial questions on the "values" level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption.
Though I acknowledge that this division is somewhat arbitrary, and also that consciousness is at least arguably/largely/somewhat an empirical rather than "values"/"moral" matter. (One reason I'm implicitly putting it partly in the "moral" bucket is that we might be most interested in something like "consciousness of a morally relevant sort", such that our moral views influence which features we're interested in investigating.)
---
After reading your comment, I skimmed again through the list of questions to see which of the things I already had were closest, and where those points might "fit". Here are the questions I saw that seemed related (though they don't directly address our understanding of consciousness):
- What is the possible quality of the human-influenced future?
- How does the "difficulty" or "cost" of creating pleasure vs. pain compare?
- Can and will we expand into space? In what ways, and to what extent? What are the implications?
  - Will we populate colonies with (some) nonhuman animals, e.g. through terraforming? [it's the implications of terraforming that make this relevant]
- Can and will we create sentient digital beings? To what extent? What are the implications?
  - Would their experiences matter morally?
  - Will some be created accidentally?
[...]
- How close to the appropriate size should we expect influential agents' moral circles to be "by default"?
---
Some related material in this blog post: How understanding valence could help make future AIs safer
Thanks for this!
fwiw I would definitely bucket consciousness research and neuroimaging under "strategy", though agree that the bucketing is somewhat arbitrary.