[Question] Is contributing to the development of a comprehensive theory of consciousness useful/effective?

I’ve recently been reading The World Behind the World: Consciousness, Free Will, and the Limits of Science by neuroscientist Erik Hoel (it’s amazing, by the way; highly recommend) and wanted to share this snippet from the end of the sixth chapter, under the section “A Theory of Consciousness Cannot Come Soon Enough”:

We cannot wait too long...For we live in strange times: there are now creatures made only of language, and they make for potential substitutions. Contemporary AIs, like LaMDA, which is Google’s internal chatbot, have achieved fluency in their native element...The technology is moving so fast that questions of AI consciousness are now commonplace...AI [has] triggered a wave of discourse centered around the question: Are AIs conscious? Do they have an intrinsic perspective? It seems that, so far, most people have answered no. For how to judge with certainty whether an AI’s claim to consciousness is correct or incorrect, when it’ll say whatever we want it to? So many supposed experts immediately jumped into the fray to opine, but the problem is that we lack a scientific theory of consciousness that can differentiate between a being with actual experiences and a fake. If there were a real scientific consensus, then experts could refer back to it—but there’s not. All of which highlights how we need a good scientific theory of consciousness right now—look at what sort of moral debates we simply cannot resolve for certain without one. People are left only with their intuitions.

I’ve been intellectually interested in all things philosophy of mind, psychology, consciousness, and AI for a while now, and have seriously considered pursuing graduate research in those areas. The issue is that I’m also a naive undergraduate who feels compelled to do a lot of good with my life, and I’ve historically been unsure of the effectiveness of academic research of this sort.

This passage by Erik Hoel updated me: it seems likely that forging a theory of consciousness would in fact help make sense of all things AI (and humans, of course), and could thus contribute to AI safety work. Without such a theory, we cannot reliably determine whether an AI’s claims to consciousness are valid.

Of course, we are far, far from a comprehensive theory of consciousness, though I think chipping away at one, however slowly, is still possible and worthwhile. But resources are limited, as always, and I’ve also been concerned about AGI timelines recently.

What I’m looking for by mentioning all of this: advice and opinions, really.

  • Do you know of anyone working on consciousness/AI/cognitive science from the more philosophical, less technical side? Are they in EA? Do they have any thoughts on this (e.g., on the effectiveness of their own work)?

    • If so, I’d love to be put in touch with them! My email is juliana.eberschlag@gmail.com.

  • AGI timelines: Is doing research that is highly philosophical, rigorous, and uncertain* worth it?

    • *Uncertain in the sense that I’m unsure how much current consciousness work is actually moving the needle toward a comprehensive theory of consciousness; i.e., I don’t know whether I’d actually make a difference, but any marginal difference may still be vastly helpful in expectation.

  • Any other thoughts?

Thanks :)
