I have this fresh in my mind as we’ve had some internal discussion on the topic at Convergence. My personal take is that “consciousness” is a bit of a trap subject because it bundles together a set of distinct complex questions, people talk about it differently, it’s hard to peer inside the brain, and there’s a slight mystification because consciousness feels a bit magical from the inside. Sub-topics include but are not limited to: 1. Higher-order thought. 2. Subjective experience. 3. Sensory integration. 4. Self-awareness. 5. Moral patienthood.
My recommendation is to try and talk in terms of these sub-topics as much as possible rather than the fuzzy, differently understood, and massive concept “consciousness”.
Is contributing to this work useful/effective? Well, I think it will be more useful if, when one works in this domain (or these domains), one has specific goals (more in the direction of “understand self-awareness” or “understand moral patienthood” than “understand consciousness”) and pursues them for specific purposes.
My personal take is that the “direct AI risk reduction work” with the highest current value is AI strategy and AI governance. Hence, I would reckon that consciousness-related work with clear bearing on AI strategy and AI governance can be impactful.