Ah, I think I see where you’re coming from. Of your points I find #4 to be the most crucial. Would it be too egregious to summarise this notion as: (i) all of these capabilities are super useful & (ii) consciousness will [almost if not actually] “come for free” once these capabilities are sufficiently implemented in machines?
What claim is being made here?
Re: track record, I’m a coauthor on a position paper that we’ve been gradually rolling out to reviewers who are well established in this area.
Finally, please find information about the aims of the survey in the comment below & at this webpage.
What?
Inspired by the PhilPapers survey, we are conducting a survey of experts’ beliefs about key topics pertaining to AI consciousness & moral status. These include:
🧠 Consciousness/sentience
⚖️ Moral status/moral agency
💥 Suffering risk (“S-risk”) related to AI consciousness (e.g. AI suffering)
⚠️ Existential risk (“X-risk”) related to AI consciousness (e.g. resource competition with conscious AI)
Why?
Such a survey promises to enrich our understanding of key safety risks related to conscious AI in several ways.
📊 Most importantly, the results of this survey will provide a general picture of experts’ views about the probability, promises, & perils of AI consciousness.
This summary can be used to gauge expert opinion & make it easier for policymakers, journalists, & lay people to see where the experts stand, where they disagree, & where there is uncertainty.
Furthermore, the survey results can be of use to experts themselves, who may harbour misconceptions about what most other experts believe, or, owing to their specialisation, may not be abreast of advances in other areas of AI research.
Overall, the survey enhances the accessibility of AI research, ultimately contributing to a more AI-literate (& hence, better-prepared) populace.
⚔️ Analysing the types of answers given by respondents might help to identify fault lines between industry, academia, & policy.
📈 Repeating the survey on an annual basis can assist in monitoring trends (e.g. updates in belief in response to technological advances/breakthroughs, differences in attitudes between industry & academia, emergent policy levers, etc.).
Hey! I’m not sure I see the prima facie case for #1. What makes you think that building non-conscious AI would be more resource-intensive/expensive than building conscious AI? Current AIs are most likely non-conscious.
As for #2, I’ve heard such arguments before in other contexts (relating to the meat industry), but I find them preposterous on their face.
Do you think that consciousness will come for free? It seems to me like a very complex phenomenon that would be hard to engineer accidentally. On top of this, the more permissive your view of consciousness (veering towards panpsychism), the less ethically important consciousness becomes (since rocks & electrons would then have moral standing too). So if consciousness is to be a ground of moral status, it needs to be somewhat rare.