Hey! I’m not sure I see the prima facie case for #1. What makes you think that building non-conscious AI would be more resource-intensive/expensive than building conscious AI? Current AIs are most likely non-conscious.
As for #2, I have heard such arguments before in other contexts (relating to the meat industry), but I found them to be preposterous on the face of it.
Hello, to clarify #1 I would say:
It could be the case that future AI systems are conscious by default, and that it is difficult to build them without them being conscious.
Let me try to spell out my intuition here:
1. If many organisms have property X, and property X is rare amongst non-organisms, then property X is likely evolutionarily advantageous.
2. Consciousness meets this condition, so it is likely evolutionarily advantageous.
3. The advantage that consciousness gives us is most likely something to do with our ability to reason, adapt behaviour, control our attention, compare options, and so on. In other words, it’s a “mental advantage” (as opposed to e.g. a physical or metabolic advantage).
4. We will put a lot of money into building AI that can reason, problem solve, adapt behaviour appropriately, control attention, compare options, and so on. Given that many organisms employ consciousness to efficiently achieve these tasks, there is a non-trivial chance that AI will too.
To be clear, I don’t know that I would say “it’s more likely than not that AI will be conscious by default”.
Ah, I think I see where you’re coming from. Of your points I find #4 to be the most crucial. Would it be too egregious to summarise this notion as: (i) all of these capabilities are super useful & (ii) consciousness will [almost if not actually] “come for free” once these capabilities are sufficiently implemented in machines?
I think you’ve understood me!
Do you think that consciousness will come for free? To me it seems like a very complex phenomenon that would be hard to engineer accidentally. On top of this, the more permissive your view of consciousness (veering towards panpsychism), the less ethically important consciousness becomes (since rocks & electrons would then have moral standing too). So if consciousness is to be a ground of moral status, it needs to be somewhat rare.