A couple of further considerations, or “stops on the crazy train”, that you may be interested in:
Is the potential astronomical waste in our universe too small to care about?
Beyond Astronomical Waste
(These were written in an x-risk framing, but implications for s-risk are fairly straightforward.)
As far as actionable points go, I’ve been advocating working on metaphilosophy or AI philosophical competence, both as a way of speeding up philosophical progress in general (so that it doesn’t fall behind other kinds of intellectual progress, such as scientific and technological progress, that seem likely to be greatly sped up by AI development by default) and as a way of improving the likelihood that human-descended civilization(s) eventually reach correct conclusions on important moral and philosophical questions and are motivated/guided by those conclusions.
In posts like this and this, I have lamented the extreme neglect of this field, even among people otherwise interested in philosophy and AI, such as yourself. It seems particularly puzzling that no professional philosopher has even publicly expressed a concern about AI philosophical competence and the related risks (at least AFAIK), even as developments such as ChatGPT have greatly increased societal attention on AI and AI safety over the last couple of years. I wonder if you have any insights into why that is the case.
Lower than 1%? A lot more uncertainty due to important unsolved questions in philosophy of mind.
I agree that there is a lot of uncertainty, but I don’t understand how that is compatible with a <1% likelihood of AI sentience. Doesn’t that represent near certainty that AIs will not be sentient?
Thanks a lot for the links, I will give them a read and get back to you!
Regarding the “Lower than 1%? A lot more uncertainty due to important unsolved questions in philosophy of mind.” part, that was a mistake because I was thinking of current AI systems. I will delete the % credence, since I have so much uncertainty that any theory or argument I find compelling (for the substrate-dependence or substrate-independence of sentience) would change my credence substantially.