Could you expand on what you regard as a key difference in our epistemic position with respect to animals vs. even in theory with respect to AI systems? Could this difference be put in terms of a claim you accept when applied to animals but not even in theory when applied to AI systems?
In connection with evaluating animal/AI consciousness, you mention behavior, history, incentives, purpose, and mechanism. Do you regard any of these factors as most directly relevant to consciousness? Are any of these only relevant as proxies for, say, mechanisms?
(My hunch is that more information on these points would make it easier for me or other readers to try to change your mind!)
Re “A crux here is that philosophy of mind doesn’t really make much progress”: for what it’s worth, from the inside of the field, it feels to me like philosophy of mind makes a lot of progress, but (i) the signal-to-noise ratio in the field is bad, (ii) the field is large, sprawling, and uncoordinated, (iii) an impact-focused mindset is rare within the field, and (iv) only a small percentage of the effort in the field has been devoted to producing research that is directly relevant to AI welfare. This suggests to me that even if there isn’t a lot of relevant, discernible-from-the-outside progress in philosophy of mind, relevant progress may be fairly tractable.