+1 on this being a relevant intuition. I’m not sure how limited these scenarios are—aren’t information asymmetries and commitment problems really common?
Today, somewhat, but that’s just because human brains can’t prove the state of their beliefs or share specifications with each other (i.e., humans can lie about anything). There is no reason for artificial brains to have these limitations, and if there is any trend towards communal/social factors in intelligence, or towards self-reflection (which is required for recursive self-improvement), then it becomes actively costly to be cognitively opaque.
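For concreteness, here’s a toy commit-reveal sketch (my own illustration, not anything established about how future AIs would actually coordinate) of the kind of binding commitment mechanism that’s trivial for software agents but unavailable to human brains: an agent publishes a hash of its decision policy up front, and any counterparty can later check that the revealed policy is the one it committed to.

```python
import hashlib
import secrets

def commit(policy: str) -> tuple[str, str]:
    """Commit to a policy string without revealing it.

    Returns (commitment, nonce). The commitment can be published now;
    the policy and nonce are revealed later for verification.
    """
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256((nonce + policy).encode()).hexdigest()
    return commitment, nonce

def verify(commitment: str, policy: str, nonce: str) -> bool:
    """Check that a revealed policy matches the earlier commitment."""
    return hashlib.sha256((nonce + policy).encode()).hexdigest() == commitment

# The agent binds itself to a (hypothetical, illustrative) policy...
commitment, nonce = commit("cooperate iff counterparty's audited policy also cooperates")
# ...and a counterparty later verifies the revealed policy against the commitment.
assert verify(commitment, "cooperate iff counterparty's audited policy also cooperates", nonce)
```

A human can’t do the equivalent of this for their own beliefs or intentions, which is exactly the asymmetry-and-commitment gap being pointed at.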
I agree that they’re really common in the current world. I was originally thinking that this might become substantially less common in multipolar AGI scenarios (because future AIs may have better trust and commitment mechanisms than current humans do). Upon brief reflection, I think my original comment was overly concise and not very well substantiated.