I don’t particularly feel like my knowledge here is confidential; it would just take a bunch of inferential distance to cross. I do have some confidential information, but it doesn’t feel that load-bearing to me.
This dialogue has a bit of a flavor of the kind of thing I am worried about: https://www.lesswrong.com/posts/vFqa8DZCuhyrbSnyx/integrity-in-ai-governance-and-advocacy?revision=1.0.0