‘Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.’
This gives me the same feeling as Rebecca’s original post: that you have specific information about very bad stuff that you are (for good or bad reasons) not sharing.
I don’t particularly feel like my knowledge here is confidential; it would just take a bunch of inferential distance to cross. I do have some confidential information, but it doesn’t feel that load-bearing to me.
This dialogue has a bit of a flavor of the kind of thing I am worried about: https://www.lesswrong.com/posts/vFqa8DZCuhyrbSnyx/integrity-in-ai-governance-and-advocacy?revision=1.0.0