At least for me, I thought we should avoid talking about the pivotal act stuff because of a combination of: a) this is obviously an important candidate hypothesis, but it seems bad to talk about because then the Bad Guys will Get Ideas; and b) other people who’re better at math/philosophy/alignment presumably know this and are privately considering it in detail, so I have only so much to contribute here.
b) is plausibly a dereliction of duty, as is my relative weighting of the two, but it wasn’t (isn’t?) obvious to me that it was wrong for me not to spend a ton of time thinking about pivotal acts.
That makes sense as a worry, but I think EAs’ caution and reluctance to model-build and argue about this stuff has turned out to do more harm than good, so we should change tactics. (And we very probably should have done things differently from the get-go.)
If you’re worried that it’s dangerous to talk about something publicly, I’d start off by thinking about it privately and talking it over with friends on Signal, etc. Then you can progress to contacting more EAs privately, then to posting publicly, as it becomes increasingly clear that “there’s real value in talking about this stuff” and “there’s not a strong enough reason to keep quiet”.
Step one in doing that, though, has to be a willingness to think about the topic at all, even if there isn’t clear public social proof that this is a normal or “approved” direction to think in. I think a thing that helps here is to recognize how small the group of “EA leaders and elite researchers” is, how divided their attention is between hundreds of different tasks and subtasks, and how easy it is for many things to therefore fall through the cracks or just-not-happen.