No, I’m saying the nearer and more probable you think doom-causing AGI is, and the longer you stagnate on solving the problem, the less sense it makes not to let the rest of the world in on the work. If you don’t, you’re very probably doomed. If you do, you’re still very probably doomed, but at least you have orders of magnitude more people collaborating with you to prevent it, thus increasing the chance of success.
I think what you said makes sense.
(As a presumptuous comment) Based on strong circumstantial evidence, I don’t have a positive view of the work. However, playing devil’s advocate:
There are very few good theories of change for very short timelines, and one of them is “build it yourself.” So I don’t see how sharing that is good.
Alignment might be entangled in this to the degree that sharing even alignment might be capabilities research.
The above might be awful beliefs, but I don’t see how they’re wrong.
By the way, just to calibrate, so people can tell whether I’m crazy:
It reads like MIRI, or closely related people, have tried to build AGI or find the requisite knowledge many times over the years. The negative results seem to have been an update to their beliefs.
Thanks. That kinda sorta makes sense. I still think that if they’re trying to build an aligned AGI, it’s arrogant and unrealistic to think a small group that isn’t collaborating with others can achieve it faster than the entire AI capabilities community, who are basically all collaborating with each other.