It cannot both be controllable because it’s weak and also uncontrollable.
That said, I expect more advanced techniques will be needed for more advanced AI; I just think control techniques will probably keep up, without sudden changes in control requirements.
Also, LLMs are more controllable than weaker, older designs (compare GPT-4 with Tay).
Yes, but this is no comfort to me in terms of p(doom|AGI). There will be sudden changes in control requirements, judging by the big leaps in capability between GPT generations.
Being more controllable is one thing, but it doesn’t matter much for reducing x-risk when the numbers being talked about are “29%”.
That’s what I meant by “phase transition”.