I agree that if you have to slow down all AI progress or none of it, you should slow it all down. But fortunately, you don’t—you can almost have the best of both worlds.
Insofar as AI x-risk looks like LLMs while awesome stuff like medicine (and robotics and autonomous vehicles and more) doesn’t look like LLMs, caution on LLMs doesn’t delay other awesome stuff.* So when you talk about slowing AI progress, make it clear that you only mean AI on the path to dangerous capabilities.
*That’s not exactly true: e.g. maybe an LLM can automate medical research, or recursively bootstrap itself to godhood and then solve medicine. But “caution with LLMs” doesn’t conflict with “progress on medicine now.”
I think more powerful (aligned) LLMs would lead to more awesome stuff, so caution on LLMs does delay other awesome stuff.
I agree with the point that “there’s value that can be gained from figuring out how to apply systems at current capabilities levels” (AI summer harvest), but I wouldn’t go as far as “you can almost have the best of both worlds.” It seems more like “we can probably do a lot of good with existing AI, so even though there are costs of caution, those costs are worth paying, and at least we can make some progress applying AI to pressing world problems while we figure out alignment/governance.” (My version isn’t catchy though, oops).
Sure.
Often when people talk about awesome stuff, they're not referring to LLMs. In that case, there's no need to slow down the awesome stuff they're talking about.
Lots of awesome stuff requires AGI or superintelligence. People think LLMs (or stuff LLMs invent) will lead to AGI or superintelligence.
So wouldn’t slowing down LLM progress slow down the awesome stuff?
Yeah, that awesome stuff.
My impression is that most people who buy "LLMs --> superintelligence" favor caution even though it slows the awesome stuff.
But this thread seems unproductive.
AI biologists seem extremely dangerous to me—something "merely" as good at viral genomes as GPT-4 is at language would already be an existential threat to human civilization, if not necessarily to Homo sapiens.