I think more powerful (aligned) LLMs would lead to more awesome stuff, so caution on LLMs does delay other awesome stuff.
I agree with the point that “there’s value that can be gained from figuring out how to apply systems at current capabilities levels” (AI summer harvest), but I wouldn’t go as far as “you can almost have the best of both worlds.” It seems more like “we can probably do a lot of good with existing AI, so even though there are costs of caution, those costs are worth paying, and at least we can make some progress applying AI to pressing world problems while we figure out alignment/governance.” (My version isn’t catchy though, oops).
Sure.
Often when people talk about awesome stuff they’re not referring to LLMs. In this case, there’s no need to slow down the awesome stuff they’re talking about.
Lots of awesome stuff requires AGI or superintelligence, and people think LLMs (or stuff LLMs invent) will lead to AGI or superintelligence.
So wouldn’t slowing down LLM progress slow down the awesome stuff?
Yeah, that awesome stuff.
My impression is that most people who buy “LLMs --> superintelligence” still favor caution, even though caution slows down the awesome stuff.
But this thread seems unproductive.