Sure.
Often when people talk about awesome stuff, they're not referring to LLMs. In that case, there's no need to slow down the awesome stuff they're talking about.
Lots of awesome stuff requires AGI or superintelligence. People think LLMs (or stuff LLMs invent) will lead to AGI or superintelligence.
So wouldn’t slowing down LLM progress slow down the awesome stuff?
Yeah, that awesome stuff.
My impression is that most people who buy "LLMs --> superintelligence" favor caution, even though caution would slow down the awesome stuff.
But this thread seems unproductive.