Comment: “Very short timelines” might be conflated with “inevitability”.
(The following isn’t my own idea; I’ve come across it several times now. It seems worth sharing, even though my explanation is fairly basic.)
For many people with short timelines, it’s less that they view AGI as coming in “15 or 50 years”, and more that they view the “shape of the path” by which AGI emerges as inevitable in some deep sense.
To explain it one way: to these people, watching civilization try to avoid dangerous AGI is sort of like watching a drunkard walk forward across a landscape full of deep, dangerous holes, and the holes get bigger and bigger as the drunkard walks.
Eventually, the holes get so big, with such vast, slippery slopes, that even a skilled walker can’t avoid slipping into one.
To get more “gearsy”: people with this pessimistic view believe that AI hardware, models, and training techniques will keep improving and become widely distributed, while government regulation will be highly inadequate (e.g. due to “Moloch”-style coordination failures) and won’t come close to effectively preventing or governing AGI.
On this view, things get even worse once you consider other civilizations (“grabby aliens”). If you think aggressive AGI is inevitable, i.e. that it’s a “lower entropy” state, then it must also be inevitable for any other civilization. So even if your civilization manages to escape it, some other civilization will stumble into it, and it seems likely that some aggressive AGI will eventually emerge, prevail, and grab the other civilizations.
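To make that last step concrete with a toy calculation (my own framing and made-up numbers, not part of the original argument): if each of many independent civilizations has even a modest chance of producing aggressive AGI, the probability that none of them ever does shrinks toward zero as the number of civilizations grows.

```python
# Toy sketch with hypothetical numbers: the chance that at least one of n
# independent civilizations eventually produces an aggressive AGI, assuming
# each one has probability p of doing so.

def p_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent civilizations produces aggressive AGI."""
    return 1 - (1 - p) ** n

for n in (1, 10, 100):
    print(f"n={n}: {p_at_least_one(0.1, n):.5f}")
# n=1: 0.10000, n=10: 0.65132, n=100: 0.99997
```

Even with a much smaller per-civilization probability, the same limit holds; it just takes more civilizations.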
All of this might be relevant to S-risk, because even if you can’t prevent AGI, you can still shape the path along which it emerges, and thereby might avoid the extremely dark, S-risk scenarios.
If you believe AGI is this inevitable, then it’s logical to believe it can be found (and that focused effort can find it ahead of everyone else). This explains why some subset of people might be “trying to find AGI”, or taking certain other interventions that might seem wild to someone without this perspective.
Note that some people with these beliefs might not put that high a probability on S-risk, or even hold confident timelines for AGI. It’s more that they view S-risk as so extremely bad that it warrants serious attention (certainly more than it gets right now). The reason for pointing this out is that the actual probability of S-risk might be low, and acknowledging that lower probability might make the presentation of this view more effective and reasonable.