Also a big fan of your report. :)
Historically, what has caused the subjectively biggest-feeling updates to your timelines views? (e.g. arguments, things you learned while writing the report, events in the world).
Thanks! :)
The first time I really thought about TAI timelines was in 2016, when I read Holden’s blog post. That got me to take the possibility of TAI soonish seriously for the first time (I hadn’t been explicitly convinced of long timelines earlier or anything, I just hadn’t thought about it).
Then I talked more with Holden and technical advisors over the next few years, and formed the impression that many technical advisors believed a relatively simple argument: if a brain-sized model could be transformative, then there's a relatively tight argument implying it would take X FLOP to train it, and that much compute would become affordable in the next couple of decades. That meant that if we had a moderate probability on the first premise, we should have a moderate probability on TAI in the next couple of decades. This made me take short timelines even more seriously, because I found the biological analogy intuitively appealing and I didn't think the people who confidently disagreed had strong arguments against it.
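To make the "becomes affordable" step of that argument concrete, here is a minimal sketch with entirely hypothetical numbers (the training requirement, compute price, halving time, and budget below are all made up for illustration and are not the report's estimates): if training takes some fixed number of FLOP and the price of compute falls at a steady exponential rate, there is a crossover year when the run fits within a given budget.

```python
def affordable_year(req_flop, start_year=2020, price_per_flop=1e-17,
                    halving_years=2.5, budget_dollars=1e9):
    """Return the first year the training run costs <= budget_dollars,
    assuming the price of compute halves every `halving_years` years.
    All default parameter values are hypothetical."""
    year = start_year
    price = price_per_flop
    while req_flop * price > budget_dollars:
        year += halving_years
        price /= 2
    return year

# With these made-up inputs, a 1e30 FLOP run crosses the budget mid-century:
print(affordable_year(1e30))  # → 2055.0
```

The point is just that under steady exponential price declines, even a very large fixed compute requirement implies a concrete crossover date, which is why a moderate probability on the premise translates into a moderate probability on a date.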
Then in mid-2019 I started digging into those arguments for the project that ultimately became the report, and I started to be more skeptical again: even conditional on a brain-sized model constituting TAI, there are many different hypotheses you could hold about how much computation it would take to train it (what eventually became the biological anchors), and different technical advisors believed different versions. In particular, it felt like the notion of a horizon length made sense, and incorporating it into the argument(s) made timelines seem longer.
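The way a horizon length lengthens timelines can be sketched with a toy calculation (all numbers below are hypothetical placeholders, not estimates from the report): if total training FLOP scales roughly as (FLOP per subjective second) × (subjective seconds per training sample) × (number of samples), then the horizon length — how long a stretch of experience each sample spans — enters as a direct multiplier on compute.

```python
def training_flop(samples_needed, horizon_seconds, flop_per_subj_sec):
    """Toy model: total FLOP ~ per-second cost * seconds per sample * samples.
    All inputs here are hypothetical illustration values."""
    return flop_per_subj_sec * horizon_seconds * samples_needed

short = training_flop(1e10, horizon_seconds=1, flop_per_subj_sec=1e15)
long_ = training_flop(1e10, horizon_seconds=1000, flop_per_subj_sec=1e15)
print(long_ / short)  # the horizon length acts as a direct multiplier (~1000x here)
```

So, holding everything else fixed, moving from a 1-second to a 1000-second horizon multiplies the compute requirement by about a thousand, which pushes the affordability date correspondingly later.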
Then after I wrote up an earlier draft of the report, a number of people (including some with longish timelines) felt that I was underweighting the short and medium horizon lengths, which caused me to upweight those views somewhat.