I’d have to think more carefully about the probabilities you came up with and the model for the headline number, but everything else you discuss is pretty consistent with my view. (I also did a PhD in post-silicon computing technology, but unlike Ted I went right into industry R&D afterwards, so I imagine I have a less synoptic view of things like supply chains. I’m a bit more optimistic, apparently—you assign <1% probability to novel computing technologies running global-scale AI by 2043, but I put down a full percent!)
The table “Examples transistor improvements from history (not cherry-picked)” is interesting. I agree that the examples aren’t cherry-picked, since I had nearly the same list (I decided to leave out lithography and included STI and the CFET on imec’s roadmap), but you could choose different prototype dates depending on what you’re interested in.
I think you’ve chosen a fairly relaxed definition for “prototype”, which is good for making the point that it’s almost certain that the transistors of 2043 will use a technology we already have a good handle on, as far as theoretical performance is concerned.
Another idea would be to follow something like this IRDS table that splits out “early invention” and “focused research”. They use what looks like a stricter interpretation of invention—they don’t explain further or give references, but I suspect they just have in mind more similarity to the eventual implementation in production. (There are still questions about what counts, e.g., 1987 for tri-gate or 1998 for FinFET?) That gives about 10–12 years from focused research to volume production.
So even if some unforeseeable breakthrough is more performant or more easily scalable than what we’re currently thinking about, it still looks pretty tough to get it out by 2043.
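To make that concrete, here’s a quick back-of-envelope sketch (mine, not from your post): taking the ~10–12 year research-to-production lag from the IRDS-style table above, plus an assumed few extra years to ramp from first volume production to the scale needed for global-scale AI (the 5-year ramp figure is purely my guess), you can work out how soon focused research would have to start.

```python
# Back-of-envelope timeline check. Assumptions (mine, not from the post):
#   - ~10-12 years from "focused research" to volume production (IRDS-style lag)
#   - ~5 more years to ramp from volume production to global-scale AI deployment
TARGET_YEAR = 2043
RESEARCH_TO_PRODUCTION_YEARS = (10, 12)  # rough range read off the IRDS table
RAMP_TO_GLOBAL_SCALE_YEARS = 5           # my guess; volume production != dominant share

for lag in RESEARCH_TO_PRODUCTION_YEARS:
    latest_start = TARGET_YEAR - lag - RAMP_TO_GLOBAL_SCALE_YEARS
    print(f"With a {lag}-year research-to-production lag, focused research "
          f"would need to start by ~{latest_start}.")
```

Under those (debatable) numbers, any breakthrough not already in focused research by the late 2020s has essentially no slack, which is why I end up roughly where you do.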
(Here’s my submission—I make some similar points but don’t do as much to back them up. The direction is more like “someone should try taking this sort of thing into account”—so I’m glad you did!)