Nice post! I don't think we should assume that AI Moore's law would be capped by the regular Moore's law of total compute. If there is a new application of processors that is willing to pay a lot of money for a huge number of processors, I think we would build more chip fabs to keep up with demand. This does not necessarily accelerate the original Moore's law (transistors per chip), but it would accelerate total compute. This would be consistent with Robin Hanson's vision of value (and, I believe, roughly compute) doubling about every month. It's not even clear to me that the chips would be more expensive in such a scenario, assuming we actually planned well for it, because we generally see learning effects: cost per unit decreases with cumulative production.
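To make the learning-effects point concrete, here is a minimal sketch of a standard Wright's-law experience curve, where unit cost falls by a fixed fraction with each doubling of cumulative production. The 20% learning rate and $10,000 first-unit cost are illustrative assumptions, not figures from this discussion.

```python
import math

def unit_cost(cumulative_units, first_unit_cost=10_000.0, learning_rate=0.20):
    """Cost of the nth unit produced under a Wright's-law experience curve.

    learning_rate=0.20 means cost falls 20% with each doubling of
    cumulative production (an illustrative assumption).
    """
    b = math.log2(1 - learning_rate)  # progress exponent (negative)
    return first_unit_cost * cumulative_units ** b

for n in (1, 10, 100, 1_000, 10_000):
    print(f"unit {n:>6}: ${unit_cost(n):,.0f}")
```

Under these placeholder numbers, ramping cumulative production from 1 to 10,000 units cuts unit cost by more than an order of magnitude, which is the sense in which a planned build-out need not make chips more expensive.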
Thanks! Yeah, it might have been a bad idea to take general chip cost decreases as super relevant for specialized AI chips' cost efficiency. I read Carey's estimates for cost decreases as applying to AI chips, when upon closer inspection he was referring to general chips. Probably we'll see faster gains in AI chips' cost efficiency for a while as the low-hanging fruit is picked.
My point was something like, "Development costs to make AI chips will largely be borne by leading AI companies. If this is right, then they won't be able to take advantage of cheaper, better chips in the same way that consumers have with Moore's Law, i.e. passively benefiting from the results without investing their own capital into R&D." I didn't mean for it to sound like I was focusing on chip production capacity; I think cost efficiency is the key metric.
But I don't have a sense of how much money will be spent on development costs for a certain increase in chips' cost efficiency. It might be that early on, unit costs swamp development costs.
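As a toy illustration of when unit costs swamp development costs, the sketch below amortizes a one-off development cost over different order sizes. Every figure is a made-up placeholder, not an estimate of actual chip economics.

```python
# Toy arithmetic: whether development costs or unit costs dominate depends
# on how many chips the R&D spend is amortized over.
development_cost = 500e6   # hypothetical one-off cost to develop the chip
unit_cost = 5_000.0        # hypothetical marginal cost per chip

for n_chips in (10_000, 100_000, 1_000_000):
    total = development_cost + unit_cost * n_chips
    dev_share = development_cost / total
    print(f"{n_chips:>9,} chips: development is {dev_share:.0%} of total spend")
```

With these placeholder numbers, development costs dominate at small volumes but shrink to a small share of total spending once purchases reach the millions of chips.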
Frankly, I'm starting to think that my ideas about development costs may not be accurate. It looks like traditional chip companies are entering the AI chip business in force, although they could be 10% of the market or 90% for all I know. That could change how much compute leading AI firms could afford to buy. This, coupled with the aforementioned difference in cost efficiency rates between general chips and AI chips, means I may have underestimated future increases in the cost efficiency of AI chips.