This is a rigorous and well-structured argument, and I find the revenue growth framing particularly compelling: it is the least theoretically laden of the three empirical anchors you present, and arguably the hardest to dismiss.

I want to add a perspective that I think is largely absent from timeline discussions: what these timelines mean when you’re not in San Francisco, London, or Beijing.

I’m based in Abidjan, Côte d’Ivoire. I work in governance and program management, and I’ve spent the last few years watching how technology (including much more mundane technology than AGI) lands in contexts where infrastructure is fragile, institutions are under-resourced, and regulatory capacity is almost nonexistent. What I observe is a consistent pattern: the capability arrives long before the governance does. And the communities that bear the consequences of that gap are rarely the ones who were part of the conversation about whether to deploy.

Your point about METR’s benchmarks not generalizing to “messier, open-ended tasks” resonates strongly from where I sit. In Côte d’Ivoire, almost every consequential task is messy and open-ended. Agricultural supply chains, local health delivery, land tenure disputes, budget transparency: these are exactly the domains where AI is most likely to be deployed next, and least likely to perform as cleanly as benchmarks suggest. The failure modes in these contexts are not theoretical.

This leads me to a concern that I think deserves more attention in timeline discussions: the question is not only when transformative AI arrives, but who governs its deployment in the interim. The revenue growth you cite is overwhelmingly concentrated in a handful of countries. The regulatory frameworks being built right now, in the EU, the US, and the UK, are being built without meaningful input from the regions most likely to be on the receiving end of AI deployment decisions made elsewhere.

Whether timelines are short or long, that governance gap is already open. And closing it requires starting now, not after we’ve resolved the empirical debate about 2035 versus 2052.

I’d be curious whether others in this community are thinking seriously about what EA-aligned AI governance work looks like when it’s designed for and by the Global South, rather than exported to it.
Hi Kouadio. Just want to let you know that your comments don’t have paragraph breaks between the paragraphs. Maybe you are copying and pasting from another app and the formatting is getting messed up? I’m just saying this because the text looks like it’s all in one big block and that makes it harder to read. I want to make sure you get a fair shot at saying what you want to say, and fixing this formatting issue will make people more likely to read your comments.