This is a thoughtful post, so it’s unfortunate it hasn’t gotten much engagement here. Do you have cruxes around the extent to which centralization is favorable or feasible? It seems like small models that could be run on a phone or laptop (~50GB) are becoming quite capable, and decentralized training runs work for 10-billion-parameter models, which are close to that size range. I don’t know its exact size, but Gemini Flash 2.0 seems much better than I would have expected a model of that size to be in 2024.
I’m guessing that open weight models won’t matter that much in the grand scheme of things—largely because once models start having capabilities which the government doesn’t want bad actors to have, companies will be required to make sure bad actors don’t get access to models (which includes not making the weights available to download). Also, the compute needed to train frontier models and the associated costs are increasing exponentially, meaning there will be fewer and fewer actors willing to spend money to make models they don’t profit from.
So it seems like you’re saying there are at least two conditions: 1) someone with enough resources would have to want to release a frontier model with open weights—maybe Meta, or a very large coalition of the open-source community if distributed training continues to scale; 2) it would need enough dangerous-capability mitigations, like unlearning, tamper-resistant weights, or cloud inference monitoring, or it would need to be far enough behind the frontier that governments don’t try to stop it. Does that seem right? What do you think is the likely price range for AGI?
I’m not sure the government is moving fast enough, or is interested in locking down the labs too much, given that it might slow them down more than it increases their lead—or because it doesn’t fully buy into risk arguments for now. I’m not sure what the key factors to watch here are. I expected reasoning systems next year, but open-weight ones around o1-preview level were released this year, just a few weeks after o1, indicating that multiple parties are pursuing similar lines of AI research somewhat independently.
Yup, those conditions seem roughly right. I’d guess the cost to train will be somewhere between $30B and $3T. I’d also guess the government will be very willing to get involved once AI becomes a major consideration for national security (and there exist convincing demonstrations, or common knowledge, that this is true).