So the terms of art here are “training” versus “inference”. I don’t have a single reference or guide to point to (because the relative size of the two is not something most people think about, versus the absolute size of each individually), but if you google those terms and scroll through some papers or posts I think you will see some clear examples.
Just LARPing here. I don’t really know anything about AI or machine learning.
I guess in some deeper sense you are right and (my simulated version of) what Holden has written is imprecise.
We don’t really see many “continuously” updating models where training continues live with use. So the mundane pattern we see today, where inference (simply running the trained model’s forward pass, often on silicon built specifically for inference) is much cheaper than training, may not apply, for some reason, to the pattern an out-of-control AI uses.
It’s not impossible that a system that needs to be self-improving has to keep provisioning a large fraction of its training cost, or something like that, continually.
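To make the “inference is much cheaper than training” intuition concrete, here is a rough back-of-envelope sketch using the standard dense-transformer approximations (roughly 6·N·D FLOPs for a training run over D tokens, and roughly 2·N FLOPs per generated token at inference). The parameter count and token count below are made-up illustrative values, not figures from any particular model.

```python
# Back-of-envelope comparison of training vs. inference compute,
# using standard dense-transformer heuristics (illustrative only):
#   training  ~ 6 * N * D   FLOPs (forward + backward over D tokens)
#   inference ~ 2 * N       FLOPs per generated token (forward only)

N = 70e9    # assumed parameter count (hypothetical)
D = 1.4e12  # assumed number of training tokens (hypothetical)

training_flops = 6 * N * D
inference_flops_per_token = 2 * N

# How many tokens of inference cost as much as one full training run?
tokens_to_match_training = training_flops / inference_flops_per_token
print(f"Training run: {training_flops:.2e} FLOPs")
print(f"Inference:    {inference_flops_per_token:.2e} FLOPs/token")
print(f"One training run ~= inference over {tokens_to_match_training:.2e} tokens")

# If the system also kept training on the tokens it processes
# ("continual" updates), each token would cost roughly a forward
# plus a backward pass (~6 * N), i.e. about 3x plain inference.
continual_flops_per_token = 6 * N
ratio = continual_flops_per_token / inference_flops_per_token
print(f"Continual update: ~{ratio:.0f}x inference cost per token")
```

Under these assumptions a full training run costs as much as running inference over about 3·D tokens, which is why inference looks cheap in comparison; but if the system had to train continually on what it processes, each token would cost a few times more than plain inference.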
It’s not really clear what the “shape” of this relative cost curve would be, or whether it would only hold for a short period of time, and none of that makes the scenario any less dangerous.
Yes, the last sentence is exactly correct.