the amount of expected serial time a successful (let’s say $10 billion) AI startup is likely to counterfactually burn. In the post I claimed that this seems unlikely to be more than a few weeks. Would you agree with this?
No, see my comment above. It’s the difference between a super-duper AGI and a merely super-human AGI, which could be years or months (but very, very critical months!). Add to that whatever you contribute to the hype. And worlds where you somehow make $10 billion from this are also worlds where you’ve had an inordinate impact, which makes me more suspicious that the $10-billion-company world is one where someone decided to just make the company another AGI lab.
the relative value of serial time to money (which is exchangeable with parallel time). If you agree with the first statement, would you trade $10 billion for 3 weeks of serial time at the current margin?
Definitely not! Alignment is currently talent- and time-constrained, and very much not funding-constrained. I don’t even know what we’d buy that’d be worth $10 billion. Maybe some people have good ideas. Perhaps we could buy lots of compute? But we can already buy lots of compute. I don’t know why we aren’t, but I doubt it’s because we can’t afford it.
Maybe I’d trade a day for $10 billion? I don’t think I’d trade 2 days for $20 billion, though. Maybe I’m just not imaginative enough. Any ideas yourself?
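(Making the implicit math explicit, as a rough sketch: write $U(m)$ for the value of $m$ dollars to alignment and $V(t)$ for the value of $t$ days of serial time; these are hypothetical labels, not anything from the exchange itself. The two answers above say

$$U(\$10\text{B}) \gtrsim V(1\ \text{day}) \quad \text{and} \quad U(\$20\text{B}) < V(2\ \text{days}).$$

If $U$ and $V$ were both linear, the first inequality would double into $U(\$20\text{B}) \geq V(2\ \text{days})$, contradicting the second. So holding both judgments at once amounts to saying money has sharply diminishing returns for alignment while serial time does not.)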
If you would not trade $10 billion for 3 weeks, that could be because:
I’m more optimistic about empirical research, i.e. I think the time spent iterating at the end, when we have the systems, is significantly more important than the time now, when we can only try to reason about them; or
you think money will be much less useful than I expect it to be.
Didn’t see the second part there. I wouldn’t trade $10 billion, but I do think empirical research is good. It just seems like we can already afford a bunch of the stuff we want, and I expect we will continue to get lots of money without needing to sacrifice 3 weeks.
I also think people are generally bad consequentialists on questions like these. There is an obvious loss and a speculative gain. The speculative gain looks very shiny, because you make lots of money and end up doing something cool. The obvious loss does not seem very important, because it’s not immediately world-destroying, and it’s somewhat boring.