You probably agree with me that (a) we can’t know whether it will rain on 2/10/2050 and (b) we can be pretty sure there will be a solar eclipse on 7/22/2028. You are actively participating in a prediction market, so you seem to believe we have some ability to forecast the future better than a magic 8-ball.
Where do you think the limits are to what kinds of things we can make useful predictions about, and how confident those predictions can be?
Thanks for this great writeup; this seems like a topic that deserves more discussion.
“Coming into physical contact with grabby aliens within the next, say, 1000 years is very unlikely. The reason for this is that grabby aliens have existed, on average, for many millions of years, and thus, the only way we will encounter them physically any time soon is if we happened to right now be on the exact outer edge of their current sphere of colonization, which seems implausible.”
Or perhaps encounters with grabby aliens are so dangerous that they leave no conscious survivors, in which case we would need to be careful about using the lack of encounters in our past as evidence about their likelihood or frequency in the future.
Great post!
I broadly agree with the point that compute could be scaled up significantly, but I want to add a few notes about the claim that $10B buys 100x the compute of GPT-4.
Altman said “more” when asked if GPT-4 had cost $100M to train. We don’t know how much more, but PaLM seems to have cost only $9M-$23M, so $100M is probably a reasonable order-of-magnitude estimate.
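For concreteness, here’s the arithmetic behind the 100x figure as a minimal sketch. The ~$100M GPT-4 cost and the assumption that compute purchased scales linearly with dollars spent are both assumptions, not established facts:

```python
# Back-of-envelope: how much more compute than GPT-4 does $10B buy?
# ASSUMPTIONS: GPT-4 cost roughly $100M to train (Altman only said
# "more" than $100M), and compute scales linearly with dollars spent.

gpt4_training_cost = 100e6  # ~$100M (assumed order of magnitude)
budget = 10e9               # hypothetical $10B spend

compute_multiple = budget / gpt4_training_cost
print(f"~{compute_multiple:.0f}x GPT-4's training compute")  # ~100x
```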
If OpenAI were buying up 100x the compute of GPT-4, that might be a big enough spike in demand for GPUs to make them more expensive. I’m pretty uncertain about what to expect there, but I estimated that PaLM used the equivalent of 0.01% of the world’s current GPU/TPU computing capacity for 2 months. GPT-4 seems to be bigger than PaLM, so 100x its compute might be the equivalent of more than 1% of the world’s existing GPU/TPU computing capacity for 2 months.
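To make that estimate concrete, here’s a minimal sketch of the same back-of-envelope calculation. The 0.01% PaLM figure is my earlier estimate, and treating GPT-4’s training compute as at least PaLM’s is an assumption:

```python
# Back-of-envelope: share of world GPU/TPU capacity a 100x-GPT-4
# training run would occupy for ~2 months.
# ASSUMPTIONS: PaLM used ~0.01% of world GPU/TPU capacity for 2 months
# (my earlier estimate), and GPT-4's training compute >= PaLM's.

palm_share = 0.0001            # PaLM ~= 0.01% of world capacity
gpt4_share_lower = palm_share  # lower bound, since GPT-4 >= PaLM
scaled_share = 100 * gpt4_share_lower

print(f"100x GPT-4 run: >= {scaled_share:.0%} of world GPU/TPU "
      f"capacity for 2 months")  # -> >= 1%
```

That lower bound of 1% is why a demand spike, and therefore higher GPU prices, seems plausible to me.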