If you manage to convince an investor that timelines are very short without simultaneously convincing them to care a lot about x-risk, I feel like their immediate response will be to rush to invest briefcases full of cash into the AI race, thus helping make timelines shorter and more dangerous.
I'm the corresponding author of a paper that Holly is maybe subtweeting, and I was worried about this before publication, but I don't really feel like those fears were realized.
Firstly, I don't think there are actually very many people who sincerely think that timelines are short but aren't scared by that. I think what you are referring to is people who think "timelines are short" means something like "AI companies will 100x their revenue in the next five years", not "AI companies will be capable of instituting a global totalitarian state in the next five years." There are some people who believe the latter and aren't bothered by it, but in my experience they are pretty rare.
Secondly, when VCs get the "AI companies will 100x their revenue in the next five years" version of short timelines, they seem to want to invest in LLM-wrapper startups, which makes sense because almost all VC firms lack the AUM to invest in the big labs.[1] I think there are plausible ways in which this makes timelines shorter and more dangerous, but it seems notably different from investing in the big labs.[2]
Overall, my experience has mostly been that getting people to take short timelines seriously is very close to synonymous with getting them to care about AI risk.
[1] Caveat that ~everyone has the AUM to invest in publicly traded stocks. I didn't notice any bounce in share price for e.g. NVDA when we published and would be kind of surprised if there was a meaningful effect, but hard to say.
[2] Of course, there's probably some selection bias in terms of who reaches out to me. Masayoshi Son probably feels like he has better info than what I could publish, but by that same token me publishing stuff doesn't cause much harm.