I am having trouble understanding why AI safety people are even trying to convince the general public that timelines are short.
If you manage to convince an investor that timelines are very short without simultaneously convincing them to care a lot about x-risk, I feel like their immediate response will be to rush to invest briefcases full of cash into the AI race, thus helping make timelines shorter and more dangerous.
Also, if you make a bold prediction about short timelines and turn out to be wrong, won’t people stop taking you seriously the next time around?
I’m the corresponding author for a paper that Holly is maybe subtweeting. I was worried about this before publication, but I don’t really feel like those fears were realized.
Firstly, I don’t think there are actually very many people who sincerely think that timelines are short but aren’t scared by that. I think what you are referring to is people who think “timelines are short” means something like “AI companies will 100x their revenue in the next five years”, not “AI companies will be capable of instituting a global totalitarian state in the next five years.” There are some people who believe the latter and aren’t bothered by it, but in my experience they are pretty rare.
Secondly, when VCs get the “AI companies will 100x their revenue in the next five years” version of short timelines, they seem to want to invest in LLM-wrapper startups, which makes sense because almost all VC firms lack the AUM to invest in the big labs.[1] I think there are plausible ways in which this makes timelines shorter and more dangerous, but it seems notably different from investing in the big labs.[2]
Overall, my experience has mostly been that getting people to take short timelines seriously is very close to synonymous with getting them to care about AI risk.
[1] Caveat that ~everyone has the AUM to invest in publicly traded stocks. I didn’t notice any bounce in the share price of e.g. NVDA when we published, and I would be kind of surprised if there was a meaningful effect, but it’s hard to say.
[2] Of course, there’s probably some selection bias in terms of who reaches out to me. Masayoshi Son probably feels like he has better info than what I could publish, but by the same token, my publishing stuff doesn’t cause much harm.
Yes, I agree. I think what we need to spend our effort on is convincing people that AI development is dangerous and needs to be handled very cautiously, if at all, not that superintelligence is imminent and there’s NO TIME. I don’t think the exact level of urgency or the exact level of risk matters much once it’s past something like p(doom) = 5%. The thing we need to convince people of is how to handle the risk.
A lot of AI Safety messages expect the audience to fill in most of the interpretive details—“As you can see, this forecast is very well-researched. ASI is coming. You take it from here.”—when actually what they need to know is what those claims mean for them and what they can do.
I think this is an important tension that’s been felt for a while; I believe there’s been discussion of this going back at least 10 years. For a while, few people were “allowed”[1] to publicly promote AI safety issues, because it was so easy to mess things up.
I’d flag that there isn’t much work actively marketing the claim that timelines are short. There’s research here, but generally EAs aren’t excited to heavily market that research broadly. I think there’s a tricky line between “doing useful research in ways that are transparent” and “not raising alarm in ways that could be damaging.”
That said, there is some marketing of AI-safety-focused content generally; see, for example, Robert Miles or Rational Animations.
[1] As in, if someone wanted to host a big event on AI safety, and they weren’t close to (and respected by) the MIRI cluster, they were often discouraged from this.
Those are reasonable points, but I’m not sure they’re enough to overcome the generally reasonable heuristic that dramatic events go better if the people involved anticipate them and have had a chance to think about them and plan responses beforehand, than if the events take them by surprise.
I don’t think she’s saying that people shouldn’t think ahead and plan responses; I think it’s more that endless navel-gazing about timelines and rapidly shifting responses isn’t the most useful response.