Presumably there are at least some people who have long timelines, but also believe in high risk and don’t want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but even a very low X-risk is very bad. (By very low, I mean something like at least 1 in 1,000, not 1 in 10^17 or anything like that. I agree it is probably bad to use expected-value reasoning with probabilities as low as that.)
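To make that contrast concrete, here is a rough expected-value sketch (purely illustrative; the 8 billion figure is just the current world population, not an estimate of what's at stake):

```latex
% Expected lives lost if extinction occurs with probability p:
\mathbb{E}[\text{lives lost}] = p \times 8 \times 10^{9}
% p = 10^{-3}:  expected loss is 8 \times 10^{6} lives — clearly action-relevant.
% p = 10^{-17}: expected loss is 8 \times 10^{-8} lives — vanishingly small,
% the regime where naive expected-value reasoning starts to look like a Pascal's mugging.
```

That's the asymmetry I mean: at 1 in 1,000 the expected loss is millions of lives, so ordinary cost-benefit reasoning applies; at 1 in 10^17 it is a tiny fraction of one life, which is where relying on expected value becomes suspect.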
I think you are pointing at a real tension though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough and raised enough by acceleration that acceleration is bad. It’s hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don’t have to be raging dogmatists to worry about this happening again, and it’s reasonable for them to balance this risk against risks of echo chambers when hiring people or funding projects.
*I’m less sure that merely catastrophic biorisk from human misuse is low, sadly.