I’m sympathetic to this style of approach. I attempted a similar “x-risk is a thing” pitch here.
Two wrinkles with it:
Many people, when asked, say they think x-risk is likely. I agree it’s not clear whether they ‘really believe’ this, but just saying “x-risk is 1%” might not sound very persuasive if they already say it’s higher than that.
It’s not clear that AI safety and global catastrophic biological risks (GCBRs) are the top priorities if you don’t put significant weight on future generations, given the diminishing returns that are likely over the next 100 years.
Both points make me think it’s important to bring in longtermism at some point, though it doesn’t need to be the opening gambit.
If I were to write my article again, I’d try to mention pandemics earlier on, and I’d be more cautious about the ‘most people think x-risk is low’ claim.
One other thing to play with: you could experiment with going even more directly for ‘x-risk is a thing’ and dropping the lead-in section on leverage. With AI, what I’ve been playing with is opening with Katja’s survey results: “even the people developing AGI say they think it has a 10% chance of ending up with an extremely bad outcome (e.g. extinction).” Then you could try to establish that AGI is likely to arrive in our lifetimes with bio anchors: “if you just extrapolate current trends forward, it’s likely we’ll have ML models bigger than human brains within our lifetimes” (a rough sketch of that extrapolation follows below).
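To make the bio anchors point concrete, here’s a back-of-the-envelope sketch. All the numbers are my own rough assumptions (GPT-3’s 2020 parameter count, a low-end ~1e14 synapse estimate for the human brain, and an assumed ~10x/year growth in frontier model size), not figures from Katja’s survey or the bio anchors report, so treat it as illustrative only:

```python
import math

# Rough, assumed inputs -- illustrative only, not from the bio anchors report.
params_2020 = 175e9       # GPT-3's parameter count (2020)
brain_synapses = 1e14     # low-end estimate of synapses in a human brain
growth_per_year = 10.0    # assumed ~10x/year growth in frontier model size

# Years until the largest models match the brain's synapse count,
# if the assumed exponential trend simply continues.
years = math.log(brain_synapses / params_2020) / math.log(growth_per_year)
print(f"Brain-scale models in roughly {years:.1f} years under these assumptions")
```

Even a much more conservative assumed growth rate (say 2x/year) puts the crossover at roughly a decade, which still lands well within a lifetime, and that’s the only thing the pitch needs to establish.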