there’s a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people
This is not clear to me, and my impression is that most AI safety people would disagree with this statement as well, considering the high generality of AI capabilities.
Why does the high generality of AI capabilities imply that a similar level of capabilities produces 10% cumulative GDP growth and extinction?
Current LLMs already have some level of biological capabilities and a near-zero contribution to cumulative GDP growth. The assertion that “there’s a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people” seems to imply a belief that biological capabilities will scale orders of magnitude less than capabilities in every other field required to contribute to GDP, and I see absolutely no evidence for that belief.
I think this is comparing apples and oranges: biological capabilities on benchmarks (which, AFAIK, are not that helpful in real-world lab settings yet) versus actual economic impact. The question is whether real-world bio capabilities will outstrip real-world broad economic capabilities.
It’s certainly possible that an AI will trigger a biorisk if-then commitment before it has general capabilities sufficient for 10% cumulative GDP growth. But I would be pretty surprised if we get a system so helpful that it could counterfactually enable laypeople to dramatically surpass the current state of the art in the specific domain of bio-offense without having previously gotten systems that are pretty helpful at counterfactually enabling professionals to do their jobs somewhat better and automate some routine tasks. I think your claim implies something like: as AI automates things, it will hit “making a bioweapon that ends the world, which no one can currently do” before it hits “the easiest ~15% of the stuff we already do, weighted by market value” (assuming labor is ~2/3 of GDP, automating roughly that ~15% is what it takes to add ~10% to GDP). This seems unlikely, especially since making bioweapons involves a bunch of physical processes where AIs seem likely to struggle mightily for a while, though, again, I concede it’s not impossible.
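For concreteness, here is the rough arithmetic behind that “~15%” figure, as a minimal sketch. The ~2/3 labor share and the one-for-one pass-through from automated labor value to GDP are simplifying assumptions for illustration, not claims made in the original exchange.

```python
# Minimal sketch of the implied arithmetic (assumptions, not claims from the exchange):
#   - labor compensation is roughly 2/3 of GDP
#   - automating a share of labor tasks adds their market value to GDP one-for-one

labor_share_of_gdp = 2 / 3        # assumed labor share
target_cumulative_growth = 0.10   # the ~10% cumulative GDP growth threshold

# Share of existing labor tasks (weighted by market value) that AI would need to
# automate to account for ~10% of GDP under these assumptions:
required_task_share = target_cumulative_growth / labor_share_of_gdp
print(f"{required_task_share:.0%}")  # -> 15%
```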
In terms of whether “most AI safety people” believe this, consider that the great takeoff speeds debate was operationalized in terms of whether AI would produce 100% cumulative growth over four years before it produced 100% growth in a single year. To the extent that this debate loosely tracked a broader debate within the community, it suggests a large constituency for the view that we will see much more than 10% cumulative growth before AI becomes existentially scary.