It feels like there’s an obvious trade between the EA worldview on AI and Thiel’s, where the strategy is “laissez-faire for the kinds of AI that cause late-90s-internet-scale effects (~10% cumulative GDP growth), aggressive regulation for the kinds of AI that inspire the ‘apocalyptic fears’ he agrees should be taken seriously, and pre-deployment evaluations of whether a given frontier AI poses those risks, so you know which of the two you’re dealing with.”
Indeed, this is pretty much the “if-then” policy structure Holden proposes here, seemingly with the combination of skepticism of capabilities and distrust of regulation very much in mind.
Obviously the devil (as it were) is in the details. But it feels like there are a bunch of design features that would move in this direction: very little regulation of AI systems that don’t trigger very high capability thresholds (i.e. nothing currently available), paired with low-cost, accurate risk evaluations for specific threat models like very powerful scheming, self-improvement, and bioterrorism uplift (see the rough sketch of that if-then structure below). Idk, maybe I’m failing the ideological Turing test here and Thiel would say this is already a nanny-state proposal or would lapse into totalitarianism, but like, there’s a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people. It really feels like there’s some governance structure that allows/promotes the former and regulates the latter.
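To make the shape of that if-then structure concrete, here’s a minimal sketch in code. It isn’t Holden’s proposal or any real regulatory framework; the eval names, thresholds, and tiers are hypothetical placeholders, just to show where the trigger sits:

```python
# Hypothetical sketch of an "if-then" capability-threshold policy.
# The evals, thresholds, and tiers below are illustrative placeholders,
# not a real regulatory framework or anyone's concrete proposal.

from dataclasses import dataclass


@dataclass
class EvalResult:
    """Pre-deployment eval scores for one frontier model, each scaled 0.0-1.0."""
    scheming: float          # evidence of deceptive, goal-directed behavior
    self_improvement: float  # capability at autonomous AI R&D
    bio_uplift: float        # counterfactual uplift to bioweapons development


# Hypothetical trigger thresholds for each threat model.
THRESHOLDS = {
    "scheming": 0.5,
    "self_improvement": 0.5,
    "bio_uplift": 0.3,
}


def policy_tier(result: EvalResult) -> str:
    """Map eval results to a governance tier under the if-then structure."""
    triggered = [
        name for name, threshold in THRESHOLDS.items()
        if getattr(result, name) >= threshold
    ]
    if triggered:
        # "Then": aggressive regulation, but only for models that cross a threshold.
        return "restricted pending safety case (triggered: " + ", ".join(triggered) + ")"
    # "If not": laissez-faire for everything below the thresholds,
    # which on this proposal covers all currently available systems.
    return "unrestricted deployment"


# A model well below every threshold stays unregulated.
print(policy_tier(EvalResult(scheming=0.05, self_improvement=0.10, bio_uplift=0.10)))
# A model that crosses the bio threshold gets the restrictive treatment.
print(policy_tier(EvalResult(scheming=0.05, self_improvement=0.10, bio_uplift=0.60)))
```

The design point is that the trigger hangs off specific threat-model evaluations rather than off general capability or economic usefulness, so everything below the thresholds (including everything deployed today) stays in the laissez-faire bucket.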
This claimed gulf is not clear to me, and my impression is that most AI safety people would disagree with the statement as well, considering the high generality of AI capabilities.
Why does the high generality of AI capabilities imply that the same level of capability produces both 10% cumulative GDP growth and extinction?
Current LLMs already have some level of biological capabilities and have made a near-zero contribution to cumulative GDP growth. The assertion that “there’s a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people” seems to imply that biological capabilities will scale orders of magnitude less than the capabilities in every other field required to contribute to GDP, and I see absolutely no evidence for believing that.
I think this is comparing apples and oranges: biological capabilities on benchmarks (which AFAIK aren’t that helpful in real-world lab settings yet) versus actual economic impact. The question is whether real-world bio capabilities will outstrip real-world broad economic capabilities.
It’s certainly possible that an AI will trigger a biorisk if-then commitment before it has general capabilities sufficient for 10% cumulative GDP growth. But I would be pretty surprised if we get a system so helpful that it could counterfactually enable laypeople to dramatically surpass the current state of the art in the specific domain of bio-offense before we get systems that are pretty helpful at counterfactually enabling professionals to do their jobs somewhat better and automating some routine tasks. I think your claim implies something like: as AI automates things, it will hit “making a bioweapon that ends the world, which no one can currently do” before it hits “the easiest ~15% of the stuff we already do, weighted by market value” (assuming labor is ~2/3 of GDP). This seems unlikely, especially since making bioweapons involves a bunch of physical processes where AIs seem likely to struggle mightily for a while, though again I concede it’s not impossible.
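(To spell out the arithmetic behind that ~15% figure, under the stated assumption that labor is roughly two-thirds of GDP:

$$0.15 \times \tfrac{2}{3} \approx 0.10,$$

i.e. automating the easiest ~15% of current labor, weighted by market value, corresponds to roughly the ~10% cumulative GDP growth at issue.)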
As for whether “most AI safety people” believe this, consider that the great takeoff speeds debate was operationalized in terms of whether AI would produce cumulative growth of 100% (a doubling of output) over some four-year period before it produced 100% growth in a single year. To the extent that this debate loosely tracked a debate within the community more broadly, it seems to imply a large constituency for the view that we will see much more than 10% cumulative growth before AI becomes existentially scary.