whatever benefits AI might bring in the future will still be available in a century, or a millennium, as long as humanity survives. That tree full of golden apples will still be there for the plucking.
In the Foundation series, I believe Isaac Asimov expressed the counterargument to this quite well: ||It is fine to take the conservative route if we are alone in the universe. If we are not alone in the universe, then we are in an existential race and just haven’t met the other racers yet.||
I agree that ‘AI alignment’ is probably impossible, for the reasons you described, plus many others.
The main downside is that current generations might not get some of the benefits of early AI development.
How do you reconcile these two points? If the chance of alignment is epsilon, and deceleration results in significant unnecessary deaths and suffering in the very near future, it seems like you would have to place essentially zero discount on future utility to choose deceleration.
Humans have many other ways to stigmatize, cancel, demonize, and ostracize behaviors that they perceive as risky and evil.
I think this is a good/valid point. However, I weakly believe that this sort of cultural stigmatization takes a very long time to build up to the levels necessary for meaningfully slowing AI research and I don’t think we have the time to do that. I suspect a weak stigma (one that isn’t shared by society as a whole) is more likely to just lead to conflict and bloodshed than to actually stopping advancement in the way we would need it to.
Micah—it’s true that if we’re not alone in the universe, slower AI development might put us at marginally higher risk from aliens. However, if the aliens haven’t shown up in the last 540 million years since the Cambrian explosion, they’re not likely to show up in the next few centuries. On the other hand, if they’re quietly watching and waiting, the development of advanced AI might trigger them to intervene suddenly, since we’ll have become a more formidable threat. Maybe best to keep a low profile for the moment, in terms of AI development, until we can work more on both AI safety and astrobiology.
Re. reconciling those two points, I do have pretty close to a zero discount on future utility. AI alignment might be impossible, or it might just be really, really hard. Harder than reaping some near-term benefits of AI (e.g. help with longevity research), but those benefits could come at serious long-term risk. The longer we think about alignment, the more likely we are to either get closer to alignment, or to deciding that it’s really not possible, and we should permanently ban AI using whatever strategies are most effective.
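To make that trade-off concrete, here’s a toy expected-utility comparison. Every number in it (the alignment probability, the size of the near-term benefit, the time horizon) is made up purely for illustration; the only point is to show how the accelerate/decelerate decision flips as the discount rate approaches zero.

```python
# Toy model of the accelerate-vs-decelerate decision as a function of the
# annual discount rate. All parameter values are invented for illustration.

def discounted_future_value(annual_value, discount_rate, horizon_years):
    """Total value of receiving annual_value each year, discounted per year."""
    return sum(annual_value / (1 + discount_rate) ** t
               for t in range(1, horizon_years + 1))

def compare(p_alignment, near_term_benefit, future_value):
    """Accelerate: grab the near-term benefit, but keep the long-term future
    only if alignment works out (probability p_alignment).
    Decelerate: forgo the near-term benefit, keep the long-term future."""
    accelerate = near_term_benefit + p_alignment * future_value
    decelerate = future_value
    return accelerate, decelerate

for rate in (0.05, 0.02, 0.005, 0.0005):
    future = discounted_future_value(annual_value=1.0,
                                     discount_rate=rate,
                                     horizon_years=100_000)
    accel, decel = compare(p_alignment=0.01,        # "epsilon"
                           near_term_benefit=50.0,  # decades of near-term gains
                           future_value=future)
    choice = "accelerate" if accel > decel else "decelerate"
    print(f"discount rate {rate:.4f}: accelerate={accel:9.1f}  "
          f"decelerate={decel:9.1f}  -> {choice}")
```

With a discount rate of a few percent per year, the near-term benefit dominates and acceleration wins; push the rate toward zero and the value of the long-term future swamps it, which is why someone with a near-zero discount chooses deceleration even when the alignment probability is tiny.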
Re. stigmatizing AI, my sense is that it can be much faster for people to develop moral outrage about an emerging issue than it is to pass effective regulation about that issue. And, passing effective regulation often requires moral outrage as a prerequisite. For example, within a few years of Bitcoin’s invention, traditional finance, mainstream media, and governments based on fiat currency had successfully coordinated to demonize crypto—and it remains a marginal part of the economy. Or, within a few weeks of the start of the COVID-19 pandemic, people developed moral outrage against anyone walking around in public unmasked. Or, within a few weeks of He Jiankui announcing in late 2018 that he had used CRISPR to genetically modify twin babies, there was a massive global backlash against germ-line genetic engineering of humans. In the social media era, moral outrage travels faster than almost anything else.
I agree that generating outrage can happen pretty quickly. My claim here is that the level of universality required to meaningfully hinder AI development needs to be far higher than in any of the examples you have given, or any I can think of. You need a stigma as strong as something like incest or child molestation. One that is near-universally held and very strongly enforced at the social layer, to the point that it is difficult to find any other humans who will even talk to you about the subject.
With crypto, COVID-19, and CRISPR, there are still very large communities of people who oppose the outraged groups and who continue to make significant progress and gains despite them.
Micah—well, it’s an interesting empirical question how much stigma would be required to slow down large-scale AI development.
In terms of ‘ethical investment’, investors might easily be scared away from investing in tech that is stigmatized, given that it faces radically increased regulatory risk, adverse PR, and might be penalized under ESG standards.
In terms of talent recruitment & retention, stigma could be very powerful in dissuading smart, capable young people from joining an industry that would make them unpopular as friends, unattractive as mates, and embarrassments to their parents and family.
Without money and people, the AI industry would starve and slow down.
Of course, terrorist cells and radical activists might still try to develop and deploy AI, but they’re not likely to make much progress without large-scale institutional support.
I think your reasoning here is sound, but we have what I believe is a strong existence proof that, when there is money to be made, weak stigma doesn’t do much:
Porn.
I think the porn industry fits nicely into your description of a weakly stigmatized industry, yet it is booming and has many smart, talented people working in it despite the stigma.
If we are all correct, AI will be bigger (in terms of money) than the porn industry (which is huge), and I suspect demand for AI will be even higher than demand for porn. People may use VPNs and private browsers when using AIs, but I don’t think that will stop them.
Micah—that’s a fascinating comparison actually. I’ll have to think about it further.
My first reaction is, well, porn’s a huge industry overall. But it’s incredibly decentralized among a lot of very small-scale producers (down to the level of individual OnlyFans producers). The capital and talent required to make porn videos seems relatively modest: a couple of performers, a crew of 2-3 people with some basic A/V training, a few thousand dollars of equipment (camera, sound, lights), a rental property for a day, and some basic video editing services. By contrast, the capital and talent required to make or modify an AI seems substantially higher. (Epistemic status: I know about the porn industry mostly from teaching human sexuality classes for 20 years, and lecturing about the academic psychology research concerning it; I’m not an expert on its economics.)
If porn were more like AI, and required significant investment capital (e.g. tens of millions of dollars rather than tens of thousands), if it required recruiting and managing several smart and skilled developers, if it required access to cloud computing resources, and if it required long-term commercial property rental, it seems like there would be a lot more chokepoints where moral stigmatization could slow down AI progress.
But it’s certainly worth doing some compare-and-contrast studies of morally stigmatized industries (which might include porn, sex work, guns, gambling, drugs, etc).
Cybercrime probably has somewhat higher barriers to entry than porn (although less than creating an AGI) and arguably higher levels of stigma. It doesn’t take as much skill as it used to, but still needs skilled actors at the higher levels of complexity. Yet it flourishes in many jurisdictions, including with the acquiescence (if not outright support) of nation-states. So that might be another “industry” to consider.
Jason—yes, that’s another good example. I suspect there will also be quite a bit of overlap between cybercrime and advanced AI (esp. for ‘social engineering’ attacks) in the coming years. Just as crypto’s (media-exaggerated) association with cybercrime in the early 2010s led to increased stigma against crypto, any association between advanced AI and cybercrime might increase stigma against AI.
I believe PornHub is a bigger company than most of today’s AI companies (~150 employees, about half of them software engineers, according to Glassdoor)? If Brave AI is to be believed, they have $100B in annual revenue and handle 15TB of uploads per day.
If this is the benchmark for how big an AI company can get in a world where AI research is stigmatized, then my opinion is that all stigmatization will accomplish is ensuring that the people who are comfortable working in the dark get to decide what gets built. PornHub-sized companies feel big enough to produce AGI.
I agree with you that porn is a very distributed industry overall, and I suspect that is partially because of the stigmatization. However, this has resulted in a rather robust organizational arrangement where individuals work independently and large companies (like PornHub) focus on handling the IT side of things.
In a stigmatized AI future, perhaps individuals all over the world will work independently on different pieces of the AI stack while a small number of big AI companies do the bulk training or coordination. Interestingly, this sort of decentralized approach to building could result in a better AI outcome, because we wouldn’t end up with a small number of very powerful people deciding the trajectory; instead we would have a large number of individuals working independently and in competition with each other.
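As a very rough toy of what that split could look like, here is a federated-averaging-style loop: many independent contributors train on their own data, and the central coordinator’s only job is to average the resulting weights. Everything here is illustrative (a tiny linear model standing in for ‘an AI’); it’s not a claim about how any real lab or training run operates.

```python
import numpy as np

# Toy federated-averaging loop: independent contributors fit a shared model on
# their own (private) data; a central coordinator only averages the weights.
# A small linear-regression model stands in for "an AI" purely for illustration.

rng = np.random.default_rng(0)
true_weights = np.array([2.0, -1.0, 0.5])

def local_update(weights, n_samples=200, lr=0.1, steps=20):
    """One contributor trains the shared model a bit further on private data."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_weights + rng.normal(scale=0.1, size=n_samples)
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n_samples  # least-squares gradient
        w -= lr * grad
    return w

global_weights = np.zeros(3)
for round_num in range(5):
    # Ten independent contributors each start from the current shared weights.
    local_models = [local_update(global_weights) for _ in range(10)]
    # The coordinator's entire job: average the contributions.
    global_weights = np.mean(local_models, axis=0)
    err = np.linalg.norm(global_weights - true_weights)
    print(f"round {round_num}: distance from target weights = {err:.4f}")
```

The point of the sketch is just the division of labor: the centralized step is dumb aggregation, while all of the actual learning happens at the edges, with each contributor free to compete on how well their local piece works.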
I do like your idea about comparing to other stigmatized industries! Gambling and drugs are, of course, other great examples of how absolutely massive industries can grow in the face of weak stigmatization!
Micah—very interesting points. The PornHub example raises something a lot of people seem not to understand very well about the porn industry. PornHub and its associated sites (owned by MindGeek) are ‘content aggregators’ that basically act as free advertising for the porn content produced by independent operators and small production companies—which all make their real money through subscription services. PornHub is a huge aggregator site, but as far as I know, it doesn’t actually produce any content of its own. So it’s quite unlike Netflix in this regard—Netflix spent about $17 billion in 2022 on original content, whereas PornHub spent roughly zero on original content, as far as I can tell.
So, one could imagine ‘AI aggregator sites’ that offer a range of AI services produced by small independent AI developers. These could potentially compete with Big Tech outfits like OpenAI or DeepMind (which would be more analogous to Netflix, in terms of investing large sums in ‘original content’, i.e. original software).
But whether that would increase or decrease AI risk, I’m not sure. My hunch is that the more people and organizations are involved in AI development, the higher the risk that a few bad actors will produce truly dangerous AI systems, whether accidentally or deliberately. But, as you say, a more diverse AI ecosystem could reduce the chance that a few big AI companies acquire and abuse a lot of power.