A few questions:
What is the risk level below which you’d be OK with unpausing AI?
What do you think about the potential benefits from AI?
How do you interpret models of AI pause, such as this one from Chad Jones?
I think an approximately 1 in 10,000 chance of extinction for each new GPT would be acceptable given the benefits of AI. That is roughly my guess for GPT-5, so if we could release that model and then pause, I’d be okay with that.
A major consideration here is the use of AI to mitigate other x-risks. Some of Toby Ord’s x-risk estimates:
AI − 1 in 10
Engineered pandemic − 1 in 30
Unforeseen anthropogenic risks (e.g. dystopian regime, nanotech) − 1 in 30
Other anthropogenic risks − 1 in 50
Nuclear war − 1 in 1000
Climate change − 1 in 1000
Other environmental damage − 1 in 1000
Supervolcano − 1 in 10,000
If there were a concrete plan under which AI could be used to mitigate pandemics and anthropogenic risks, then I would be OK with a higher probability of AI extinction, but it seems more likely that AI progress would increase these risks before it decreased them.
AI could be helpful for mitigating climate change and, eventually, nuclear war, so maybe I should be willing to go a little higher on the risk. But we might need a few more GPTs to fix these problems, and if each new GPT carries a 1 in 10,000 risk of extinction, it starts to even out.
I’m very bullish about the benefits of an aligned AGI. Besides mitigating x-risk, I think curing aging should be a top priority and is worth taking some risks to obtain.
I’ve read the post quickly, but I don’t have a background in economics, so it would take me a while to fully absorb it. My first impression is that it is interesting but not that useful for making decisions right now: the simplifications required by the model offset the gains in rigor. What do you think? Is it something I should take the time to understand?
My guess would be that the discount rate is pretty cruxy. Intuitively I would expect almost any gains over the next 1000 years to be offset by reductions in x-risk since we could have zillions of years to reap the benefits. (On a meta-level I believe moral questions are not “truthy” so this is just according to my vaguely total utilitarian preferences, not some deeper truth).
To me, this is wild. 1⁄10,000 * 8 billion people = 800,000 current lives lost in expectation, not even counting future lives. If you think GPT-5 is worth 800k+ human lives, you must have high expectations. :)
When you’re weighing existential risks (or other things which steer human civilization on a large scale) against each other, effects are always going to be denominated in a very large number of lives. And this is what OP said they were doing: “a major consideration here is the use of AI to mitigate other x-risks”. So I don’t think the headline numbers are very useful here (especially because we could make them far far higher by counting future lives).
Thanks for the comment, Richard.
I used to prefer focussing on tail risk, but I now think expected deaths are a better metric.

Interventions in the effective altruism community are usually assessed under 2 different frameworks: existential risk mitigation and nearterm welfare improvement. It looks like 2 distinct frameworks are needed given the difficulty of comparing nearterm and longterm effects. However, I do not think this is quite the right comparison under a longtermist perspective, where most of the expected value of one’s actions results from influencing the longterm future, and the indirect longterm effects of saving lives outside catastrophes cannot be neglected.

In this case, I believe it is better to use a single framework for assessing interventions that save human lives in catastrophes and in normal times. One way of doing this, which I consider in this post, is supposing the benefits of saving one life are a function of the population size.

Assuming the benefits of saving a life are proportional to the ratio between the initial and final population, and that the cost to save a life does not depend on this ratio, it looks like saving lives in normal times is better for improving the longterm future than doing so in catastrophes.
Thanks for pointing that out, Ted!
The expected death toll would be much greater than 800 k assuming a typical tail distribution. This is the expected death toll linked solely to the maximum severity, but lower levels of severity would add to it. Assuming deaths follow a Pareto distribution with a tail index of 1.60, which characterises war deaths, the minimum deaths would be 25.3 M (= 8*10^9*(10^-4)^(1/1.60)). Consequently, the expected death toll would be 67.6 M (= 1.60/(1.60 − 1)*25.3*10^6), i.e. 1.11 (= 67.6/61) times the number of deaths in 2023, or 111 (= 67.6/0.608) times the number of malaria deaths in 2022. I certainly agree undergoing this risk would be wild.
Side note. I think the tail distribution will eventually decay faster than that of a Pareto distribution, but this makes my point stronger. In this case, the product between the deaths and their probability density would be lower for higher levels of severity, which means the expected deaths linked to such levels would represent a smaller fraction of the overall expected death toll.
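For readers who want to check the arithmetic, here is a minimal Python sketch of the calculation above, assuming (as in the comment) a Pareto distribution with tail index 1.60 and a 10^-4 chance that deaths reach the full population of 8 billion; the variable names are mine, and the figures land within rounding of those quoted above.

```python
# Minimal sketch of the expected-death-toll estimate above.
# Assumptions (from the comment): deaths follow a Pareto distribution with
# tail index alpha = 1.60, and there is a 10^-4 chance that deaths reach
# the whole population of 8 billion.
alpha = 1.60           # tail index characterising war deaths
population = 8e9       # current world population
p_max_severity = 1e-4  # assumed chance of extinction-level deaths

# For a Pareto distribution, P(deaths > x) = (x_min / x)^alpha.
# Solving P(deaths > population) = p_max_severity for x_min:
x_min = population * p_max_severity ** (1 / alpha)

# Mean of a Pareto distribution with alpha > 1: alpha / (alpha - 1) * x_min.
expected_deaths = alpha / (alpha - 1) * x_min

print(f"minimum deaths:  {x_min / 1e6:.1f} M")            # ~25.3 M
print(f"expected deaths: {expected_deaths / 1e6:.1f} M")  # ~67.5 M
```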
Thanks for elaborating, Joseph!
I think Toby’s existential risk estimates are many orders of magnitude higher than warranted. I estimated an annual extinction risk of 5.93*10^-12 for nuclear wars, 2.20*10^-14 for asteroids and comets, 3.38*10^-14 for supervolcanoes, a prior of 6.36*10^-14 for wars, and a prior of 4.35*10^-15 for terrorist attacks. These values are already super low, but I believe existential risk would still be orders of magnitude lower. I think a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, would have only a 0.0513 % (= e^(-10^9/(132*10^6))) chance of being existential. I got my estimate assuming the following (see the sketch after the list):
An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such a catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:
An exponential distribution with a mean of 66 M years describes the time between:
2 consecutive such catastrophes.
i) and ii) if there are no such catastrophes.
Given the above, i) and ii) are equally likely. So the probability of an intelligent sentient species evolving after human extinction in such a catastrophe is 50 % (= 1⁄2).
Consequently, one should expect the time between i) and ii) to be 2 times (= 1⁄0.50) as long as it would be if there were no such catastrophes.
An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
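As a cross-check, here is a minimal Python sketch of the 0.0513 % figure under the assumptions listed above (an exponential re-evolution time with a mean of 132 M years and a 1 billion year window); nothing beyond those assumptions is added.

```python
import math

# Sketch of the probability that a K-Pg-level extinction would be existential,
# under the assumptions above: the time for an intelligent sentient species to
# re-evolve is exponential with mean 132 M years (= 2 * 66 M years), and Earth
# remains habitable for another 1 billion years.
mean_reevolution_time = 2 * 66e6  # years
window = 1e9                      # years before Earth becomes uninhabitable

# For an exponential distribution, P(T > t) = exp(-t / mean).
p_existential = math.exp(-window / mean_reevolution_time)

print(f"{p_existential:.4%}")  # ~0.0513%
```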
Hi Matthew! I’d be curious to hear your thoughts on a couple of questions (happy for you to link if you’ve posted elsewhere):
1/ What is the risk level above which you’d be OK with pausing AI?
2/ Under what conditions would you be happy to attend a protest? (LMK if you have already attended one!)
My loose off-the-cuff response to this question is that I’d be OK with pausing if there was a greater than 1⁄3 chance of doom from AI, with the caveats that:
I don’t think p(doom) is necessarily the relevant quantity. What matters is the relative benefit of pausing vs. unpausing, rather than the absolute level of risk.
“doom” lumps together a bunch of different types of risks, some of which I’m much more OK with compared to others. For example, if humans become a gradually weaker force in the world over time, and then eventually die off in some crazy accident in the far future, that might count as “humans died because of AI” but it’s a lot different than a scenario in which some early AIs overthrow our institutions in a coup and then commit genocide against humans.
I think it would likely be more valuable to pause later, during AI takeoff, rather than before takeoff.
I attended the protest against Meta because I thought their approach to AI safety wasn’t very thoughtful, although I’m still not sure it was a good decision to attend. I’m not sure what would make me happy to attend a protest, but these scenarios might qualify:
A company or government is being extremely careless about deploying systems that pose great risks to the world. (This doesn’t count situations in which the system poses negligible risks but some future system could pose a greater risk.)
The protesters have clear, reasonable demands that I broadly agree with (e.g. they don’t complain much about AI taking people’s jobs, or AI being trained on copyrighted data, but are instead focused on real catastrophic risks that are directly addressed by the protest).
There’s a very important crux here. If you only want to attend protests where the protesters are reasonable, well informed, and agree with you, then you implicitly only want to attend small protests.
It seems pretty clear to me that most people are much less concerned about x-risk than about job loss and other issues. So we have to make a decision: do we stick to our guns and have the most epistemically virtuous protest movement in history, making it 10x harder to recruit new people and grow the movement? Or do we compromise and welcome people with many different concerns, forming alliances with groups we don’t agree with in order to have a large and impactful movement?
It would be a failure of instrumental rationality to demand the former. This is just a basic reality about solving coordination problems.
[To provide a counterargument: having a big movement that doesn’t understand the problem is not useful. At some point the misalignment between the movement and the true objective will be catastrophic.
I don’t really buy this because I think that pausing is a big and stable enough target and it is a good solution for most concerns.]
This is something I am actually quite uncertain about so I would like to hear your opinion.
I think it’s worth trying hard to stick to strict epistemic norms. The main argument you bring against this is that it’s more effective to be more permissive about bad epistemics. I doubt this. It seems to me that people overstate the track record of populist activism at solving complicated problems. If you’re considering populist activism, I would think hard about where, how, and on what it has worked.
Consider environmentalism. It seems quite uncertain whether the environmentalist movement has been net positive (!). This is an insane admission to have to make, given that the science is fairly straightforward, environmentalism is clearly necessary, and the movement has had huge wins (e.g. massive shift in public opinion, pushing governments to make commitments, & many mundane environmental improvements in developed country cities over the past few decades). However, the environmentalist movement has repeatedly spent enormous efforts on directly harming their stated goals through things like opposing nuclear power and GMOs. These failures seem very directly related to bad epistemics.
In contrast, consider EA. It’s not trivial to imagine a movement much worse along the activist/populist metrics than EA. But EA seems quite likely positive on net, and the loosely-construed EA community has gained a striking amount of power despite its structural disadvantages.
Or consider nuclear strategy. It seems a lot of influence was had by e.g. the staff of RAND and other sober-minded, highly-selected, epistemically-strong actors. Do you want more insiders at think-tanks and governments and companies, and more people writing thoughtful pieces that swing elite opinion, all working in a field widely seen as credible and serious? Or do you want more loud activists protesting on the streets?
I’m definitely not an expert here, but thinking through the few cases I know of, my impression is that activism and protest have worked best to fix the wrongs of simple and widespread political oppression, while on complex technical issues higher-bandwidth methods are usually how actual progress is made.
I think there are also some powerful but abstract points:
Choosing your methods is not just a choice over methods, but also a choice over who you appeal to. And who you appeal to will change the composition of your movement, and therefore, in the long run, the choice of methods. Consider carefully before summoning forces you can’t control (this applies both to superhuman AI and to epistemically-shoddy charismatic activist-leaders).
If we make the conversation about AIS more thoughtful, reasonable, and rational, it increases the chances that the right thing (whatever that ends up being—I think we should have a lot of intellectual humility here!) ends up winning. If we make it more activist, political, and emotional, we privilege the voice of whoever is better at activism, politics, and narratives. I think you basically always want to push the thoughtfulness/reasonableness/rationality. This point is made well in one of Scott Alexander’s best essays (see section IV in particular, for the concept of asymmetric vs symmetric weapons). There is a spirit here, of truth-seeking and liberalism and building things, of fighting Moloch rather than sacrificing our epistemics to him for +30% social clout. I admit that this is partly an aesthetic preference on my part. But I do believe in it strongly.
Thanks, Rudolf, I think this is a very important point, and probably the best argument against PauseAI. It’s true in general that The Ends Do Not Justify the Means (Among Humans).
My primary response is that you are falling for status-quo bias. Yes, this path might be risky, but the default path is more risky. My perception is that the current governance of AI is on track to let us run some terrible gambles with the fate of humanity.
> Consider environmentalism. It seems quite uncertain whether the environmentalist movement has been net positive (!).

We can play reference class tennis all day, but I can counter with the examples of the Abolitionists, the Suffragettes, the Civil Rights movement, Gay Pride, or the American XL Bully.
> It seems to me that people overstate the track record of populist activism at solving complicated problems ... the science is fairly straightforward, environmentalism is clearly necessary, and the movement has had huge wins

As I argue in the post, I think this is an easier problem than climate change. Just as most people don’t need a detailed understanding of the greenhouse effect, most people don’t need a detailed understanding of the alignment problem (“creating something smarter than yourself is dangerous”).
The advantage with AI is that there is a simple solution that doesn’t require anyone to make big sacrifices, unlike with climate change. With PauseAI, the policy proposal is right there in the name, so it is harder to distort than vaguer goals like “environmental justice”.
> fighting Moloch rather than sacrificing our epistemics to him for +30% social clout

I think to a significant extent it is possible for PauseAI leadership to remain honest while still having broad appeal. Most people are fine with you saying, “I in particular care mostly about x-risk, but I would like to form a coalition with artists who have lost work to AI.”
> There is a spirit here, of truth-seeking and liberalism and building things, of fighting Moloch rather than sacrificing our epistemics to him for +30% social clout. I admit that this is partly an aesthetic preference on my part. But I do believe in it strongly.

I’m less certain about this, but I think the evidence is much less strong than rationalists would like to believe. Consider: why has no successful political campaign ever run on actually good, nuanced policy arguments? Why do advertising campaigns appeal to your emotions rather than make rational arguments for why you should prefer their product? Why did it take until 2010 for people to have the idea of actually trying to figure out which charities are effective? The evidence is overwhelming that emotional appeals are the only way to persuade large numbers of people.
> If we make the conversation about AIS more thoughtful, reasonable, and rational, it increases the chances that the right thing ... ends up winning.

Again, this seems like it would be good, but the evidence is mixed. People were making thoughtful arguments for why pandemics are a big risk long before Covid, but the world’s institutions were sufficiently irrational that they failed to actually do anything. If there had been an emotional, epistemically questionable mass movement calling for pandemic preparedness, that would probably have been very helpful.
Most economists seem to agree that European monetary policy is pretty bad and significantly harms Europe, but our civilization is too inadequate to fix the problem. Many people make great arguments about why aging sucks and why it should really be a top priority to fix, but it’s left to Silicon Valley to actually do something. Similarly for shipping policy, human challenge trials, and starting school later. There is a long list of preventable, disastrous policies which society has failed to fix due to lack of political will, not lack of sensible arguments.
> in the long run
What if we don’t have very long? You aren’t really factoring in the time crunch we are in (the whole reason that PauseAI is happening now is short timelines).