I often see people citing Tyler Cowen on the moral imperative to pursue economic growth for our descendants while simultaneously claiming that existential risk is low but nonzero. Of course there’s nothing wrong with agreeing with someone on one thing but not another, but in this case I feel like there’s a premise missing somewhere.
I’m somewhat of a skeptic on the dangers of AI, so I may not be the best person to address that point. On pandemics, I think it’s likely that gain-of-function research should be heavily curtailed, but I don’t think that’s a core neoliberal value or anything; that’s just my personal opinion.
More broadly, I don’t really think of x-risk and economic growth as things that necessarily have to be traded off against each other. I think that in many important ways a more prosperous society has more stability and less x-risk. One important area to worry about is the possibility of nuclear conflict. To me it seems pretty clear that the more that every country in the world can become a rich country, the more stable international geopolitics will be and the lower the risk of a catastrophic war will be.
Morally, I think any attempt to slow growth in the name of x-risk had better be really damn sure that the x-risk is truly intolerable, because in practical terms ‘trading away growth’ means ‘potentially impoverishing millions or billions of people’. Purposefully trying to slow economic growth seems to me to be a moral evil in all but a few exceptional edge cases, simply because economic growth is so good (and this is especially true in developing countries).
I think Tyler Cowen’s general idea that economic growth is extremely important is true and underrated in our political discourse, though I’m not always in agreement with the prescriptions Cowen believes would actually achieve that growth. There’s an exciting revival in the general big-tent-neoliberal world of a ‘pro-growth progressive’ attitude; sometimes you’ll hear it called ‘supply-side liberalism’. Ezra Klein wrote about this in the NYTimes, and I would also recommend the work of Sam Hammond at the Niskanen Center, Matt Yglesias, and Noah Smith for other versions of this.
To be clear, Tyler Cowen believes that growth is more important than averting x-risk as a logical consequence of believing that human extinction on the timescale of 700 years or so is inevitable (H/T AppliedDivinityStudies, pg.12).