I’ve gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.
Have you pressed Tyler Cowen on this?
I’m fairly confident that he has heard ~all the arguments that the effective altruism community has heard, and that he has understood them deeply. So I default to thinking that there’s an interesting disagreement here, rather than a boring “hasn’t heard the arguments” or “is making a basic mistake” thing going on.
In a recent note, I sketched a couple of possibilities.
(1) Stagnation is riskier than growth
Stubborn Attachments puts less emphasis on sustainability than other long-term thinkers like Nick Bostrom, Derek Parfit, Richard Posner, Martin Rees and Toby Ord do. On the 80,000 Hours podcast, Tyler explained that existential risk was much more prominent in early drafts of the book, but that he decided to de-emphasise it after Posner and others began writing on the topic. In any case, Tyler agrees that we should put more resources into reducing existential risk at current margins. However, it seems that he, like Peter Thiel, sees the political risk of economic stagnation as a more immediate and existential concern than these other thinkers do. Speaking at one of the first effective altruism conferences, Thiel said that if the rich world continues on a path of stagnation, it is on a one-way path to apocalypse; if we start innovating again, we at least have a chance of getting through, despite the grave risk of finding a black ball.
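To make the compounding intuition behind "maximise the growth rate" concrete, here's a toy calculation of my own (not from the book), using Tyler's semi-serious 800-year horizon:

```python
# Toy illustration (mine, not Tyler's): how much a one-point difference
# in the sustained growth rate compounds over an ~800-year horizon.

def compound(rate: float, years: int) -> float:
    """Multiple of today's income after `years` of growth at `rate`."""
    return (1 + rate) ** years

years = 800
low = compound(0.01, years)   # ~2,900x today's income
high = compound(0.02, years)  # ~7,600,000x today's income
print(f"1% growth for {years}y: {low:,.0f}x")
print(f"2% growth for {years}y: {high:,.0f}x")
print(f"Ratio: {high / low:,.0f}x")  # roughly 2,600x
```

On this kind of arithmetic, a persistent one-point difference in the growth rate swamps almost any one-off gain, which is why the book treats the (sustainable) growth rate as the thing to maximise.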
(2) Tyler is being Straussian
Tyler may have a different view about which messages are helpful to blast into the public sphere. Perhaps this is partly due to a Deutsch / Thiel-style worry about the costs of cultural pessimism about technology. Martin Rees, who sits in the UK House of Lords, claims that democratic politicians are hard to influence unless you first create popular concern. My guess is that Tyler thinks both that politicians aren't the centre of leverage on this issue, and that there are safer, more direct ways to influence them on it. In any case, it's clear Tyler thinks that most people should focus on maximising the growth rate, and that only a minority should focus on sustainability issues, including existential safety. It is not inconsistent to hold both that growth is too slow and that sustainability is underrated. Some listeners will hear the "sustainable" in "maximise the (sustainable) growth rate" and consider making that their focus; most will not, and that's fine.
Many more people can participate in the project of “maximise the (sustainable) rate of economic growth” than “minimise existential risk”.
(3) Something else?
I have a few other ideas, but I don’t want to share the half-baked thoughts just yet.
One I’ll gesture at: the phrase “cone of value”, his catchphrase “all thinkers are regional thinkers”, Bernard Williams, and anti-realism.
A couple relevant quotes from Tyler’s interview with Dwarkesh Patel:
[If you are a space optimist you may think that we can relax more about safety once we begin spreading to the stars.] You can get rid of that obsession with safety and replace it with an obsession with settling galaxies. But that also has a weirdness that I want to avoid, because that also means that something about the world we live in does not matter very much, you get trapped in this other kind of Pascal’s wager, where it is just all about space and NASA and like fuck everyone else, right? And like if that is right it is right. But my intuition is that Pascal’s Wager type arguments, they both don’t apply and shouldn’t apply here, that we need to use something that works for humans here on earth.
On the 800 years claim:
In the Stanford Talk, I estimated in semi-joking but also semi-serious fashion, that we had 700 or 800 years left in us.
Thanks! I think that’s a good summary of possible views.
FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven’t been quite ready to express them publicly, and I don’t think they’re endorsed by other members of the Progress community.
Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I’m heavily paraphrasing there.
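For readers who haven't seen it spelled out, the EV argument he's responding to can be sketched in a couple of lines. The numbers below are placeholders of my own, purely illustrative, not anyone's published estimates:

```python
# Toy expected-value arithmetic: with an astronomically large future at
# stake, even a tiny reduction in extinction probability dominates.
future_lives = 1e16    # assumed number of future lives if we survive
risk_reduction = 1e-6  # assumed absolute cut in extinction probability
print(f"Expected lives saved: {future_lives * risk_reduction:.0e}")  # 1e+10
```

As I read him, Tyler doesn't dispute that the multiplication works out this way; he disputes that agonizing over it beats the practical institution-building work.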
He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason both to a) work very quickly on developing safe AI and b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.
Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and that if one were put together, we would find it surprisingly aligned with XR aims. I won't speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:
Ramp up high-skilled immigration (especially from China, and especially in AI, biotech, EE and physics) by expanding visa access and proactively recruiting scientists.
@ADS: I enjoyed your discussion of (1), but I understood the conclusion to be :shrug:. Is that where you’re at?
Generally, my impression is that differential technological development is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather… nascent. For example:
(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I'm sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to "take a punt" on some extremely high-stakes issues.
(b) I’m struggling to think of examples of public discussion of how “strong” a version of DTD we should aim for in practice (pointers, anyone?).
Hey, sorry for the late reply, I missed this.
Yes, the upshot from that piece is "eh". I think there are some plausible XR-minded arguments in favor of economic growth, but I don't find them especially compelling.
In practice, I think the particulars matter a lot. If you were to, say, make progress on a cost-effective malaria vaccine, it's hard to argue that it will end up bringing about superintelligence in the next couple of decades. But it depends on your time scale: if you think AI is more on a 100-year horizon, there might be more reason to be worried about growth.
Re: DTD, I think it depends far more on global coordination than EA/XR people tend to assume.