I would agree that the article is too wide-ranging. There’s a whole host of content, ranging from criticisms of expected value theory and arguments for degrowth to arguments for democracy and criticisms of specific risk estimates. I agreed with some parts of the paper, but it is hard to engage with such a wide range of topics.
Where? The paper doesn’t mention economic growth at all.
The paper doesn’t explicitly mention economic growth, but it does discuss technological progress, and at points it seems to argue against it, or at least insinuate as much.
“For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential.” Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don’t seem bothered by this. That said, they have little space in the paper to explain how their implied society would handle the issue, so I won’t press the point too hard.
There is also a repeated implication that halting technological progress is at least possible, and possibly desirable.
“Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated”
“The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible”
“regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option” implies to me that at least one of those three options is feasible, or is at least worth investigating.
While they don’t explicitly advocate degrowth, I think it is reasonable to read them that way, as John does.
Point taken. Thank you for pointing this out.
I think this is more about stopping the development of specific technologies—for example, they suggest that stopping AGI from being developed is an option. Stopping the development of certain technologies isn’t necessarily related to degrowth—for example, many jurisdictions now ban government use of facial recognition technology, and there have been calls to abolish its use, but these are motivated by civil liberties concerns.
I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.
Suggesting that a future without industrialization is morally tolerable does not imply opposition to “any and all” technological progress, but it leaves very little room between the two positions. I don’t think they’re taking a position on the value of better fishhooks.
It is morally tenable under some moral codes but not others. That’s the point.