I don’t have a strong preference. There are some aspects in which longtermism can be a better framing, at least sometimes.
I. In a “longtermist” framework, x-risk reduction is the most important thing to work on across many orders of magnitude of uncertainty about the probability of x-risk in the next, e.g., 30 years (due to the weight of the long-term future). Even if AI-related x-risk is only 10^-3 in the next 30 years, it is still an extremely important problem, or the most important one. In a “short-termist” view with, say, a discount rate of 5%, it is not nearly so clear.
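To make the contrast concrete, here is a minimal back-of-the-envelope sketch. All of the specific numbers (the value placed on the long-term future, the per-year value at stake in the short-termist case) are illustrative assumptions for the example, not figures from the argument above.

```python
# Back-of-the-envelope comparison: how much weight does a 10^-3 chance of
# extinction in the next 30 years get under (a) a longtermist valuation of
# the future vs. (b) a short-termist view with a 5% annual discount rate?
# All constants below are illustrative assumptions.

P_XRISK = 1e-3        # assumed probability of AI-related x-risk within 30 years
FUTURE_VALUE = 1e16   # illustrative longtermist value of the entire future (life-years)
ANNUAL_VALUE = 8e9    # illustrative value at stake per year (~current world population)
DISCOUNT = 0.05       # 5% annual discount rate
HORIZON = 30          # years until the assumed extinction event

# Longtermist expected loss: probability times the (essentially undiscounted)
# value of the whole long-term future.
longtermist_loss = P_XRISK * FUTURE_VALUE

# Short-termist expected loss: probability times the discounted value of all
# years from the horizon onward (geometric series with ratio 1/(1+DISCOUNT)).
discounted_future = ANNUAL_VALUE * ((1 + DISCOUNT) / DISCOUNT) / (1 + DISCOUNT) ** HORIZON
shorttermist_loss = P_XRISK * discounted_future

print(f"longtermist expected loss:   {longtermist_loss:.2e}")   # ~1e13 life-years
print(f"short-termist expected loss: {shorttermist_loss:.2e}")  # ~4e7 life-years
```

Under these (made-up) numbers the longtermist expected loss exceeds the short-termist one by several orders of magnitude, which is why the case for prioritising x-risk is robust to the probability estimate in the first framing but sensitive to it in the second.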
The short-termist urgency of x-risk (“you and everyone you know will die”) depends on the x-risk probability being actually high, like on the order of 1 percent, or tens of percent. Arguments for why this probability is actually so high are usually brittle pieces of mathematical philosophy (e.g. many specific individual claims by Eliezer Yudkowsky) or brittle use of proxies with a lot of variables obviously missing from the reasoning (e.g. the report by Ajeya Cotra). Actual disagreements about probabilities are often in fact grounded in black-box intuitions about esoteric mathematical concepts. It is relatively easy to come up with brittle pieces of philosophy arguing in the opposite direction: why this number is low. In fact, my actual, action-guiding estimate is not based on an argument conveyable in a few paragraphs, but more on something like “the feeling you get after working on this for years”. What I can offer others is something like “an argument from testimony”, and I don’t think that’s great.
II. Longtermism is a positive word, pointing toward the fact that the future could be large and nice. X-risk is the opposite.
Similar: AI safety vs. AI alignment. My guess is the “AI safety” framing is by default more controversial and gets more pushback (e.g. a “safety department” is usually not the most loved part of an organisation, with connotations like “the safety people want to prevent us from doing what we want”).