Isn’t the opposite end of the p(doom)–longtermism quadrant also relevant? E.g. my p(doom) is 2%, but I take the arguments for longtermism seriously and think that’s high enough of a chance to justify working on the alignment problem.
Interesting. I would instinctively still consider 2% a high p(doom) within the next 100 years. In the AI field, what is generally seen as a “high” or “low” p(doom)?