Seems better than the previous one, though imo still worse than my suggestion, for 3 reasons:
1. It’s more complex than asking about immediate extinction. (Why exactly a 100-year cutoff? Why 50%?)
2. Since the definition explicitly allows different x-risks to be differently bad, the amount you’d pay to reduce them would vary depending on the x-risk. So the question is underspecified.
3. The independence assumption is better if funders often face opportunities to reduce a Y% risk that’s roughly independent of most other x-risk this century. Your suggestion is better if funders often face opportunities to reduce Y percentage points of all x-risk this century (e.g. if all risks are completely disjunctive, such that if you remove a risk, you’re guaranteed not to be hit by any other risk).
For your two examples, the risks from asteroids and climate change are mostly independent of the majority of x-risk this century, so the independence assumption fits better there.
The disjunctive assumption can hold if we study mutually exclusive cases, e.g. reducing risk from worlds with fast AI take-off vs. reducing risk from worlds with slow AI take-off.
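To make the contrast concrete, here is a minimal sketch of the two assumptions (the notation $p_i$, $R$, $R_{-i}$ is mine, not from the thread):

```latex
% Independence: risks strike independently of one another
R \;=\; 1 - \prod_{i} (1 - p_i),
\qquad
\Delta R_i \;=\; p_i \prod_{j \neq i} (1 - p_j) \;=\; p_i \,(1 - R_{-i})

% Disjunctivity: risks are mutually exclusive events
R \;=\; \sum_{i} p_i,
\qquad
\Delta R_i \;=\; p_i
```

Here $R_{-i}$ is the combined risk from all sources other than $i$: removing an independent Y% risk buys only $Y \times (1 - R_{-i})$ percentage points of total risk, rather than the full Y points you’d get under the disjunctive framing.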
I weakly think the former (opportunities to reduce roughly independent risks) is more common.
(Note that the difference only matters if total x-risk this century is large.)
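A toy calculation of that point (the numbers are mine, purely illustrative): suppose a funder can eliminate an independent 10% risk.

```latex
% Large remaining risk: the two framings diverge
R_{-i} = 0.50:\quad \Delta R = 0.10 \times (1 - 0.50) = 0.05
\;\;\text{(5 points, vs.\ 10 under disjunctivity)}

% Small remaining risk: they nearly coincide
R_{-i} = 0.02:\quad \Delta R = 0.10 \times (1 - 0.02) = 0.098
\;\;\text{(9.8 points, vs.\ 10)}
```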
Edit: This is all about which version of this question is best, independent of inertia. If you’re attached to percentage points because you don’t want to switch to an independence assumption after there’s already been some discussion on the post, then your latest suggestion seems good enough. (Though I think most people have been assuming a low total amount of x-risk, so whether or not we assume independence probably doesn’t matter much for the existing discussion.)