I think this formula (under "simple formula") is wrong even given the assumption of a net-positive future. For example, suppose both problems are equally tractable and there is a 50% chance of extinction. Then x = 1. But if the future is only a tiny bit positive on net, then increasing WAW long-term has massive effects. For instance, if well-being vs. suffering is distributed 51%-49%, then shifting that balance by one percentage point (to 52%-48%) doubles how good the future is.
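To make that arithmetic concrete, here is a minimal sketch (the 51/49 split is the made-up number from the example above):

```python
# Shares of well-being vs. suffering sum to 100%, so raising the
# well-being share by one percentage point also lowers suffering's share.
net_before = 51 - 49  # = 2, the 51%-49% split from the example
net_after = 52 - 48   # = 4, after shifting one percentage point

print(net_after / net_before)  # 2.0 -> the net goodness of the future doubles
```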
In general, I'm pretty sure the correct formula would include the goodness of the future as a scalar (which can be positive or negative), so that the same formula applies whether the future is net-positive or not.
I don’t entirely understand the other formula, but I don’t believe it fixes the problem. Could be wrong.
If I understand you correctly, you believe the formula does not take into account how good the future will be. I somewhat agree that there is a related problem in my analysis; however, I don't think the problem lies in the formula itself.
The problem you're talking about is actually taken into account by "t". Note that the formula is about "net well-being", i.e. "all well-being" minus "all suffering". So if future "net well-being" is very low, then the tractability of WAW will be high (i.e. "t" will be low). E.g. if "net well-being" = 1 (a made-up unit), it is going to be a lot easier to increase it by 1% than if "net well-being" = 1000.
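A minimal sketch of that point, under the (hypothetical) assumption that adding one absolute unit of net well-being costs a fixed amount, so a 1% relative increase gets more expensive as the baseline grows:

```python
COST_PER_UNIT = 100  # made-up cost of adding one unit of net well-being

def cost_of_one_percent_increase(net_well_being: float) -> float:
    # A 1% relative increase requires 0.01 * baseline absolute units.
    return 0.01 * net_well_being * COST_PER_UNIT

print(cost_of_one_percent_increase(1))     # 1.0    -> cheap at a tiny baseline
print(cost_of_one_percent_increase(1000))  # 1000.0 -> 1000x more expensive
```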
However, I do agree that an estimate of how good the future is expected to be is technically needed to do this analysis correctly, specifically for estimating "t" and "net negative future" (or u(negative)) in the "main formula". I may fix this in the future.
The problem you're talking about is actually taken into account by "t".
If you intended it that way, then the formula is technically correct, but only because you've offloaded all the difficulty into defining this parameter. The value of t is now strongly dependent on the net proportion of well-being vs. suffering in the entire universe, which is extremely difficult to estimate and not what people usually mean by the tractability of a cause. (And in fact, it's also not what you talk about in the section on tractability in this post.)
The value we care about here is something like well-being / (well-being − suffering). If well-being and suffering are close together, this quantity becomes explosively large, and so does the impact of permanently improving WAW relative to x-risk reduction. Since, again, I don't think this is what anyone has in mind when they talk about tractability, I think it should appear explicitly in the formula.
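A quick sketch of how that quantity behaves (all numbers made up; the point is the blow-up as suffering approaches well-being):

```python
def leverage(well_being: float, suffering: float) -> float:
    # well-being / (well-being - suffering): how much a fixed relative
    # improvement in well-being matters compared to the net surplus.
    return well_being / (well_being - suffering)

print(leverage(100, 50))     # 2.0  -> comfortable surplus, modest leverage
print(leverage(51, 49))      # 25.5 -> near-balance, much larger leverage
print(leverage(50.1, 49.9))  # ~250 -> explosively larger
```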
I do agree that t in the formula is quite complicated to understand (and does not mean the same as what is typically meant by tractability). I tried to explain it, but since no one edited my work, I might be overestimating how understandable my formulations are. "t" is something like "the cost-effectiveness of reducing the likelihood of x-risk by 1 percentage point" divided by "the cost-effectiveness of increasing net happiness by 1 percent".
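Written out as a calculation (the cost-effectiveness figures are hypothetical placeholders, just to illustrate the definition of "t" above):

```python
ce_xrisk = 2.0  # made-up: value per dollar of cutting x-risk by 1 percentage point
ce_waw = 0.5    # made-up: value per dollar of raising net well-being by 1 percent

t = ce_xrisk / ce_waw
print(t)  # 4.0 -> x-risk work is 4x as cost-effective here, so t is high
# Conversely, if improving well-being were the easier lever, ce_waw would
# be larger and t would come out low, as described above.
```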
That being said, I still think the analysis lacks an estimate of how good the future will be, which could make the numbers for "t" and "net negative future" (or u(negative)) "more objective".