I think it’d be interesting to run a sensitivity analysis on Tarsney’s model(s), and to think about the value of information we’d get from further investigation of:
how likely the future is to resemble Tarsney’s cubic growth model vs his steady model
whether there are other models that are substantially likely, and whether the model structures should be changed
what the most reasonable distribution for each parameter is.
It seems like the value of information from that might be very high, at least if we think we don’t want to accept fanaticism. This is because Tarsney’s paper suggests reasonable empirical views could either support the case for longtermism without requiring fanaticism or only support the case for longtermism if we accept fanaticism. So further research on these models, alternative models, and these parameters could perhaps give us a much better sense of how robust the case for longtermism is.
To some extent, this comment can be boiled down to something that was obvious already: “The case for longtermism seems plausible but uncertain, and whether it’s true seems very decision-relevant, so maybe investigating whether it’s true would be really valuable.” But I think Tarsney’s paper highlights specific points to look into, and that it would allow for (rough) quantitative estimates of the value of information to be gained by investigating each point.
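To illustrate the kind of rough quantitative estimate I have in mind, here's a minimal sketch of an expected-value-of-perfect-information (EVPI) calculation. All numbers here are hypothetical placeholders I've made up for illustration, not estimates from Tarsney's paper:

```python
# Toy EVPI sketch: two actions, two states, a prior over states.
# All payoffs and probabilities are hypothetical, not from Tarsney's paper.

states = {"settlement": 0.3, "no_settlement": 0.7}   # hypothetical prior
payoffs = {                                          # hypothetical payoffs
    "longtermist": {"settlement": 100.0, "no_settlement": 1.0},
    "neartermist": {"settlement": 5.0,   "no_settlement": 5.0},
}

def ev(action):
    """Expected value of an action under the prior over states."""
    return sum(p * payoffs[action][s] for s, p in states.items())

# Acting on the prior: commit to the action with the best expected value.
best_prior = max(ev(a) for a in payoffs)

# Acting with perfect information: learn the state, then pick the best action.
ev_perfect = sum(p * max(payoffs[a][s] for a in payoffs)
                 for s, p in states.items())

# EVPI: the most we should pay to resolve the uncertainty before acting.
evpi = ev_perfect - best_prior
print(evpi)
```

The same structure extends to partial information (becoming "somewhat less uncertain" rather than certain), though the calculation then requires a model of what the investigation might reveal.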
For a quick and non-quantitative example, it seems that the probability of interstellar settlement has a very large bearing on the results of the model, and it also seems like we should be quite uncertain about that probability.
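To make that sensitivity concrete, here's a minimal sketch treating the far-future's expected value as a mixture of a cubic-growth outcome and a steady outcome, varying the settlement probability. The horizon and payoff values are arbitrary stand-ins, not figures from Tarsney's models:

```python
# Toy sensitivity sketch (all numbers hypothetical, not from Tarsney's paper):
# how expected far-future value varies with the probability of interstellar
# settlement, mixing a cubic-growth outcome (value scales like t^3 with a
# growing settled volume) and a steady outcome (roughly constant value).

HORIZON = 1e6                 # hypothetical time horizon, arbitrary units
CUBIC_VALUE = HORIZON ** 3    # stand-in for value under cubic growth
STEADY_VALUE = HORIZON        # stand-in for value under the steady model

def expected_future_value(p_settlement):
    """Expected value as a probability-weighted mixture of the two outcomes."""
    return p_settlement * CUBIC_VALUE + (1 - p_settlement) * STEADY_VALUE

for p in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"p = {p:g}: EV = {expected_future_value(p):.3g}")
```

Even under this crude setup, the cubic term dominates for tiny settlement probabilities, which is roughly why uncertainty about that one parameter swings the results so much, and why the fanaticism issue arises.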
Some caveats to that:
I’m not sure how tractable these investigations would be.
But note that it could be useful just to become somewhat less uncertain than we currently are, even if we still remain quite uncertain.
Tarsney’s models focus on a particular working example of a longtermist intervention/priority (increasing the chance that there’s an intelligent civilization at any given time point). As discussed in other comments here, how good the success of that intervention would be depends on other things not modelled by Tarsney (essentially, what that civilization does with the accessible universe), and there are many other interventions/priorities we might focus on.
So ideally we’d run the sensitivity analysis and value of information calculations on either a more general version of the models or on a set of models that collectively represent various major possible priorities.
Tarsney’s models make various ethical and decision-theoretic assumptions that are conducive to longtermism. An ideal version of the sensitivity analysis and value of information calculations might also allow for investigation of what happens if these assumptions are relaxed.
But that might become unwieldy.