Thanks. I respect that the model is flexible and that it doesn’t attempt to answer all questions. But at the end of the day, the model will be used to “help assess potential research projects at Rethink Priorities”, and I fear it will undervalue longterm-focused stuff by a factor of >10^20.
I believe Marcus and Peter will release something before long discussing how they actually think about prioritization decisions.
AFAICT, the model also doesn’t consider the far-future effects of animal welfare and global health and development (GHD) interventions. And against ratios like >10^20 between x-risk and neartermist interventions, see:
https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/
https://longtermrisk.org/how-the-simulation-argument-dampens-future-fanaticism
(I agree that the actual ratio isn’t like 10^20. In my view this is mostly because of the long-term effects of neartermist stuff,* which the model doesn’t consider, so my criticism of the model stands. Maybe I should have said “undervalue longterm-focused stuff by a factor of >10^20 relative to the component of neartermist stuff that the model considers.”)
*Setting aside effects from causing others to change their prioritization, which it feels wrong for this model to consider.
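
To make the parenthetical concrete, here is a minimal sketch of the decomposition I have in mind (the symbols V_L, V_N^near, and V_N^far are my own labels, not quantities from the model):

    V_N = V_N^near + V_N^far    (total value of a neartermist intervention)
    my claim:  V_L / V_N^near > 10^20    (longtermist value vs. the component the model considers)
    but:       V_L / (V_N^near + V_N^far) is far smaller, because V_N^far can be comparable in scale to V_L

On this reading, “undervalue by >10^20 relative to the component of neartermist stuff that the model considers” and “the actual ratio isn’t like 10^20” are consistent: the gap between them is exactly the far-future term V_N^far that the model drops.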