Thanks for the calculator.
I was wondering about the welfare part of the equation; from the post, it's not obvious how people are meant to arrive at their welfare estimates in the calculator.
Are we talking about the welfare of just humans? Animals (farmed or wild)? Artificial sentience? How do we reconcile all of these when we're not even sure today whether global welfare is net positive?
Of course, this depends on very important questions that are hard to assess. What are the consequences of wild-animal suffering on our planet? Is factory farming going to continue for a long time, especially given that longtermists tend to be very optimistic about technology replacing all forms of animal farming, which is not so obvious? Are artificially sentient beings going to have lives worth living? How are we going to impact animals on other planets?
So overall, what should we include in the ‘welfare’ part of the calculator?
Hey Corentin,
The calculators are intentionally silent on the welfare side, on the thought that in practice it's much easier to treat welfare as a mostly independent question. That's not to say it actually is independent, and ideally I would like the output to include more information about the pathways to either extinction or an interstellar state, so that people can apply some further function to the output. I do think it's reasonable, even on a totalising view, to prioritise improving future welfare conditional on it existing and to largely ignore the question of whether it will exist, but that's not a question the calculators can help with except inasmuch as you condition on the pathway.
Even if they gave pathways, they would be agnostic on whose welfare qualified. Personally I'm interested in maximising total valence (I have an old essay on the subject still waiting for its conclusion), so every sentient being's mental state 'counts', but you could use the calculators with a different perspective in mind. Primarily empirical questions, e.g. about the duration of factory farming or animal suffering in terraformed systems, seem like they'd need their own research projects.