It’s pretty crucial how much less weight you place on future people, right? If you weight their lives at, say, 1/1000 of the value of saving a current person’s life, and in expectation there are going to be a million times more people in the future than exist currently, then most of the value of preventing extinction will still come from the fact that it allows future people to come into existence.
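To make the arithmetic explicit (a rough sketch, taking the illustrative numbers above at face value): the future term outweighs the present term by

(1/1,000 weight per future life) × (1,000,000× more future lives) = 1,000×,

so on those numbers the discount would have to be steeper than roughly one in a million before the value of saving current people dominated.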
I buy that. One way of putting it would be to say that if you use a parliamentary method of resolving moral uncertainty, the “non-totalist population ethics rep” and the “non-longtermist rep” should both say that farmed animal welfare is greater in scale than biorisk. Does that seem more useful?
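Concretely, here’s a toy sketch of the parliamentary picture. All credences and votes below are made up purely for illustration, not claims about what these theories actually imply, and as I understand it the real proposal (due to Bostrom and Ord) involves delegates bargaining rather than simple credence-weighted voting:

```python
from dataclasses import dataclass

@dataclass
class Delegate:
    theory: str
    credence: float  # your credence in this theory (its share of "seats")
    favours_animals_over_bio: bool  # ranks farmed animal welfare above biorisk in scale

# Hypothetical parliament; the numbers are placeholders, not my actual credences.
parliament = [
    Delegate("totalist longtermism",       0.40, False),
    Delegate("non-totalist pop. ethics",   0.30, True),
    Delegate("non-longtermist views",      0.30, True),
]

# Weight each delegate's vote by your credence in its theory.
support = sum(d.credence for d in parliament if d.favours_animals_over_bio)
print(f"Credence-weighted support for 'animals > biorisk in scale': {support:.0%}")
```

The point of the sketch is just that the “farmed animals over biorisk” conclusion doesn’t require any single theory to be a majority view, only that the theories endorsing it jointly carry enough seats.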
I don’t know enough about moral uncertainty and the parliamentary model to say.
It’s worth saying that although in EA, people favour approaches to moral uncertainty that reject “just pick the theory you think is most likely to be true, make decisions based on it, and ignore the others”, I think some philosophers have actually defended views along those lines: https://brian.weatherson.org/RRM.pdf