I think you’re relying on a stronger assumption in your ethical theories: that situations are even comparable if you ignore when they occur.
Hm, I wouldn’t endorse that assumption. I avoided specifying “when”s to communicate more quickly, but I had in mind something like your examples; I agree the times matter.
trying to get the first thing to happen (evolve to stable society) instead of second or third is worth doing if it were the only thing we could do in 2021
Agreed, but only if we add another condition/caveat: that trying to get the first thing to happen also doesn’t trade off against the probability of very good scenarios not covered by these three (which, since the probabilities must sum to 100%, it would mathematically have to do under some assumptions). As an oversimplified example with made-up numbers, suppose we were facing these probabilities of possible futures:
20% -- your first scenario (tech stagnation) (10 goodness points)
5% -- your second scenario (mass suffering) (-1,000,000 goodness points)
20% -- your third scenario (extinction) (-10 goodness points)
55% -- status quo in 2021 evolving to a technologically sophisticated utopia by 2100 (1,000,000 goodness points)
And suppose the only action we could take in 2021 would change the above probabilities to the following:
100% -- your first scenario (tech stagnation) (10 goodness points)
0% -- your second scenario (mass suffering) (-1,000,000 goodness points)
0% -- your third scenario (extinction) (-10 goodness points)
0% -- status quo in 2021 evolving to a technologically sophisticated utopia by 2100 (1,000,000 goodness points)
Then the expected value of not taking the action is 0.20×10 + 0.05×(−1,000,000) + 0.20×(−10) + 0.55×1,000,000 = 500,000 goodness points, while the expected value of taking the action is only 10 goodness points. So taking the action would be very bad / not worthwhile, even though it technically falls under your description of “trying to get the first thing to happen (evolve to stable society) instead of second or third [...] if it were the only thing we could do in 2021”.
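If it helps to check the arithmetic, here’s a minimal Python sketch; everything in it (the `expected_value` helper, the scenario labels, the numbers) is just the made-up example from above, not anything principled:

```python
# Expected-value check for the made-up numbers above.
# Each scenario is a (probability, goodness-points) pair.

def expected_value(scenarios):
    """Return the probability-weighted sum of goodness points."""
    # Probabilities over exhaustive scenarios must sum to 100%.
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * v for p, v in scenarios)

# Without the action: the original probability spread.
no_action = [
    (0.20, 10),          # scenario 1: tech stagnation
    (0.05, -1_000_000),  # scenario 2: mass suffering
    (0.20, -10),         # scenario 3: extinction
    (0.55, 1_000_000),   # status quo evolving to utopia by 2100
]

# With the action: stagnation becomes certain, which (since the
# probabilities sum to 100%) crowds out the 55% chance of utopia
# along with the two bad outcomes.
with_action = [
    (1.00, 10),
    (0.00, -1_000_000),
    (0.00, -10),
    (0.00, 1_000_000),
]

print(expected_value(no_action))    # 500000.0
print(expected_value(with_action))  # 10.0
```

So under these numbers the action gives up 499,990 points of expected value, even though it eliminates both bad scenarios.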