It could, a priori, be the case that improving animal rights now increases the probability that attitudes in the far future will be humane (to a given degree) at all; there’s some concern about value lock-in with AGI, for example. Some changes could also be hard to reverse even if attitudes improve, e.g. spreading self-propagating populations of suffering animals or artificially sentient beings into space.
And if human influence continues to increase over time (e.g. as we spread into space), then a delay in the progress of moral circle expansion could have effects that add up over time, too. To illustrate, suppose we have two (finite or infinite) sequences representing the amount of suffering in our sphere of influence at each point in time, but in one of them we make earlier progress on moral circle expansion, so the amount of suffering at each step is reduced by 1 compared to the other; equivalently, the other sequence is a shifted copy of the one with earlier moral circle expansion. With some choice of units, the sequences could look like 1, 2, 3, 4, 5, …, n and 2, 3, 4, 5, …, n+1, with the last value of each appearing at time n. The sum of the differences between the two sequences is 1 + 1 + 1 + 1 + … + 1 = n, which grows without bound as a function of n, so it could end up very large.
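Spelling this out as a worked sum (this is just the example above, writing the sequence with earlier moral circle expansion as $a_t = t$ and the delayed one as $b_t = t + 1$, for $t = 1, \dots, n$):

$$\sum_{t=1}^{n} (b_t - a_t) \;=\; \sum_{t=1}^{n} \big[(t+1) - t\big] \;=\; \underbrace{1 + 1 + \cdots + 1}_{n \text{ terms}} \;=\; n.$$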
It’s also not crucial that the sum of the differences grow without bound, just that it’s large.
I don’t know that these are the scenarios they have in mind at 80,000 Hours, though.
To illustrate, suppose we have two (finite or infinite) sequences representing the amount of suffering in our sphere of influence at each point in time, but in one of them we make earlier progress on moral circle expansion, so the amount of suffering at each step is reduced by 1 compared to the other;
Just to say I really liked this point, which I think applies equally to focusing on the correct account of value (as opposed to who the value-bearers are, which is what this point is about).