But is that upper limit relevant? If we are talking about all the combinations of all the atoms in the universe, that number is so huge that we cannot conclude the effect is small on a long-term view of mankind.
A more relevant argument would be: if we manage to go 100 years faster, the long-term impact (say, by the year 10000) would be the difference in welfare between living in 1800 and living in 10000, experienced by a population for 100 years.
(For mathematicians: the integral of the marginal improvement over the 100 years of advance.)
Compared to reducing an existential risk, that seems like a lower impact, since risk reduction touches all the welfare of all future generations (the integral of x% of all the welfare).
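Sketching the comparison in symbols (my own illustrative notation, not anything from the original discussion): let w(t) be total welfare at time t, t_0 today, T the horizon year, Δ = 100 years of advance, and x the percentage of existential risk removed.

```latex
% Illustrative notation only: w(t) = total welfare at time t,
% t_0 = now, T = horizon year (e.g. 10000), \Delta = 100 years of speed-up,
% x = percentage of existential risk removed.
\begin{align*}
\text{speed-up gain}
  &\approx \int_{t_0}^{T} \bigl[w(t+\Delta) - w(t)\bigr]\,dt
   = \int_{T}^{T+\Delta} w(t)\,dt - \int_{t_0}^{t_0+\Delta} w(t)\,dt \\
  &\approx \Delta \cdot \bigl[w(T) - w(t_0)\bigr]
   \quad\text{(one 100-year slice of the welfare difference)} \\[4pt]
% The last step assumes w changes little within any single 100-year window.
\text{risk-reduction gain}
  &\approx \frac{x}{100} \int_{t_0}^{T} w(t)\,dt
   \quad\text{(x\% of all welfare up to the horizon)}
\end{align*}
```

However far out T is pushed, the first quantity stays the size of a single 100-year window, while the second grows with the whole integral.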
So the longer the time horizon we consider (assuming we manage to survive that long), the more important it is to “not screw up” compared to going faster right now, and this holds without assuming any cap on potential growth.