Software engineer and recent graduate from UC Berkeley with degrees in Computer Science and Economics. I write a blog at https://ohmurphy.substack.com/.
Owen Murphy
That’s an interesting point. I’m a bit skeptical of modeling risk as constant per unit of volume, since almost all of the volume a civilization borders will be empty and will contribute nothing to survival. I think a better model would simply use the number of independent/disconnected planets colonized. I also expect colonies on other planets to be more precarious than civilization on Earth, since the default condition of most planets is that they are uninhabitable. That said, I do take the point that an interstellar civilization should be more resilient than a non-interstellar one (all else equal).
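To make the "count independent colonies, not volume" intuition concrete, here is a minimal sketch. All numbers and the independence assumption are mine, not from the original discussion: if each of n disconnected colonies fails in some era with probability p, and failures are uncorrelated, civilization ends only when every colony fails.

```python
def survival_prob(n_colonies: int, p_fail: float) -> float:
    """P(at least one colony survives a given era), assuming
    independent failures: 1 - p_fail ** n_colonies."""
    return 1 - p_fail ** n_colonies

# Resilience scales with the number of disconnected colonies,
# not with the (mostly empty) volume of space they border.
print(round(survival_prob(1, 0.1), 5))  # single planet
print(round(survival_prob(5, 0.1), 5))  # five independent planets

# Even if each off-world colony is more precarious than Earth
# (higher p_fail), several independent ones can still beat one
# safer planet:
print(round(survival_prob(1, 0.01), 5))  # one robust planet
print(round(survival_prob(3, 0.10), 5))  # three fragile planets
```

The correlation assumption does a lot of work here; a shared catastrophe (e.g. one that propagates between colonies) would break the independence and shrink the benefit.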
Sensitive assumptions in longtermist modeling
I would also note that the number of jobs being created is not independent of the number of people unemployed (even structurally unemployed). If automation caused significant job losses, it seems likely that the government (through fiscal or monetary policy) would try to stimulate the economy (increase aggregate demand) to create more jobs. The reason the government does not always just increase total jobs is that such policies can cause inflation when unemployment is pushed too low, but I don’t think that would be a concern in this case. The reallocation of labor across industries is not frictionless and can be quite harmful to individuals, but I seriously doubt that people will become unemployable.
I am a bit confused by the framing of this problem as being about GDP. In the situation I think you are describing (in which automation eliminates jobs and lowers costs), I believe real GDP would increase. Nominal GDP could decrease if production costs fell enough, but that would generally be described as deflation rather than as a decrease in “GDP”.
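A toy two-period example may make the real-versus-nominal distinction concrete. The numbers here are purely illustrative (mine, not from the discussion): automation raises output while cutting prices, so nominal GDP falls even though real GDP, valued at base-year prices, rises.

```python
# Base year: 100 units sold at $10 each.
nominal_base = 100 * 10   # $1000

# After automation: output rises to 120 units, but lower costs
# push prices down to $7.
nominal_after = 120 * 7   # nominal GDP falls to $840

# Real GDP values the new output at base-year prices.
real_after = 120 * 10     # real GDP rises to $1200

print(nominal_base, nominal_after, real_after)
```

In other words, the drop in the nominal figure is entirely a price-level effect (deflation), not a decline in what is actually produced.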
Kidney donation might be an option. There are a few different posts about it on the forum (it has its own tag).
The point about accounting for uncertainty is very well taken. I had not considered possible asymmetries in the effects of uncertainty when writing this.
On longtermism generally, I think my language in the post was probably less favorable to longtermism than I would ultimately endorse. As you say, the value of the future remains exceptionally large even after reduction by a few orders of magnitude, a fact that should hopefully be clear from the units (trillions of life-years) used in the graphs above.
If I have time in the future, I may try to create new graphs for sensitivity and value that take uncertainty into account.