Thank you for your answer and for the links to the other forum posts.
How would you answer the following arguments?
- Existential risk reduction is much more important than life extension, since it would still be possible to solve aging a few generations later, whereas humankind's potential, which could be enormous, is lost after an extinction event.
- From a utilitarian perspective it does not matter whether there are ten generations of people living 70 years each or one generation of people living 700 years, as long as they are happy. Therefore the moral value of life extension is neutral.
I am not wholly convinced by the second argument myself, but I cannot see exactly where the logic goes wrong. Moreover, I want to play devil's advocate, and I am curious to hear your answer.
-
Maybe you are interested in the following paper, which deals with questions similar to yours:
My question was mainly about the first one. (Are 20 insects happier than one human?) Of course, similar problems arise if you compare the welfare of humans. (Are 20 people whose living standard is slightly above subsistence level happier than one millionaire?)
The reason I chose an interspecies comparison as my example is that it is much harder to compare the welfare of members of different species; at least you can ask humans to rate their happiness on a scale from 1 to 10. Moreover, the moral consequences of different choices for the function f are potentially greater.
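To make the role of f concrete, here is a purely illustrative sketch in Python (all welfare numbers are made up, and f simply stands in for whatever function maps an individual's welfare to moral value): with a linear f the twenty insects come out ahead, while with a convex f the single human does.

```python
# Toy illustration with made-up numbers: how the choice of the aggregation
# function f changes the verdict in "are 20 insects happier than one human?".

def total_value(welfares, f):
    """Total moral value of a population under aggregation function f."""
    return sum(f(w) for w in welfares)

def f_linear(w):
    return w        # moral value proportional to welfare

def f_quadratic(w):
    return w ** 2   # higher welfare counts disproportionately more

one_human = [10.0]            # stipulated welfare of a single human
twenty_insects = [1.0] * 20   # stipulated welfare of each of 20 insects

print(total_value(twenty_insects, f_linear), total_value(one_human, f_linear))        # 20.0 vs 10.0 -> insects ahead
print(total_value(twenty_insects, f_quadratic), total_value(one_human, f_quadratic))  # 20.0 vs 100.0 -> human ahead
```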
The forum post seems to be what I have asked for, but I need some time to read through the literature. Thank you very much!
You mention that the ability to create digital people could lead to dystopian outcomes or a Malthusian race to the bottom. In my humble opinion, bad outcomes could only be avoided if there were a world government that monitors what happens on every computer capable of running digital people. Of course, such a powerful government is a risk in its own right.
Moreover, I think that a benevolent world government could be realised only several centuries from now, while mind uploading could be possible by the end of this century. Therefore I believe that bad outcomes are much more likely than good ones. I would be glad to hear any arguments for why this line of reasoning might be wrong.
I have had similar thoughts, too. My scenario was that at a certain point in the future all technologies that are easy to build will have been discovered, and further progress will require multi-generational projects. To name just one example, think of a Dyson sphere. If the Sun were enclosed by a Dyson sphere, each individual would have far more energy available, or there would be enough room for many additional individuals. Obviously, you need a lot of money before you get the first non-zero payoff, but the potential payoff could be enormous.
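For a sense of scale, here is a rough back-of-envelope calculation (order-of-magnitude figures only): a full Dyson sphere would capture the Sun's entire output, which is on the order of 10^13 times current human energy use.

```python
# Order-of-magnitude figures only; both numbers are rough public estimates.
solar_luminosity_w = 3.8e26   # total power output of the Sun, in watts
world_power_use_w = 2e13      # current human primary power use, roughly 20 terawatts

print(solar_luminosity_w / world_power_use_w)  # ~1.9e13, i.e. on the order of 10^13 times today's use
```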
Does this mean that effective altruists should prioritise building a Dyson sphere? There are at least three objections:
- According to some ethical theories (person-affecting views, certain brands of suffering-focused ethics) it may not be desirable to build a Dyson sphere.
- It is not clear whether existing technologies can be improved incrementally in such a way that you end up with a Dyson sphere. Maybe you start with space tourism, then hotels in orbit, then giant solar plants in space, and so on. It could even be the case that each intermediate step is profitable, so that market forces lead to a Dyson sphere without the EA movement spending any resources.
- If effective altruism becomes too closely associated with speculative ideas, this could hamper the growth of the movement.
Please do not misunderstand me. I am very sympathetic to your proposal, but the difficulties should not be underestimated, and much more research is necessary before one can say with sufficient certainty that the EA movement as a whole should prioritise some kind of high-hanging fruit.
Thank you for sharing your thoughts. What do you think of the following scenario?
In world A the risk of an existential catastrophe is fairly low and most currently existing people are happy.
In world B the existential risk is slightly lower. In expectation, 100 billion additional people (compared to A) will live in the far future, and their lives will be better than those of people today. However, this reduction in risk is so costly that most currently existing people have miserable lives.
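To make the comparison explicit, here is a toy total-utilitarian calculation (all welfare numbers are stipulated; only the 100 billion figure comes from the scenario above):

```python
# Stipulated numbers for a total-utilitarian comparison of worlds A and B;
# only the 100 billion figure is taken from the scenario described above.
current_people = 8e9           # people alive today
extra_future_people = 100e9    # additional far-future people, world B only (in expectation)

w_happy, w_miserable, w_better = 1.0, -0.5, 1.5   # average lifetime welfare per person

value_A = current_people * w_happy
value_B = current_people * w_miserable + extra_future_people * w_better

print(value_A, value_B)  # 8e9 vs 1.46e11 -> B comes out far ahead despite misery for everyone alive now
```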
Your theory probably favours option B. Is this intended?
Hi,
maybe you will find this overview of longtermism interesting, if you have not already come across it:
Hello! For as long as I can remember, I have been interested in the long-term future and have asked myself whether there is any way to steer the future of humankind in a positive direction. Every once in a while I have searched the internet for a community of like-minded people. A few months ago I discovered that many effective altruists are interested in longtermism.
Since then, I have often looked at this forum and have read ‘The Precipice’ by Toby Ord. I am not sure whether I agree with every belief that is common among EAs. Nevertheless, I think that we can agree on many things.
My highest priorities are avoiding existential risks and improving decision making. I also think about the consequences of technological stagnation, and about whether there are events far in the future that can only be influenced positively if we start working on them soon. At the moment my time is very constrained, but I hope that I will be able to participate in the discussion.
Thank you for your detailed answer. I expect that other people here have similar questions in mind. Therefore, it is nice to see your arguments written up.