Hi,
maybe you will find this overview of longtermism interesting, if you have not already come across it:
Thank you for sharing your thoughts. What do you think of the following scenario?
In world A, the risk of an existential catastrophe is fairly low and most currently existing people are happy.
In world B, the existential risk is slightly lower. In expectation, 100 billion additional people (compared to A) will live in the far future, and their lives will be better than those of people today. However, this reduction of risk is so costly that most currently existing people have miserable lives.
Your theory probably favours world B. Is this intended?
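To make the comparison concrete, here is a minimal back-of-envelope sketch; every welfare number in it is an illustrative assumption of mine, not something taken from your theory.

```python
# Illustrative totals for worlds A and B under simple total utilitarianism.
# All numbers are made-up assumptions chosen only to mirror the scenario above.
current_population = 8e9      # people alive today (assumption)
extra_future_people = 100e9   # additional far-future people in world B

# World A: low existential risk, current people are happy (welfare 1.0 each).
welfare_A = current_population * 1.0

# World B: slightly lower risk, 100 billion extra future people with better
# lives (say welfare 1.2 each), but most current people are miserable (-0.5).
welfare_B = current_population * (-0.5) + extra_future_people * 1.2

print(f"Total welfare, world A: {welfare_A:.2e}")   # 8.00e+09
print(f"Total welfare, world B: {welfare_B:.2e}")   # 1.16e+11
# Under these assumptions a total view prefers world B despite the misery of
# currently existing people, which is what the question above is probing.
```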
I had similar thoughts, too. My scenario was that at a certain point in the future all technologies that are easy to build will have been discovered, and that you need multi-generational projects to develop further technologies. Just to name an example, you can think of a Dyson sphere. If the sun were enclosed by a Dyson sphere, each individual would have much more energy available, or there would be enough room for many additional individuals. Obviously, you need a lot of money before you get the first non-zero payoff, and the potential payoff could be large.
Does this mean that effective altruists should prioritise building a Dyson sphere? There are at least three objections:
According to some ethical theories (person-affecting views, certain brands of suffering-focused ethics) it may not be desirable to build a Dyson sphere.
It is not clear whether it is possible to improve existing technologies piecewise such that you obtain a Dyson sphere in the end. Maybe you start with space tourism, then hotels in orbit, then giant solar plants in space, etc. It could even be the case that each intermediate step is profitable, such that market forces lead to a Dyson sphere without the EA movement spending resources.
If effective altruism becomes too closely associated with speculative ideas, this could harm the growth of the movement.
Please do not misunderstand me. I am very sympathetic towards your proposal, but the difficulties should not be underestimated, and much more research is necessary before one can say with sufficient certainty that the EA movement as a whole should prioritise some kind of high-hanging fruit.
You mention that the ability to create digital people could lead to dystopian outcomes or a Malthusian race to the bottom. In my humble opinion, bad outcomes can only be avoided if there is a world government that monitors what happens on every computer capable of running digital people. Of course, such a powerful government is a risk of its own.
Moreover, I think that a benevolent world government can be realised only several centuries in the future, while mind uploading could be possible by the end of this century. Therefore I believe that bad outcomes are much more likely than good ones. I would be glad to hear whether you have some arguments for why this line of reasoning could be wrong.
Maybe you are interested in the following paper, which deals with questions similar to yours:
How would you answer the following arguments?
Existential risk reduction is much more important than life extension, since it is possible to solve aging a few generations later, whereas humankind's potential, which could be enormous, is lost after an extinction event.
From a utilitarian perspective it does not matter whether there are ten generations of people living 70 years or one generation of people living 700 years, as long as they are happy. Therefore the moral value of life extension is neutral.
I am not wholly convinced of the second argument myself, but I do not see where exactly the logic goes wrong. Moreover, I want to play the devil's advocate, and I am curious about your answer.
Thank you for your answer and for the links to the other forum posts.
Thank you for your detailed answer. I expect that other people here have similar questions in mind. Therefore, it is nice to see your arguments written up.
I think that the case for longtermism gets stronger if you consider truly irreversible catastrophic risks, for example human extinction. Let's say that there is a 10% chance of the extinction of humankind. Suppose you suggest some policy that reduces this risk by 2 percentage points, but introduces a new extinction risk with a probability of 1%. Then it would be wise to enact this policy.
This kind of reasoning would probably be wrong if you had a 2% chance of a very good outcome, such as unlimited cheap energy, but an additional extinction risk of 1%.
Moreover, you cannot argue that everything will be OK several thousand years in the future if humankind is eradicated instead of “just” reduced to a much smaller population size.
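To put rough numbers on the two cases above, here is a minimal sketch; the payoff values are purely hypothetical assumptions of mine, chosen only to illustrate the asymmetry.

```python
# Case 1: net extinction risk after the proposed policy.
baseline_risk = 0.10
risk_after_policy = baseline_risk - 0.02 + 0.01   # 2 pp removed, 1 pp added
print(f"Net extinction risk: {risk_after_policy:.0%}")  # 9%, a net reduction

# Case 2: a 2% chance of a very good outcome bought with 1% extra extinction
# risk. If extinction forfeits a vastly larger future value, the trade can be
# negative. Both values below are arbitrary assumptions for illustration.
value_good_outcome = 1.0       # e.g. unlimited cheap energy
value_of_the_future = 1000.0   # value lost forever if humankind goes extinct
expected_change = 0.02 * value_good_outcome - 0.01 * value_of_the_future
print(f"Expected change in value: {expected_change:+.2f}")  # -9.98, so do not enact
```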
Your forum and your blog post contain many interesting thoughts, and I think that the role of high variance in longtermism is indeed underexplored. Nevertheless, I think that even if everything you have written is correct, it would still be sensible to limit global warming and to address extinction risks.
Thank you very much for sharing your paper. I have heard somewhere that thorium reactors could be a big deal against climate change. The advantages would be that there are greater thorium reserves than uranium reserves and that you cannot use thorium to build nuclear weapons. Do you have an opinion on whether the technology can be developed fast enough and deployed worldwide?
I think that it is possible that whole brain emulation (WBE) will be developed before AGI and that there are s-risks associated with WBE. It seems to me that most people in the s-risk community work on AI risks.
Do you know of any research that deals specifically with the prevention of s-risks from WBE? Since an emulated mind should resemble the original person, it should be difficult to tweak the code of the emulation such that extreme suffering is impossible. Although this may work for AGI, you probably need a different strategy for emulated minds.
I think that it is not possible to delay technological progress if there are strong near-term and/or egoistic reasons to accelerate the development of new technologies.
As an example, let us assume that it is possible to stop biological aging within a timeframe of 100 years. Of course, you can argue that this is an irreversible change, which may or may not be good for humankind's long-term future. But I do not think that it is realistic to say "Let's fund Alzheimer's research and senolytics, but everything that prolongs life expectancy beyond 120 years will be forbidden for millennia, until we have figured out whether we want a society of ageless people."
On the other hand my argument does not rule out that it is possible to delay technologies which are very expensive to develop and which have no clear value from an egoistic point of view.
There is a short piece on longtermism in Spiegel Online, which is probably the biggest news site in Germany:
Longtermism: Was ist das—Rettung oder Gefahr? - Kolumne—DER SPIEGEL (English: "Longtermism: what is it, salvation or danger?")
Google Translate:
Longtermism: Was ist das—Rettung oder Gefahr? - Kolumne—DER SPIEGEL (www-spiegel-de.translate.goog)
As far as I know, this is the first time that longtermism has been mentioned in a major German news outlet. The author mentions some key ideas and acknowledges that short-term thinking is a big problem in society, but he is rather critical of the longtermist movement. For example, he thinks that climate change is neglected within longtermism, and he cites Phil Torres's article on Aeon.
I probably will not find the time to comment on each of the points in the article, and I do not know if this would be the most productive thing to do, but maybe some of you will find the article interesting.
I agree with Linch's comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important value. This idea can be justified, since there are many more animals than humans.
Let us suppose furthermore that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this point, since future humans might abolish wild animal suffering, or, in the bad case, take wild animals with them when they colonize the stars and thus extend wild animal suffering.] Nevertheless, let us assume that we cannot have any impact on animals in the far future.
In my opinion, the most logical thing would be to focus on the things that we can change (x-risks, animal suffering today etc.) and to develop a stoic attitude towards the things we cannot change.
Thank you for writing this piece! I think that there should be a serious discussion about whether crypto is net positive or negative for the world.
In my opinion, there are a few more ways in which crypto could contribute to existential risk. Since you can accept donations in Monero, it is much easier to make a living by spreading dangerous ideologies (human extinction is a worthy goal, political measures against existential risk are totalitarian, etc.). Of course, you can also support an atheist blogger in Iran or a whistleblower in China with crypto, but it is very hard to tell whether the advantages or the disadvantages weigh more.
Moreover, crypto can be used to fund more dangerous things than "just" artificial pathogens. Think of AGIs that are built for a criminal purpose, imperfect mind uploads that suffer, perfect mind uploads that are built with the intention to torture them, underground companies that avoid AI regulation in order to cut costs, etc.
These scenarios do not prove that cryptocurrencies are net negative, especially since it may be possible to build DAOs that will solve some coordination problems. Nevertheless, I would be happy if more smart people were thinking hard about these issues.
Thank you for writing this post. I want to point out that your conclusions are highly dependent on your ethical and empirical assumptions. Here are some thoughts about what could change your conclusion:
If you donate to the top charities that are recommended by Founders Pledge, you can probably do much better than $30/ton. I have not been able to find the precise numbers quickly, but if I remember correctly, $1/ton is possible under reasonable assumptions. This would change your average estimate to $25,000 per life saved.
Let us assume that the maximal number of happy human beings that could live on Earth is reduced by 1 billion through rising sea levels, loss of agricultural land, etc. Let us further assume that these consequences of global warming persist for 100,000 years and that there is a probability of 1% that no game-changing technology, such as advanced geo-engineering, will be developed. This would mean that 10^12 QALYs are lost in expectation, and the effectiveness of a dollar would rise accordingly (the arithmetic is written out in the sketch after these points). Of course, this argument relies on your rate of temporal discounting.
Climate change could also increase other existential risks. For example, there could be a war over resources fought with nuclear weapons, synthetic pathogens or malevolent AIs.
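Here is the back-of-envelope calculation behind the 10^12 figure as a quick sketch; the inputs are just the assumptions stated above.

```python
# Back-of-envelope for the expected QALY loss sketched above.
lost_capacity = 1e9     # fewer happy people Earth can support (assumption)
duration_years = 1e5    # how long the damage persists (assumption)
p_no_fix = 0.01         # probability that no game-changing technology appears

expected_qalys_lost = lost_capacity * duration_years * p_no_fix
print(f"Expected QALYs lost: {expected_qalys_lost:.0e}")  # 1e+12
```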
The message I want to send is not that your analysis is wrong, but that evaluating longtermist interventions is a huge mess since different reasonable assumptions lead to wildly diverging answers.
In my opinion “the most controversial billionaire” is either Peter Thiel or Donald Trump. Otherwise, I agree with what you have written.
I agree strongly with what you have written, especially since, in my opinion, it is unlikely that there will be a liberal and/or pro-Western government in Russia even if Putin is replaced.
Do you have any suggestion as to what an average person in a Western country can do? Of course, you can write to your representative that the borders should be opened for Russian emigrants. Unfortunately, I do not know if this is really effective, since politicians probably get tons of mail.
Hello! As long as I can remember, I have been interested in the long-term future and have asked myself whether there is any possibility to steer the future of humankind in a positive direction. Every once in a while I searched the internet for a community of like-minded people. A few months ago I discovered that many effective altruists are interested in longtermism.
Since then, I often take a look at this forum and have read ‘The Precipice’ by Toby Ord. I am not quite sure if I agree with every belief that is common among EAs. Nevertheless, I think that we can agree on many things.
My highest priorities are avoiding existential risks and improving decision making. Moreover, I think about the consequences of technological stagnation and about the question of whether there are possible events far in the future that can only be influenced positively if we start working soon. At the moment my time is very constrained, but I hope that I will be able to participate in the discussion.