Potential ways around this that come to mind:
Good ideas. I have a few more,
Have a feature that allows people to charge fees to people who submit work. This would potentially compensate the arbitrator who would have to review the work, and would discourage people from submitting bad work in the hopes that they can fool people into awarding them the bounty.
Instead of awarding the bounty to whoever gives a summary/investigation, award the bounty to the person who provides the best summary/investigation, at the end of some time period. That way, if someone thinks that the current submissions are omitting important information, or are badly written, then they can take the prize for themselves by submitting a better one.
Similar to your first suggestion: have a feature that restricts people from submitting answers unless they pass certain basic criteria. E.g. “You aren’t eligible unless you are verified to have at least 50 karma on the Effective Altruist Forum or Lesswrong.” This would ensure that only people from within the community can contribute to certain questions.
Use adversarial meta-bounties: at the end of a contest, offer a bounty to anyone who can convince the judge/arbitrator to change their mind on the decision they have made.
What is the likely market size for this platform?
I’m not sure, but I just opened a Metaculus question about this, and we should begin getting forecasts within a few days.
Eliezer Yudkowsky wrote a sequence on ethical injunctions where he argued why things like this were wrong (from his own, longtermist perspective).
And it feels terribly convenient for the longtermist to argue they are in the moral right while making no effort to counteract or at least not participate in what they recognize as moral wrongs.
This is only convenient for the longtermist if they do not have equivalently demanding obligations to the longterm. Otherwise we could turn it around and say that it’s “terribly convenient” for a shorttermist to ignore the longterm future too.
Regarding the section on estimating the probability of AI extinction, I think a useful framing is to focus on disjunctive scenarios where AI ends up being used. If we imagine a highly detailed scenario where a single artificial intelligence goes rogue, then of course these types of things will seem unlikely.
However, my guess is that AI will gradually become more capable and integrated into the world economy, and there won’t be a discrete point where we can say “now the AI was invented.” Over the broad course of history, we have witnessed numerous instances of populations displacing other populations, e.g. species displacing other species in ecosystems, and human populations displacing other human populations. If we think about AI as displacing humanity’s seat of power in this abstract way, then an AI takeover doesn’t seem implausible anymore; indeed, I find it quite likely in the long run.
A trip to Mars that brought back human passengers also has the chance of bringing back microbial Martian passengers. This could be an existential risk if microbes from Mars harm our biosphere in a severe and irreparable manner.
From Carl Sagan in 1973, “Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage—a Martian plague, the twist in the plot of H. G. Wells’ War of the Worlds, but in reverse.”
Note that the microbes would not need to have independently arisen on Mars. It could be that they were transported to Mars from Earth billions of years ago (or the reverse occurred). While this issue has been studied by some, my impression is that effective altruists have not looked into this issue as a potential source of existential risk.
A line of inquiry to launch could be to determine whether there are any historical parallels on Earth that could give us insight into whether a Mars-to-Earth contamination would be harmful. The introduction of an invasive species into some region loosely mirrors this scenario, but much tighter parallels might still exist.
Since Mars missions are planned for the 2030s, this risk could arrive earlier than essentially all the other existential risks that EAs normally talk about.
See this Wikipedia page for more information: https://en.wikipedia.org/wiki/Planetary_protection
I recommend the paper The Case for Strong Longtermism, as it covers and responds to many of these arguments in a precise philosophical framework.
It seems to me that there’s a background assumption among many global poverty EAs that human welfare has positive flow-through effects on basically everything else.
If this is true, is there a post that expands on this argument, or is it something left implicit?
I’ve since added a constraint into my innovation acceleration efforts, and now am basically focused on “asymmetric, wisdom-constrained innovation.”
I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than economic growth, but the two are very related). The idea is that fast innovation in some fields is preferable to fast innovation in others, and we should try to find which areas to speed up the most.
Growth will have flow-through effects on existential risk.
This makes sense as an assumption, but the post itself didn’t argue for this thesis at all.
If the argument was that the best way to help the longterm future is to minimize existential risk, and the best way to minimize existential risk is by increasing economic growth, then you’d expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making.
This is something very close to my personal view on what I’m working on.
Can you go into more detail? I’m also very interested in how increased economic growth impacts existential risk. This is an important question because it could determine the value of accelerating growth-inducing technologies such as AI and anti-aging.
I’m confused what type of EA would primarily be interested in strategies for increasing economic growth. Perhaps someone can help me understand this argument better.
The reason presented for why we should care about economic growth seemed to be a longtermist one. That is, economic growth has large payoffs in the long run, and if we care about future lives equally to current lives, then we should invest in growth. However, Nick Bostrom argued in 2003 that a longtermist utilitarian should primarily care about minimizing existential risk, rather than increasing economic growth. Therefore, accepting this post requires you to be a longtermist while simultaneously rejecting Bostrom’s argument. Am I correct in that reading? If so, what arguments are there for rejecting his thesis?
I have now posted as a comment on Lesswrong my summary of some recent economic forecasts and whether they are underestimating the impact of the coronavirus. You can help me by critiquing my analysis.
I suspect the reflection is going to be mostly used by our better and wiser selves on settling details/nuances within total (mostly hedonic) utilitarianism rather than discover (or select) some majorly different normative theory.
Is this a prediction, or is this what you want? If it’s a prediction, I’d love to hear your reasons why you think this would happen.
My own prediction is that this won’t happen. But I’d be happy to see some reasons why I am wrong.
I hold a few core ethical ideas that are extremely unpopular: the idea that we should treat the natural suffering of animals as a grave moral catastrophe, the idea that old age and involuntary death are the number one enemy of humanity, and the idea that we should treat so-called farm animals with a very high level of compassion.
Given the unpopularity of these ideas, you might be tempted to think that they are unpopular because they are exceptionally counterintuitive. But is that the case? Do you really need a modern education and philosophical training to understand them? Perhaps I shouldn’t blame people for not taking seriously things they lack the background to understand.
Yet I claim that these ideas are not actually counterintuitive: they are the kind of thing you would come up with on your own if you had not been conditioned by society to treat them as abnormal. A thoughtful 15-year-old who was somehow educated without human culture would have no trouble taking them seriously. Do you disagree? Let’s put my theory to the test.
To test my theory that caring about wild animal suffering, aging, and animal mistreatment is what you would care about if you were uncorrupted by our culture, we need look no further than the Bible.
It is known that the book of Genesis was written in ancient times, before anyone knew anything of modern philosophy, contemporary norms of debate, science, or advanced mathematics. The writers of Genesis described a perfect paradise, the one we fell from after we were corrupted. They didn’t know what really happened, of course, so they made something up. What is the perfect paradise that they imagined?
From Answers in Genesis, a creationist website,
Death is a sad reality that is ever present in our world, leaving behind tremendous pain and suffering. Tragically, many people shake a fist at God when faced with the loss of a loved one and are left without adequate answers from the church as to death’s existence. Unfortunately, an assumption has crept into the church which sees death as a natural part of our existence and as something that we have to put up with as opposed to it being an enemy
Since creationists believe that humans are responsible for all the evil in the world, they do not make the usual excuse that evil is natural and therefore necessary. They openly call death an enemy: something to be destroyed.
Both humans and animals were originally vegetarian, then death could not have been a part of God’s Creation. Even after the Fall the diet of Adam and Eve was vegetarian (Genesis 3:17–19). It was not until after the Flood that man was permitted to eat animals for food (Genesis 9:3). The Fall in Genesis 3 would best explain the origin of carnivorous animal behavior.
So in the garden, animals did not hurt one another. Humans did not hurt animals. But this article even goes further, and debunks the infamous “plants tho” objection to vegetarianism,
Plants neither feel pain nor die in the sense that animals and humans do as “Plants are never the subject of חָיָה ” (Gerleman 1997, p. 414). Plants are not described as “living creatures” as humans, land animals, and sea creature are (Genesis 1:20–21, 24 and 30; Genesis 2:7; Genesis 6:19–20 and Genesis 9:10–17), and the words that are used to describe their termination are more descriptive such as “wither” or “fade” (Psalm 37:2; 102:11; Isaiah 64:6).
In God’s perfect creation, the one invented by uneducated folks thousands of years ago, we can see that wild animal suffering did not exist, nor did death from old age, or mistreatment of animals.
In this article, I find something so close to my own morality that it strikes me a creationist, of all people, would write something so elegant,
Most animal rights groups start with an evolutionary view of mankind. They view us as the last to evolve (so far), as a blight on the earth, and the destroyers of pristine nature. Nature, they believe, is much better off without us, and we have no right to interfere with it. This is nature worship, which is a further fulfillment of the prophecy in Romans 1 in which the hearts of sinful man have traded worship of God for the worship of God’s creation.
But as people have noted for years, nature is “red in tooth and claw.” Nature is not some kind of perfect, pristine place.
Unfortunately, it continues
And why is this? Because mankind chose to sin against a holy God.
I contend it doesn’t really take a modern education to invent these ethical notions. The truly hard step is accepting that evil is bad even if you aren’t personally responsible.
Right, I wasn’t criticizing cause prioritization. I was criticizing the binary attitude people have towards anti-aging. Imagine if people dismissed AI safety research by saying, “It would be fruitless to ban AI research. We shouldn’t even try.” That’s what it often sounds like to me when people fail to think seriously about anti-aging research. They aren’t even considering the possibility that there are other things we could do.
Now look again at your bulleted list of “big” indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration make the impact we can have on them seem no more than a rounding error if compared to the impact we can have due to LEV (each year you bring LEV closer by saves 36,500,000 lives of 1000QALYS. This is a conservative estimate I made here.)
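As an aside, the quoted figure can be sanity-checked with simple arithmetic: 36,500,000 lives per year corresponds to roughly 100,000 age-related deaths per day, a commonly cited worldwide estimate. A minimal sketch (the per-day figure is my own assumption, not stated in the comment):

```python
# Sanity check of the quoted LEV estimate.
# Assumption (not from the comment): ~100,000 age-related deaths per day worldwide.
deaths_per_day = 100_000
days_per_year = 365
qalys_per_life = 1000  # the comment's assumed QALYs per life saved

lives_per_year = deaths_per_day * days_per_year
print(lives_per_year)                    # prints 36500000, matching the quoted figure
print(lives_per_year * qalys_per_life)   # total QALYs per year LEV is hastened: 36.5 billion
```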
This isn’t clear to me. In Hilary Greaves and William MacAskill’s paper on strong longtermism, they argue that unless what we do now impacts a critical lock-in period, then most of the stuff we do now will “wash out” and have a low impact on the future.
If a lock-in period never comes, then there’s no compelling reason to focus on indirect effects of anti-aging, and therefore I’d agree with you that these effects are small. However, if there is a lock-in period, then the lives saved directly by ending aging could be tiny compared to the lasting billion-year impact that shifting to a post-aging society would have.
What a strong long-termist should mainly care about are these indirect effects, not merely the lives saved.
Thanks for the bullet points and thoughtful inquiry!
I’ve taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew’s been working on.
I am very interested in a full post, as right now I think this area is quite neglected and important groundwork can be completed.
My guess is that most people who think about the effects of anti-aging research don’t think very seriously about it because they are either trying to come up with reasons to instantly dismiss it, or come up with reasons to instantly dismiss objections to it. As a result, most of the “results” we have about what would happen in a post-aging world come from two sides of a very polarized arena. This is not healthy epistemologically.
In wild animal suffering research, most people assume that there are only two possible interventions: destroy nature, or preserve nature. This sort of binary thinking infects discussions about wild animal suffering, as it prevents people from thinking seriously about the vast array of possible interventions that could make wild animal lives better. I think the same is true for anti-aging research.
Most people I’ve talked to seem to think that there’s only two positions you can take on anti-aging: we should throw our whole support behind medical biogerontology, or we should abandon it entirely and focus on other cause areas. This is crazy.
In reality, there are many ways that we can make a post-aging society better. If we correctly forecast the impacts to global inequality or whatever, and we’d prefer to have inequality go down in a post-aging world, then we can start talking about ways to mitigate such effects in the future. The idea that not talking about the issue or dismissing anti-aging is the best way to make these things go away is a super common reaction that I cannot understand.
Apart from technological stagnation, the other common worry people raise about life extension is cultural stagnation: entrenchment of inequality, extension of authoritarian regimes, aborted social/moral progress, et cetera.
I’m currently writing a post about this, because I see it as one of the most important variables affecting our evaluation of the long-term impact of anti-aging. I’ll bring forward arguments both for and against what I see as “value drift” slowed by ending aging.
Overall, I see no clear arguments for either side, but I currently think that the “slower moral progress isn’t that bad” position is more promising than it first appears. I’m actually really skeptical of many of the arguments that philosophers and laypeople have brought forward about the necessary function of moral progress brought about by generational death.
And as you mention, it’s unclear why we should expect better value drift when we have an aging population, given that there is evidence that the aging process itself makes people more prejudiced and closed-minded in a number of ways.
There are more ways, yes, but I think they’re individually much less likely than the ways in which they can get better, assuming they’re somewhat guided by reflection and reason.
Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who share the values of the current administration.)
I expect future generations, compared to people alive today, to be less religious
I agree with that.
This is also likely. However, I’m very worried about the idea that caring about farm animals doesn’t imply an anti-speciesist mindset. Most vegans aren’t concerned about wild animal suffering, and the primary justification that most vegans give for their veganism is from an exploitation framework (or environmentalist one) rather than a harm-reduction framework. This might not robustly transfer to future sentience.
less prejudiced generally, more impartial
This isn’t clear to me. From this BBC article, “Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [ie. aging] to the brain in late adulthood can lead to greater prejudice among older adults.” Furthermore, “prejudice” is pretty vague, and I think there are many ways that young people are prejudiced without even realizing it (though of course this applies to old people too).
more consequentialist, more welfarist
I don’t really see why we should expect this personally. Could you point to some trends that show that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.
because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views)
The second reason is a good one (I agree that when people stop eating meat they’ll care more about animals). The relative persuasiveness thing seems weak to me because I have a ton of moral views that I think are persuasive and yet don’t seem to be adopted by the general population. Why would we expect this to change?
I don’t expect them to be more suffering-focused (beyond what’s implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me.
It sounds like you are not as optimistic as I thought you were. Out of all the arguments you gave, I think the argument from moral circle expansion is the most convincing. I’m less sold on the idea that moral progress is driven by reason and reflection.
I also have a strong prior against positive moral progress relative to any individual parochial moral view given what looks like positive historical evidence against that view (the communists of the early 20th century probably thought that everyone would adopt their perspective by now; same for Hitler, alcohol prohibitionists, and many other movements).
Overall, I think there are no easy answers here and I could easily be wrong.
I’m a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense
Sure. There are a number of versions of moral anti-realism. It makes sense for some people to think that moral progress is a real thing. My own version of ethics says that morality doesn’t run that deep and that personal preferences are pretty arbitrary (though I do agree with some reflection).
In the same way, I think the views of future generations can end up better than my views will ever be.
Again, that makes sense. I personally don’t really share the same optimism as you.
So I don’t expect such views to be very common over the very long-term
One of the frameworks I propose in my essay that I’m writing is the perspective of value fragility. Across many independent axes, there are many more ways that your values can get worse than better. This is clear in the case of giving an artificial intelligence some utility function, but it could also (more weakly) be the case in deferring to future generations.
You point to idealized values. My hypothesis is that allowing everyone who currently lives to die and putting future generations in control is not a reliable idealization process. There are many ways that I am OK with deferring my values to someone else, but I don’t really understand how generational death is one of those.
By contrast, there are a multitude of human biases that make people have more rosy views about future generations than seems (to me) warranted by the evidence:
Status quo bias. People dying and leaving stuff to the next generations has been the natural process for millions of years. Why should we stop it now?
The relative values fallacy. This goes something like, “We can see that the historical trend is for values to become more like ours over time. Each generation has gotten more like us. Therefore future generations will be even more like us, and they’ll care about all the things I care about.”
Failure to appreciate the diversity of future outcomes. Robin Hanson talks about how people use a far view when talking about the future, which means that they ignore small details and tend to focus on one really broad abstract element that they expect to show up. In practice this means that people assume that because future generations will likely share our values along one axis (in your case, care for farm animals), they will also share our values along all axes.
Belief in the moral arc of the universe. Moral arcs play a large role in human psychology. Religions display them prominently in the idea of apocalypses where evil is defeated in the end. Philosophers have believed in a moral arc too, and since many of the supposed moral arcs contradict each other, it’s probably not a real thing. This is related to the just-world fallacy: you imagine how awful it would be if future generations were actually so horrible, so you just sort of pretend that bad outcomes aren’t possible.
I personally think that the moral circle expansion hypothesis is highly important as a counterargument, and I want more people to study this. I am very worried that people assume that moral progress will just happen automatically, almost like a spiritual force, because well… the biases I gave above.
Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come
This makes sense if you are referring to younger generations alive today, but I don’t see how you can know you are aligned with future generations that don’t exist yet.