That mostly seems to be semantics to me. There could be other things that we are currently “deficient” in and we could figure that out by doing cognitive enhancement research.
As far as I know, the term “cognitive enhancement” is often used in the sense that I used it here, e.g. relating to exercise (we are currently deficient in exercise compared to our ancestors), taking melatonin (we are deficient in melatonin compared to our ancestors), and so on...
Great to hear that several people are involved with making the grant decisions. I also want to stress that my post is not at all intended as a critique of the CBG programme.
I agree that there is more to movement building than local groups and that the comparison to AI safety was not on the right level.
I still stand by my main point and think that it deserves consideration:
My main point is that there is a certain set of movement building efforts for which the CEA community building grant programme seems to be the only option. This set includes local groups and national EA networks but also other things. Some common characteristics might be that these efforts are oriented towards the earlier stages of the movement building funnel (compared to say, EAG) and can be conducted by independent movement builders.
Ideally, there should be more diverse “official” funding for this set of movement building efforts. As things currently are, private funders should at least be aware that only one major official funding source exists.
(If students running student groups can get funded by the university, that is another funding source that I wasn’t aware of before).
Love the “Grants” section
cognitive enhancement research
We wrote a bit about a related topic in part 2.1 here:
In there, we also cite a few posts by people who have thought about similar issues before. Most notably, as so often, this post by Brian Tomasik:
How I see it:
Extinction risk reduction (and other types of “direct work”) affects all future generations similarly. If the most influential century is still to come, extinction risk reduction also affects the people alive during that century (by making sure they exist). Thus, extinction risk reduction has a “punting to future generations that live in hingey times” component. However, extinction risk reduction also affects all the unhingey future generations directly, and the effects are not primarily mediated through the people alive in the most influential centuries.
(Then, by definition, if ours is not a very hingey time, direct work is not a very promising strategy for punting. The effect on people alive during the “most influential times” has to be small by definition. If direct work did strongly enable the people living in the most influential century (e.g. by strongly increasing the chance that they come into existence), it would also enable many other generations a lot. This would imply that the present was quite hingey after all, in contradiction to the assumption that the present is unhingey.)
Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.
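To spell out the parenthetical contradiction argument a bit more formally, here is a rough sketch in my own made-up notation (none of these symbols appear in the original article):

```latex
% Sketch of the contradiction argument (notation is mine).
% B_g : benefit that direct work done now confers on future generation g
% H   : the most influential ("hingey") generation
% N   : number of future generations

% Uniformity assumption: extinction risk reduction raises every
% generation's chance of existing by roughly the same amount,
B_g \approx B \qquad \text{for all } g .

% If direct work strongly enabled generation H, then B_H = B would be
% large, so the total impact of acting now,
\sum_{g=1}^{N} B_g \approx N \cdot B ,

% would be large as well. But a large total impact of acting now just
% means the present is hingey, contradicting the assumption that it is not.
```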
I don’t have much to add, but I still wanted to say that I really liked this:
Great perspective; risk factors seem to be a really useful concept here
Very clearly written
These are all very good points. I agree that this part of the article is speculative, and you could easily come to a different conclusion.
Overall, I still think that this argument alone (part 1.2 of the article) points in the direction of extinction risk reduction being positive. Although the conclusion does depend on the “default level of welfare of sentient tools” that we are discussing in this thread, it more critically depends on whether future agents’ preferences will be aligned with ours.
But I never gave this argument (part 1.2) that much weight anyway. I think that the arguments later in that article (part 2 onwards, I listed them in my answer to Jacy’s comment) are more robust and thus more relevant. So maybe I somewhat disagree with your statement:
The expected value of the future could be extremely sensitive to beliefs about these sets (their sizes and average welfares). (And this could be a reason to prioritize moral circle expansion instead.)
To some degree this statement is, of course, true. The uncertainty gives some reason to deprioritize extinction risk reduction. But: The expected value of the future (with (post-) humanity) might be quite sensitive to these beliefs, but the expected value of extinction risk reduction efforts is not the same as the expected value of the future. You also need to consider what would happen if humanity goes extinct (non-human animals, S-risks by omission), non-extinction long-term effects of global catastrophes, option value,… (see my comments to Jacy). So the question of whether to prioritize moral circle expansion is maybe not extremely sensitive to “beliefs about these sets [of sentient tools]”.
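To make the distinction between the two expected values concrete, here is a toy decomposition in my own notation (not taken from the article):

```latex
% Toy decomposition (my notation, for illustration only).
% p   : reduction in extinction probability achieved by the effort
% V_H : expected value of the future with (post-)humanity
% V_E : expected value of the future after extinction
%       (non-human animals, s-risks by omission, ...)

EV(\text{effort}) = p \cdot \left( V_H - V_E \right)

% Beliefs about the sets of sentient tools mainly move V_H. The value of
% the effort also depends on V_E, which those beliefs leave untouched, so
% high sensitivity of V_H need not make EV(effort) equally sensitive.
```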
I have written up my thoughts on all these points in the article. Here are the links.
“The universe might already be filled with suffering and post-humans might do something against it.”
“Global catastrophes that don’t lead to extinction might have negative long-term effects”
“Other non-human animal civilizations might be worse”
The final paragraphs of each section usually contain a discussion of how relevant I think each argument is. All these sections also have some quantitative EV estimates (linked or in the footnotes).
But you probably saw that, since it is also explained in the abstract. So I am not sure what you mean when you say:
It’d be great if at some point you could write up discussion of those other arguments,
Are we talking about the same arguments?
Regarding your second point, just a few thoughts:

First of all, an important point is how you think values and morality work. If two-thirds of humanity, after thorough reflection, disagree with your values, does this give you a reason to become less certain about your values as well? Maybe adopt their values to a degree? …

Secondly, I am also uncertain how coherent/convergent human values will be. There seem to be good arguments for both sides; see e.g. this blog post by Paul Christiano (and the discussion with Brian Tomasik in the comments of that post): https://rationalaltruist.com/2013/06/13/against-moral-advocacy/

Third: In a situation like the one you described above, at least there would be huge room for compromise/gains from trade/… So if future humanity were split into the three factions you suggested, they would not necessarily fight a war until only one faction remains that can then tile the universe with their preferred version. Indeed, they probably would not, as cooperation seems better for everyone in expectation.
By “in expectation random”, do you mean 0 in expectation?
Yes, that’s what we meant.
I am not sure I understand your argument. You seem to say the following:
- Post-humans will put “sentient tools” into harsher conditions than the ones the tools were optimized for.
- If “sentient tools” are put into these conditions, their welfare decreases (compared with the situations they were optimized for).
My answer: The complete “side-effects” (in the meaning of the article) on sentient tools comprise both bringing them into existence and using them. The relevant question seems to be whether this package is positive or negative compared to the counterfactual (no sentient tools). Humanity might bring sentient tools into conditions that are worse for the tools than the conditions they were optimized for. Even then, these conditions might still be positive overall.
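Put schematically (again in my own notation, not the article’s):

```latex
% w_opt : a tool's welfare in the conditions it was optimized for
% w_act : its welfare in the harsher conditions it is actually used in
% Nonexistence (the counterfactual of no sentient tools) is set to 0.

% Your two premises grant:
w_{\text{act}} < w_{\text{opt}} .

% But the question for the "side-effects" package is the comparison
% with nonexistence, not with the optimized-for conditions:
w_{\text{act}} > 0 \;\Rightarrow\; \text{the package is still net positive.}
```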
Apart from that, I am not sure if the two assumptions listed as bullet points above will actually hold for the majority of “sentient tools”. I think that we know very little about the way tools will be created and used in the far future, which was one reason for assuming “zero in expectation” side-effects.
I have seen and read your post. It was published after my internal “Oh my god, I really, really need to stop reading and integrating even more sources, the article is already way too long”-deadline, so I don’t refer to it in the article.
In general, I am more confident about the expected value of extinction risk reduction being positive than about extinction risk reduction actually being the best thing to work on. It might well be that e.g. moral circle expansion is more promising, even if we have good reasons to believe that extinction risk reduction is positive.
I do think your “very unlikely that [human descendants] would see value exactly where we see disvalue” argument is a viable one, but I think it’s just one of many considerations, and my current impression of the evidence is that it’s outweighed.
I personally don’t think that this argument is very strong on its own. But I think there are additional strong arguments (in descending order of relevance):
“Other non-human animal civilizations might be worse”
Curious how you’re thinking about efforts that are intended to reduce x-risk but instead end up increasing it.
Uhm… Seems bad? :-)
Thanks for the comment. We added a navigable table of contents.
Hi David, thanks for your comments.
1) This seems not to engage with the questions about short-term versus long-term prioritization and discount rates. I’d think that the implicit assumptions should be made clearer.
Yes, the article does not deal with considerations for and against caring about the long-term. This is discussed elsewhere. Instead, the article assumes that we care about the long-term (e.g. that we don’t discount the value of future lives strongly), and analyses what implications follow from that view.
We tried to make that explicit. E.g., the first point under “Moral assumptions” reads:
Throughout this article, we base our considerations on two assumptions:
1. That it morally matters what happens in the billions of years to come. From this very long-term view, making sure the future plays out well is a primary moral concern.
2) It doesn’t seem obvious to me that, given the universalist assumptions about the value of animal or other non-human species, the long term future is affected nearly as much by the presence or absence of humans. Depending on uncertainties about the Fermi hypothesis and the viability of non-human animals developing sentience over long time frames, this might greatly matter.
I think this point matters. Part 2.1 of the article deals with the implications of potential future non-human animal civilizations and extraterrestrials. I think the implications are somewhat complicated and depend quite a bit on your values, so I won’t try to summarize them here.
4) S-risks are plausibly more likely if moral development is outstripped by growth in technological power over relatively short time frames, and existential catastrophe has a comparatively limited downside.
We don’t try to argue for increasing the speed of technological progress.
Apart from that, it is not clear to me that extinction has a “comparatively limited downside” (compared to S-risks, you probably mean). It, of course, depends on your moral values. But even from a suffering-focused perspective, it may well be that we would, with more moral and empirical insight, come to realize that the universe is already filled with suffering. I personally would not be surprised if “S-risks by omission” (*) weighed pretty heavily in the overall calculus. This topic is discussed in part 2.2.
I don’t have anything useful to say regarding your point 3).
(*) term coined by Lukas Gloor, I think.
I think that also depends on the country. In my experience, references don’t play as important a role in Germany as they do in the UK/US. In particular, the practice of referees submitting their reference directly to the university is uncommon in Germany. Usually, referees write a letter of reference for the applicant, who can then hand it in themselves. Also, having references tailored to the specific application (which seems to be expected in the UK/US) is not common in Germany.
So, yes, I am also hesitant to ask my academic referees too often. If I knew that they would be contacted early in application processes, I would certainly apply for fewer positions. For example, I might not apply for positions that I probably won’t get but that would be great if they worked out.