I’m not completely sure if I understand what you are looking for, but:
I wrote down some musings about this (including a few relevant links) in appendix 2 here.
I think I overheard Toby saying that the footnotes and appendices were dropped from the audiobook, and that, yes, the footnotes and appendices (which make up 50% of the book) should be the most interesting part for people already familiar with the X-risk literature.
No
So this is my very personal impression. I might be super wrong about this, which is why I asked this question. Also, I remember liking the main EA Facebook group quite a bit in the past, so maybe I just can't properly relate to how useful the group is for people who are newer to EA thinking.
Currently, I avoid reading the EA Facebook group the same way I avoid reading comments under YouTube videos. Reading the group makes me angry and sad because of the ignorance and aggression displayed in the posts, and especially in the comments. I think many comments do not meet the bar for intellectual quality or epistemic standards that we should want associated with EA. That's really no surprise; online discourse is not particularly known for high quality.
Overall, I feel that the main EA Facebook group doesn't reflect well on the EA movement. I haven't thought much about this, but I think I would prefer stronger moderation for quality.
[Question] How do you feel about the main EA Facebook group?
I first thought that “counterproposal passed” meant that a proposal very different from the one you suggested passed the ballot. But skimming the links, it seems that the counterproposals were actually similar to your original proposals?
Thanks for bringing this to my attention; I have modified the title and the relevant part of the post.
I didn’t have time to check in with CEA before writing the post, so I had to choose between writing the post as is or not writing it at all. That’s why the first line says (in italics): “I’m not entirely sure that there is really no other official source for local group funding. Please correct me in the comments.”
I could have predicted that this would not be enough to keep people from walking away with a false impression, so I think I should have chosen a different headline.
That mostly seems like semantics to me. There could be other things that we are currently “deficient” in, and we could figure that out by doing cognitive enhancement research.
As far as I know, the term “cognitive enhancement” is often used in the sense I used it here, e.g. in relation to exercise (we are currently deficient in exercise compared to our ancestors), taking melatonin (we are deficient in melatonin compared to our ancestors), and so on.
Great to hear that several people are involved in making the grant decisions. I also want to stress that my post is not at all intended as a critique of the CBG programme.
I agree that there is more to movement building than local groups and that the comparison to AI safety was not on the right level.
I still stand by my main point and think that it deserves consideration:
My main point is that there is a certain set of movement building efforts for which the CEA community building grant programme seems to be the only option. This set includes local groups and national EA networks, but also other things. Some common characteristics might be that these efforts are oriented towards the earlier stages of the movement-building funnel (compared to, say, EAG) and can be conducted by independent movement builders.
Ideally, there would be more diverse “official” funding for this set of movement building efforts. As things currently stand, private funders should at least be aware that only one major official funding source exists.
(If students running student groups can get funded by the university, that is another funding source that I wasn’t aware of before).
Only a few people decide on funding for community builders worldwide
The evolutionary argument against cognitive enhancement research is weak
Love the “Grants” section
We wrote a bit about a related topic in part 2.1 here: https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive
In there, we also cite a few posts by people who have thought about similar issues before. Most notably, as so often, this post by Brian Tomasik:
How I see it:
Extinction risk reduction (and other types of “direct work”) affects all future generations similarly. If the most influential century is still to come, extinction risk reduction also affects the people alive during that century (by making sure they exist). Thus, extinction risk reduction has a “punting to future generations that live in hingey times” component. However, extinction risk reduction also affects all the unhingey future generations directly, and these effects are not primarily mediated through the people alive in the most influential centuries.
(Then, by definition, if ours is not a very hingey time, direct work is not a very promising strategy for punting. The effect on people alive during the “most influential times” has to be small by definition. If direct work did strongly enable the people living in the most influential century (e.g. by strongly increasing the chance that they come into existence), it would also enable many other generations a lot. This would imply that the present was quite hingey after all, contradicting the assumption that the present is unhingey.)
Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.
I don’t have much to add, but I still wanted to say that I really liked this:
- Great perspective; risk factors seem to be a really useful concept here
- Very clearly written
These are all very good points. I agree that this part of the article is speculative, and you could easily come to a different conclusion.
Overall, I still think that this argument alone (part 1.2 of the article) points in the direction of extinction risk reduction being positive. Although the conclusion does depend on the “default level of welfare of sentient tools” that we are discussing in this thread, it more critically depends on whether future agents’ preferences will be aligned with ours.
But I never gave this argument (part 1.2) that much weight anyway. I think that the arguments later in that article (part 2 onwards, I listed them in my answer to Jacy’s comment) are more robust and thus more relevant. So maybe I somewhat disagree with your statement:
The expected value of the future could be extremely sensitive to beliefs about these sets (their sizes and average welfares). (And this could be a reason to prioritize moral circle expansion instead.)
To some degree this statement is, of course, true. The uncertainty gives some reason to deprioritize extinction risk reduction. But the expected value of the future (with (post-)humanity) might be quite sensitive to these beliefs, while the expected value of extinction risk reduction efforts is not the same as the expected value of the future. You also need to consider what would happen if humanity went extinct (non-human animals, S-risks by omission), the non-extinction long-term effects of global catastrophes, option value, … (see my comments to Jacy). So the question of whether to prioritize moral circle expansion is maybe not extremely sensitive to “beliefs about these sets [of sentient tools]”.
Hi Michael, I wrote this two years ago and have not worked in this area since. To give a really good answer, I’d probably have to spend several hours rereading the text. But from memory, I think most of the arguments don’t rest on the assumption that future agents will be total utilitarians. In particular, none of the arguments requires the assumption that future agents will create lots of high-welfare beings. So I guess the same conclusions follow if you assume deontologist future agents, or ones with an asymmetric population ethics. This is particularly true if you think that your idealised, reflected preferences would be close to those of the future agents.