You have impressive outputs, Jaime!
I would like to add that Summer Research Fellowships/Internships at non-EA-branded organisations may be more valuable than those at EA-branded ones. There seem to be some very high-quality programs out there, although I haven’t looked for them thoroughly. Reasons why these could be better:
More dedicated training and supervision. EA-branded organizations are young and often run these programs without much prior experience.
Unique network. There are benefits, to you personally and to the EA community (!), of building good professional networks outside of EA. These are especially valuable if you have academic ambitions, because EA research institutes cannot currently support PhDs, nor would they be as well regarded as the top institutes in a field.
These would be especially beneficial to people who have academic ambitions, people who are not in the top 20% of ‘self-directedness’, and people who are relatively secure in their EA motivation (which limits the risk of value drift).
The drawbacks of researching at a non-EA institute for a summer would be limited freedom and fewer EA-minded people around. (Although it’s probably a good opportunity to learn to work with non-EAs while ‘remaining EA’ - a valuable and possibly rare skill!)
Must is a strong word, so that’s one reason I don’t think it’s true. What do you mean by “civilization goes extinct”? Because
1) There might be complex societies beyond Earth
2) New complex societies made up of intelligent beings can arise even after Homo sapiens goes extinct
Upvote for using graphics to elucidate discussion on the Forum. Haven’t seen it often and it’s very helpful!
I’d like to flag that I would really like to see a more elegant term than ‘hingeyness’ become standard for referring to the ease of influence in different periods.
Some ideas: “Leverage”, “temporal leverage”, “path-dependence”, “moment” (in relation to the concept from physics), “path-criticality” (meaning how many paths are closed off by decisions in the current time). Anyone else with ideas?
Exactly! It reminds me a lot of the Polymath Project, in which maths problems were solved collaboratively. I really wish EA made more use of this—I think Will’s recent decision to post his ideas to the Forum is turning out to be an excellent one.
If anyone decides to work on this, please feel free to contact me! There is a small but non-negligible probability I’ll work on this question, and if I don’t, I’d be happy to help out with some contacts I made.
This is a very cool question I had hoped to think about more. Here are the 5 I came up with (in a draft that I’m unlikely to finish for various reasons), but without further exploration of what they would look like:
1. Collapse.
No one identifies as part of the EA community anymore, or most people identify themselves as ‘[cause area/faction] first, EA second or not at all’.
2. Decline.
The size and quality of the group of people that identify as community members reduces by more than 50%.
3. Plateau/stunted growth.
Influence and quality stagnate (i.e. size and quality change by −50% to +100%).
4. Harmless flawed realization.
EA becomes influential without really making a decidedly positive impact.
5. Harmful flawed realization.
EA becomes influential and has a significantly negative impact.
I also asked Will MacAskill about “x-risks to EA”; he said:
The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism.)
A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate).
Fizzle—it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion.
Anyway, if you want to continue with this, you could pick your own list (or a combination of risks with input from the community) and run a poll asking for people’s probability estimates for each risk.
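If you did run such a poll, you would need some way to pool the individual probability estimates. A minimal sketch of one common aggregation method (the geometric mean of odds, with entirely made-up example numbers) could look like:

```python
import math

# Hypothetical poll results: each entry holds respondents' probability
# estimates for one "x-risk to EA" (numbers are made up for illustration).
poll = {
    "toxic brand/culture": [0.10, 0.25, 0.05],
    "PR disaster":         [0.20, 0.15, 0.30],
    "fizzle":              [0.40, 0.35, 0.50],
}

def pooled(probs):
    """Aggregate probability estimates via the geometric mean of odds."""
    odds = [p / (1 - p) for p in probs]
    gm_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return gm_odds / (1 + gm_odds)  # convert pooled odds back to a probability

for risk, probs in poll.items():
    print(f"{risk}: {pooled(probs):.2f}")
```

(One could just as well use a simple median; the geometric mean of odds is just one option that is less sensitive to a single extreme answer than an arithmetic mean of probabilities.)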
Hmm, I find this a surprising result, even though it seems roughly in line with the outcomes of EAGxNetherlands 2018.
I really hope EAGx conferences will continue to be organized (in Europe and elsewhere), perhaps in an improved form. (Fewer talks, more workshops maybe? More coaching?) I am afraid these events will be cancelled when impact is i) hard to see directly, and ii) heavily skewed. For example, few people made big changes after EAGxNetherlands, but the seed was planted for the Happier Lives Institute, which might not have formed otherwise.
Hi Carl, has there been any progress on this end in the past year? I’d be very interested to see x-risk-relevant forecasts (I’m currently working on a related project).
Shouldn’t the 1% be “1000 or more”?
Why do his beliefs imply extremely high confidence? Why do the higher estimates from other people not imply that? I’m curious what’s going on here epistemologically.
I think we are currently very uncertain about this, so there is a lot of value of information to be gained from supporting the evaluation of interventions and charities in this space, as the Happier Lives Institute is doing. If you additionally believe they will do good research, supporting them is probably more impactful than supporting charities currently doing direct work.
(Disclaimer: I was involved in (only) the very early stages of setting up this institute.)
If you believe current mental health charities are ineffective but that mental health charities can be more effective, you might also want to investigate supporting the founding of a new mental health charity, though this is harder to do as a small donor. You could potentially support Charity Entrepreneurship if they decide to focus on mental health.
Me too! I’m quite surprised by many of them! (I don’t necessarily disagree, I’m just surprised.)
Do you think EA has a problem of “hero worship”? (I.e. where the opinions of certain people, you included, automatically get much more support instead of people thinking for themselves.) If yes, what can the “worshipped” people do about it?
I find this also interesting to answer myself, although curious to see Will’s answer.
I think casual EAs generally have less nuanced views than those who think about the issues full-time (obviously..). For example, the community’s certainty about the relative importance of AI compared to other x-risks is probably overstated. In general, I find ‘casual EAs’ to have an overly simplistic view of how the world works, while engaging more with these topics brings the complexity of the issues to the surface. In a complex world, precise, quantitative models are more likely to be wrong, and it’s worth pursuing a broader set of actions. I have seen multiple smart, motivated ‘casual EAs’ basically give up on EA because “they couldn’t see themselves being an AI safety researcher”. (I’d love to see a list like “20 things to do for the long-term future without being an AI safety researcher”.)
I think simplification is definitely useful for getting a basic grasp of issues and making headway. In fact, this “ignorance of complexity” may actually be a big strength of EA, because people don’t get overwhelmed and demotivated by the daunting amount of complexity, and actually try to tackle issues that most of the world ignores because they’re too big. However, EAs should expect things to become more complex, more nuanced, and less clear as they learn more about a topic.
What is your opinion on Extinction Rebellion? (Asking because they seem concerned about future generations, able to draw attention, and (somewhat) open to changing their mind.)
What are your top 3 “existential risks” to EA? (I.e. risks that would permanently destroy or curtail the potential of Effective Altruism—both the community and the ideas.)
What has been the biggest benefit to your well-being since getting into EA? What would you advise the many EAs who struggle with staying happy or not burning out? (Our community seems to have a higher-than-average rate of mental illness.)
Do you have a coach? Why, or why not? (I feel coaches really help with things like staying focused on a few topics and keeping one accountable to those goals.)
What do you think is the biggest professional mistake you made? (Of the ones you can share.) What is the biggest single professional ‘right choice’ you made? [Side-note: it’s interesting that we don’t have a word for the opposite of a mistake, just like we don’t have one for the opposite of a catastrophe..]