Yeah so the first point is what I’m referring to by timelines. And we should all also discount the risk of a particular hazard by the probability of achieving invulnerability.
SiebeRozendal
Eight high-level uncertainties about global catastrophic and existential risk
Not sure why only the initials are provided. For the sake of clarity to other readers, EY = Eliezer Yudkowsky.
Thanks for the elaborate response! Allow me to ask some follow-up questions, the topic is close to my heart :)
I expect that making relatively fewer grants will leave more capacity for trying things such as exploring different mechanisms of supporting community builders and different types of projects to fund. I expect this to increase the community’s collective understanding of how to do community building more than increasing the number of grants.
Am I right to take away from this that the EA CB Grants Programme is capacity-constrained? Because I believe this would be important for other funders. I’m afraid there is a dynamic where CB-efforts have trouble finding non-CEA funding because alternative funders believe CEA has got all the good opportunities covered. I believe we should in general be skeptical that a small set of funders leads to the efficient allocation of resources. The grants programme being capacity-constrained would be evidence towards there being impactful opportunities for other funders. How does the programme approach this coordination with other funders?
Relatedly, does CEA prefer to be a large part (>50%) or a smaller part of a community’s funding? Say a community-building effort raises ~1 FTE for 1 year among their own community, would this affect the likelihood of being funded by CEA?
I liked this post, but given the title I had expected a different one.
The post only describes the location of the projects, but not so much what they are doing. I think it would be very valuable to see which type of projects are getting funded. What are e.g. EA Oxford and EA Geneva doing that warrants more support relative to other projects?
I have the intuition that what they are primarily being funded for is more likely to be network-building (increasing the community’s connections to influential people, including making community members more influential) than community-building (a longer-term investment into tight networks that facilitate mutual support). I am not sure about how funding is actually distributed between these two types and what the optimal allocation would be though. Without more information it’s hard to discuss.
Hi Harri, I have two questions for you.
We think that there is a large amount of variance in the impact of individual grants that we’ve made.
What makes you believe this? What kind of criteria are used to evaluate and compare the impact of individual grants?
After evaluating the grants made over the course of 2018 we also think that we now have a better understanding of which kinds of grantmaking opportunities will be most impactful.
Could you elaborate on this? Which kinds of opportunities do you think will be most impactful? This seems highly valuable information for aspiring community builders.
Furthermore, community-building seems like a long-term project, so I am quite surprised about the decision to focus so much on just a few opportunities and the confidence in which type of projects are valuable. I would think that exploration is enormously valuable in such an early stage of our international community. Is this because you believe there are large potential downsides?
Yes, there are steps to mitigate it. But community building is by its very nature location-constrained. A tech firm can move to a particular hub. A community can not.
Furthermore, if I recall correctly, the VC landscape was not as efficient as it could be, and VCs were overreliant on their networks. Organizations like Y Combinator stepped into that market gap by being more approachable. This is a step that CB grantmakers can also take.
My understanding is that there is a blurry line between “community groups” and EA projects in general. And there do seem to be different approaches among groups.
The scale/importance of a problem is the maximal point that the curve meets on the y-axis—the higher up the y-axis you can go, the better it is. Neglectedness tells you where you are on the x-axis at present. The other factors that bear on tractability tell you the overall shape of the curve.
I think this is the core of the issue, and why we don’t need to talk about neglectedness as a factor separate from tractability! I have found this a useful and understandable visual interpretation of the ITN-framework.
One thing I worry about with the ITN-framework is that it seems to assume smooth curves: that returns diminish as more (homogeneous) resources are invested. I think this is much more applicable to funding decisions than to career decisions. Dollars are more easily comparable than workers. Problems need a portfolio of skills. If I want to assess the value I could have by working on a particular problem, I’d better ask whether I can fill a gap in that area than ask what the overall tractability is of generic, homogeneous human resources.
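The curve picture above can be made concrete with a toy model (my own parameterization, purely illustrative, not from the original post): take V(x) = S·(1 − e^(−kx)), where the asymptote S is the scale, k shapes the curve (tractability), and the current x reflects how neglected the problem is. Marginal impact is then the slope of the curve at the current x:

```python
import math

def total_value(x, scale, k):
    """Diminishing-returns curve V(x) = scale * (1 - exp(-k*x)).
    `scale` is the y-asymptote (importance); `k` shapes the curve
    (tractability)."""
    return scale * (1 - math.exp(-k * x))

def marginal_value(x, scale, k, dx=1e-6):
    """Numerical slope at x: the value of one extra unit of
    resources when x units are already invested (a small x is a
    neglected problem)."""
    return (total_value(x + dx, scale, k) - total_value(x, scale, k)) / dx

# Made-up numbers: a big but crowded problem vs. a smaller but
# neglected one, with the same curve shape.
big_crowded = marginal_value(x=100, scale=1000, k=0.05)
small_neglected = marginal_value(x=5, scale=200, k=0.05)
```

Under these invented numbers the smaller, neglected problem beats the larger, crowded one at the margin, which is exactly the "one curve" reading of ITN: all three factors collapse into the slope at the point where you currently stand.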
I’d like to hear more about this if you have the time. It seems to me that it’s hard to find a non-arbitrary way of splitting players.
Say a professor and a student work together on a paper. Each of them spends 30 hours on it and the paper would counterfactually not have been written if either of them had not contributed this time. The Shapley values should not be equivalent, because the ‘relative size’ of the players’ contributions shouldn’t be measured by time input.
Similarly, in the India vaccination example, players’ contribution size is determined by their money spent. But this is sensitive to efficiency: one should not be able to get a higher Shapley value just from spending money inefficiently, right? Or should it, because this worry is addressed by Shapley cost-effectiveness?
(This issue seems structurally similar to how we should allocate credence between competing hypotheses in the absence of evidence. Just because the two logical possibilities are A and ~A does not mean a 50⁄50 credence is non-arbitrary. Cf. Principle of Indifference)
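To make the professor/student worry concrete, here is a minimal sketch of the standard Shapley computation (my own illustration, not from the original discussion). Shapley values are computed only from the characteristic function, i.e. what each coalition would produce, so if hours worked don’t appear in that function, they cannot affect the split:

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    values = {p: 0.0 for p in players}
    n_orderings = math.factorial(len(players))
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            values[p] += (v(frozenset(coalition)) - before) / n_orderings
    return values

# The paper is worth 1 and exists only if BOTH contribute;
# any sub-coalition produces nothing.
def paper(coalition):
    return 1.0 if coalition == frozenset({"professor", "student"}) else 0.0

vals = shapley_values(["professor", "student"], paper)
# vals == {'professor': 0.5, 'student': 0.5}: the two players are
# symmetric in v, so each gets half, regardless of hours worked.
```

This is the tension in the comment above: to get unequal values you would have to build the asymmetry into v itself (e.g. by modelling what each party could have produced with a replacement collaborator), and choosing that modelling is where the arbitrariness creeps back in.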
You have impressive outputs Jaime!
I would like to add that I believe Summer Research Fellowships/Internships at non-EA branded organisations may be more valuable than those at EA-branded ones. I believe there are some very high-quality programs out there, although I haven’t looked for them thoroughly. Reasons why I believe these could be better:
More dedicated training and supervision. EA-branded organizations are young and often run these programs without much prior experience.
Unique network. There are benefits to you personally and to the EA community (!) of building good professional networks outside of EA. These are especially valuable if you have academic ambitions, because EA research institutes cannot currently support PhDs, nor would they be as well regarded as the top institutes in a field.
These would be especially beneficial to people who have academic ambitions, people who are not in the top-20% of ‘self-directedness’, and to people who are relatively secure in their EA motivation (this limits the risk of value drift).
Drawbacks of researching at these non-EA institutes for a summer would be limited freedom and fewer EA-minded people around. (Although it’s probably a good opportunity to learn to work with non-EA’s while ‘remaining EA’ - a valuable and possibly rare skill!)
Must is a strong word, so that’s one reason I don’t think it’s true. What do you mean by “civilization goes extinct”? Because:
1) There might be complex societies beyond Earth
2) New complex societies made up of intelligent beings can arise even after Homo sapiens goes extinct
Upvote for using graphics to elucidate discussion on the Forum. Haven’t seen it often and it’s very helpful!
I’d like to flag that I would really like to see a more elegant term than ‘hingeyness’ become standard for referring to the ease of influence in different periods.
Some ideas: “Leverage”, “temporal leverage”, “path-dependence”, “moment” (in relation to the concept from physics), “path-criticality” (meaning how many paths are closed off by decisions in the current time). Anyone else with ideas?
Exactly! It reminds me a lot of the Polymath Project in which maths problems were solved collaboratively. I really wish EA made more use of this—I think Will’s recent choice to post his ideas to the Forum is turning out to be an excellent choice.
If anyone decides to work on this, please feel free to contact me! There is a small but non-negligible probability I’ll work on this question, and if I don’t, I’d be happy to help out with some contacts I made.
This is a very cool question I hoped to think about more. Here are the scenarios I came up with (in a draft that I’m unlikely to finish for various reasons), though without further exploration of what they would look like:
1. Collapse. The size and quality of the group of people that identify as community members reduces by more than 50%
2. Splintering. Most people identify themselves as ‘[cause area/faction] first, EA second or not at all’.
3. Plateau/stunted growth. Influence and quality stagnate (i.e. size and quality change by −50% to +100%)
4. Harmless flawed realization. EA becomes influential without really making a decidedly positive impact
5. Harmful flawed realization. EA becomes influential and has a significantly negative impact.
6. ‘Extinction’. No one identifies as part of the EA community anymore
I also asked Will MacAskill about “x-risks to EA”; he said:
The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism.)
A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate).
Fizzle—it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion.
Anyway, if you want to continue with this, you could pick yours (or a combination of risks with input from the community) and run a poll asking people’s probability estimates for each risk.
Hmm, I find this a surprising result, even though it seems roughly in line with the outcomes of EAGxNetherlands 2018.
I really hope EAGx conferences will continue to be organized (in Europe and elsewhere), perhaps in an improved form. (Fewer talks, more workshops maybe? More coaching?) I am afraid these events will be cancelled when impact is i) hard to see directly, and ii) heavily skewed. For example, few people made big changes after EAGxNetherlands, but the seed was planted for the Happier Lives Institute, which might not have formed otherwise.
Hi Carl, is there any progress on this end in the past year? I’d be very interested to see x-risk relevant forecasts (currently working on a related project).
Yes, s-risks are definitely an important concept there! I mention them only at point 7, but not because I thought they weren’t important :)