It seems to me there is some inherent tension between the goals “helping the camp participants” and “helping the organizations”. The latter puts more pressure on selection, which may not be that good for learning.
In the case of group no. 1, willingness to spend 4-6 weeks on such an event is likely a credible signal of dedication to EA, but it may plausibly even be slightly anti-correlated with ops talent (I would expect some of the best ops people to already be organising something, and to have higher opportunity costs).
In the case of group no. 2 and the 4-day event, the whole setup seems more balanced and fair to the interests of the participants. Many EA orgs are somewhat special, and the job market is less one-sided for senior roles.
A general comment regarding all efforts related to talent: across the whole EA talent ecosystem, people should think carefully about creating strange incentive landscapes and possible moral hazards. I recommend spending a lot of thought on that, not necessarily in a public way.
It seems to me the plan is based on several assumptions which actually do not hold:
Effective altruism wants to grow the number of its members faster. It seems there was something like a deliberate attempt to slow down, limiting outreach, international expansion, etc., because of problems caused by too-fast expansion, such as coordination difficulties or diluted knowledge. The academy would likely help with such problems, but overall there is likely not as strong an urge to create new EAs now as you assume.
If EA wants to grow faster, there are cheaper ways.
There seems to be a persistent misconception in the community about how likely it is to get “initial grants” from OPP.
That said, something like an EA Academy is a format which may be worth exploring some time in the future. (Other people have thought about the idea before.)
It’s good to see some intelligent criticisms of the argument for doing AI safety research!
Just two short remarks on this post: I generally tend to think about probabilities on a log scale, and it’s possible I haven’t used the a.bcd notation for any probability smaller than 10^-3 (0.1%) for years. So it is hard to see why I should be influenced by the described effect.
Regarding language: your distinction correlates with the distinction between two of Dennett’s levels of abstraction (the design stance and the intentional stance). Claiming that the design stance is more accurate or better than the intentional stance for analyzing present-day systems seems too bold: it is really a different level of description. Would you say the design stance is also more accurate when thinking, e.g., about animals?
Obviously, looking at any system with the intentional stance comes with mentalizing, i.e. assuming agency. People likely do not use the different levels of abstraction very well, but I’m not convinced they systematically over-use one of them. It seems arguable they under-use the intentional stance when looking at “emergent agency” in systems like the stock market.
I think the question of how much to give over time in such a situation is a good one, and I hope someone will write a carefully considered answer.
I’d like to push back a bit on this part of the reasoning:
I am persuaded that there is a case for local charities or those with which I have a personal connection because these cannot be on the radar of the big charity evaluators. And if everyone did this, and all surplus money were channelled to the “best” causes as assessed by self-appointed experts, I’m not convinced the world would be a better place. A bit, or a lot, of anarchy is needed, I think. Especially if the internet tendency to encourage information monopolies kicks in and everyone consults the same oracle.
We are certainly not in a world where everyone consults effective altruist sources. On the contrary, I think the correct view is that basically everybody gives to local/familiar charities, randomly and based on emotional appeal, and only a very tiny fraction of people is influenced by any rational advice at all. If you are considering the EA viewpoint, you are an exception.
To put things in scale, the UK-based “Dogs Trust”, just one of many charities in the UK supporting pet welfare, had an income of £106.4m in 2017. In comparison, the Against Malaria Foundation, for many years a top charity on GiveWell’s lists, had an annual income of just $46.8m. Obviously the dogs in the UK are closer to people there than the people AMF is helping.
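To make the scale comparison explicit, here is a rough currency-adjusted calculation. The GBP-to-USD rate of ~1.29 (roughly the 2017 average) is my assumption, not a figure from the comment:

```python
# Rough scale comparison of the two 2017 incomes cited above.
# Assumption: GBP/USD exchange rate of ~1.29 (approximate 2017 average).
GBP_TO_USD = 1.29
dog_charity_gbp = 106.4e6  # Dogs Trust income, 2017 (GBP)
amf_usd = 46.8e6           # Against Malaria Foundation income (USD)

dog_charity_usd = dog_charity_gbp * GBP_TO_USD
print(f"Dog charity income: ${dog_charity_usd / 1e6:.0f}m")  # ~ $137m
print(f"Ratio to AMF: {dog_charity_usd / amf_usd:.1f}x")     # ~ 2.9x
```

So a single UK pet charity raises roughly three times as much as AMF did, which is the point about emotional proximity dominating effectiveness.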
So to a first approximation, I think you can say that almost nobody gives effectively, and everybody gives to charities randomly, based on availability heuristics and the like. Based on this reasoning, I would expect that if you support more anarchy, you will basically do what everybody is doing anyway, and the impact of that will be small to negligible. I would expect giving to effective charities, EA funds, and EA meta-charities to be better for almost any goal you may have.
Just wanted to add that while many of the listed ideas are in my opinion useful and should eventually turn into projects, some of them are quite sensitive to how they are executed, or by whom, to the extent that creating negative impact is easy.
Also, there is often some prior work, existing knowledge, etc. in many of the listed directions. The absence of a visible project in some direction may also mean that someone carefully considered it and decided it is not something which should be started now.
(For example: it’s not like nobody has thought about EA outreach to different demographics: Muslims, seniors/retirees, other cultures/countries. There is an in part public, in part “internal” discussion about this, and the consensus seems to be that this is in many cases delicate and should not be rushed.
Or: it’s not like EffectiveThesis has not considered or experimented with different intervention points in the academic chain.)
I disagree. The central example should not be some sort of secret, but trust. Transitivity of trust is limited, and everybody has a unique position in the trust network. Many will have interesting opportunities in their network neighborhoods. (I don’t claim to be typical, but still: I can easily list maybe a dozen such not-easily-justifiable opportunities where I could send money; even if I’m somewhere in the tail of the distribution, I’d guess a typical lottery winner has at least 1 or 2 such opportunities.)
I guess the Wikipedia articles on Bjørn Lomborg and the Copenhagen Consensus (CC) provide some context. With a lot of simplification: public advocacy for quite reasonable policy positions on climate change is likely to land you in the middle of a heated political controversy.
Sure. Czech EAs were in contact with Bjørn, and the former chairman of the Czech EA Association is now managing a CC project to try prioritization in a developed country with a ~$2M budget.
That said, I don’t think it is actually useful to include CC directly under the EA umbrella/brand. There are important disagreements: CC discounts the future heavily and will typically not include interventions with small probabilities and high payoffs, hence its prioritization is much more short-term. Also, due to the nature of what it is trying to do, CC is much more political, and highly controversial in some circles. It does not seem good for the EA movement to become political in a similar way now, and we probably also do not want to become highly controversial.
Is it possible to somehow disentangle funding for continuing the current project from funding for scaling it up? It seems to me there is large exploratory value in the EA Hotel, and the project should totally get funding to continue. But before significant up-scaling, it may make sense to do some careful cost-benefit analysis of whether it is better to grow the existing hotel in Blackpool, or to copy the model and create similar places, e.g. one somewhere in the US and one somewhere within the EU. (Similar costs are achievable in other places.)
I was not careful enough in articulating my worries, sorry—the comment was more general.
I agree that what you wrote is often not literally affected by my concerns. What I’m worried about is this: given the nebulousness of the domain and, often, the lack of hard evidence, it seems inevitable that people are affected by intuitions, informal models, etc., at least in the “hypothesis generation” phase and in their “taste” for what to research. I would expect these to be significantly guided by illustrative examples, seemingly irrelevant word choices, explicitly “toy” models, or implicit clues about what is worth emphasizing.
So, for example, while I agree you did not commit to some suffering-focused theory, and neither do many others writing about the topic, I would be really surprised if the “gut feeling” prior of anyone actually working on the wild animal suffering agenda was that animals on average experience more pleasure than pain, or that wild nature is overall positive.
Similarly, while I’d assume nobody commits explicitly to the fallacy “if insects have moral weight, their overall weight will be huge”, I’m afraid something vaguely like that actually is guiding people’s intuitions.
Two short comments/worries:
1. In this direction of thinking, my generic worry is that people often don’t consider the possibility that the “morally significant” quantity is some non-linear transformation of some obvious quantity. For example: if moral weight is roughly exp() of the number of neurons, the moral weight of all the ants in the world is still negligible compared to one human. If moral weight scales with the square of the number of neurons, one human can still have more weight than a trillion ants. Etc.
(It seems some informal heuristic is “because these entities are so numerous, if they have any moral weight, their total moral weight will be huge”. This intuition is, in my opinion, rooted in a lack of intuitive understanding of exponentials/logarithms.)
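To make the sensitivity to the assumed scaling concrete, here is a minimal log-space sketch. The neuron counts (~250k per ant, ~86 billion per human) and the ant count are rough outside assumptions, not figures from the comment:

```python
import math

# Assumed rough figures (outside assumptions, not from the comment above):
ANT_NEURONS = 2.5e5      # ~250k neurons per ant
HUMAN_NEURONS = 8.6e10   # ~86 billion neurons per human
N_ANTS = 1e12            # a trillion ants

def log10_total_power(neurons, count, p):
    """log10 of total weight if individual weight ~ neurons**p."""
    return math.log10(count) + p * math.log10(neurons)

def log10_total_exp(neurons, count):
    """log10 of total weight if individual weight ~ e**neurons.
    Working in log space avoids overflow: log10(count * e**n) = log10(count) + n/ln(10)."""
    return math.log10(count) + neurons / math.log(10)

for p in (1, 2):
    ants = log10_total_power(ANT_NEURONS, N_ANTS, p)
    human = log10_total_power(HUMAN_NEURONS, 1, p)
    print(f"neurons^{p}: log10(all ants) = {ants:.1f}, log10(one human) = {human:.1f}")

# Under exp() scaling, one human's log10-weight exceeds the ants' total by
# an astronomical margin, so any realistic number of ants stays negligible.
gap = log10_total_exp(HUMAN_NEURONS, 1) - log10_total_exp(ANT_NEURONS, N_ANTS)
print(f"exp scaling: human exceeds all ants by {gap:.3g} orders of magnitude")
```

With these particular numbers the quadratic crossover sits near 10^11 ants, so whether a trillion ants outweigh one human under square scaling is sensitive to the assumed neuron counts; the qualitative point, that exponential scaling makes any realistic number of ants negligible, is robust.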
2. It seems unfortunate to me that this whole discussion is suffering-focused in its framing. While I agree the problem of the moral value of animals is important, I’m really concerned that a whole complex of memes is being spread along with it, also containing negative utilitarianism, moral anti-realist views, certain intuitions about wild animal suffering, etc.
That would be fine—my worry was about the 2017 report pushing standards / expectations in a direction which I think will lead to less impact long-term.
This worry is not entirely hypothetical: note, for example, this comment by aarongertler, an EA Forum moderator:
This is fantastic! I hope we’ll see reports like this from the winners of any future donor lotteries run within the community.
(To be clear, I also want to add that Adam did a great job carefully reviewing the organizations, approaching the problem with, IMO, a level of rigor similar to that of an established funding organization, and I admire the work. I just want to avoid this approach becoming something which is expected.)
It’s hard to estimate.
Winning the lottery likely amplifies the voice of the winner, but the effect may be conditional on how much credibility the winner had beforehand. So far, the lottery winners were highly trusted people working in central organizations.
Overall, I would estimate with 90% confidence that the indirect effect on giving by other individual donors is within 3x the size of the direct effect, with an unclear sign. There is significant competition for the attention (and money) of individual donors.
My guess is that reports in Adam’s style are likely net negative, because they will nudge lottery winners toward donations which are publicly justifiable, and away from supporting things which are really new, small, or high-risk, or whose funding depends on knowledge which isn’t public or easily shareable.
Institutional funding sources are biased toward conservatism, big grants, and projects which are either established or at least started by established individuals.
Lottery winners are in a good position to counter that bias (roughly in the style of the “pop-up foundation” advocated by Tyler Cowen). My guess is that donor lottery winners can have ~2-10x more impact that way compared to funding projects which more traditional funding sources would also fund. Hence, any pressure toward “be more like institutional funders” and away from “pop-up” is likely net negative.
Are you considering European structures as well, or is it limited to the US?
I have a draft describing something very similar to what you propose (in a non-public Google form version, gathering comments), and would like to talk about it. Probably the main points of disagreement are in how you propose to do evaluations and tie them to funding; it may also be that the “EA Angel Group” is not the best place in the organizational landscape to run such a project.
(In summary: apart from its potential to create large positive impact, your project as proposed also risks creating large negative impact by taking up the space and making it difficult to create possibly better versions of the idea. So I would recommend not launching any MVPs without consulting widely.)
I’m curious how you think this is evaluated in practice. I’d expect it to map mostly to hiring based on homophily and trust networks, and to risk-aversion on the org side. So my hypothesis is that the pool is not narrowed down by value-alignment and good judgment per se, but by the difficulty of signalling these qualities.
For what it’s worth, my estimate of the total current funding gap in the sector of “small and new projects motivated by the long term”, counting only what has robustly positive EV by my estimate, is >$1M.
In general, I think the ecosystem has suffered from the spread of several over-simplified memes. One of them is “the field is talent-constrained”; another is “now, with the OpenPhil money...”.
One way to think about it* is to project the space along two axes: “project size” and “risk/establishedness”. A relative abundance of funding in the [“medium to large”, “low risk / established”] sector does not imply much about the funding situation in the [“small”, “not established”] sector, or the [“medium size”, “unproven/risky”] sector.
The [“medium to large”, “low risk / established”] sector is often constrained by a mix of structural limits on how fast organizations can grow without large negative side-effects, bottlenecks in hiring, and yes, sometimes, very specific talent needs. Much less by funding.
On the opposite side, the [“small”,“not established”] sector is probably funding constrained, plus constrained by a lack of advisors and similar support, and inadequacies in the trust network structure.
The Long-Term Future Fund moving to fill part of the funding gap seems like great news.
(*This comes from a non-public analysis of how an x-risk funding organization could work, by Karl Koch & the strategy team at AI Safety Camp 1.)
Just wanted to note that the use of “worst case” in the mission statement
The fund’s mission is to address worst-case risks (s-risks) from artificial intelligence.
is highly non-intuitive for people with a different axiology. Quoting from the s-risk explanation:
For instance, an event leading to a future containing 10^35 happy individuals and 10^25 unhappy ones, would constitute an s-risk
At least for me, this would be a pretty amazing outcome, and not something which should be prevented.
In this context
We aim to differentially support alignment approaches where the risks are lowest. Work that ensures comparatively benign outcomes in the case of failure is particularly valuable from our perspective
sounds worrisome: do I interpret it correctly that in the ethical system held by the fund, human extinction is a comparatively benign outcome compared with risks like the creation of 10^25 unhappy minds, even if they are offset by a much larger number of happy minds?
Somewhat controversial personal opinion: when thinking about this space, my null hypothesis is that talented people are actually abundant, and that the bottleneck is on the side of EA organizations, and possibly in the culture.
Reasons could be various: for example, I can imagine
organizations are ops-bottlenecked, and hiring is itself another ops task
historically, founder effects & the difficulty of finding/identifying ops people if you have a very different mindset
hiring based on homophily and trust networks
misaligned filtering (potentially great ops people being filtered out early on criteria like not having CVs that are impressive in the right way; at a later stage, the people with the most impressive CVs not actually being a good fit for ops, or not that interested in it)
risk-aversion on the org side