Along with my co-founder, Marcus A. Davis, I run Rethink Priorities. Previously, I was a professional data scientist.
Worth flagging that we at Rethink Priorities have had no trouble finding many well-qualified candidates when we do our operations hiring.
Strong middletermism suggests that the best actions are exclusively contained within the set of actions that aim to influence how the next 137 years go (and not a year longer!).
We know that compromising between smart people is a good decision procedure (see Aumann's agreement theorem; also note that ensemble models generally outperform any of their individual component models). Given that many smart people support near-term causes and many smart people support longtermist causes, I suggest that the highest-impact causes will be found in what I call middletermism.
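The ensemble point can be illustrated with a toy sketch. All the numbers below are made up for illustration; the point is only that a simple average of several imperfect forecasts can beat each individual forecast:

```python
# Toy illustration of the ensemble effect: averaging several imperfect
# forecasts often produces a smaller error than any single forecast.
# All numbers here are invented purely for illustration.

true_value = 100.0

# Three forecasters, each off in a different direction.
forecasts = [90.0, 104.0, 109.0]

individual_errors = [abs(f - true_value) for f in forecasts]
ensemble = sum(forecasts) / len(forecasts)  # simple unweighted average
ensemble_error = abs(ensemble - true_value)

print(individual_errors)  # [10.0, 4.0, 9.0]
print(ensemble_error)     # 1.0 -- smaller than every individual error
```

This only works when the forecasters' errors partially cancel rather than all pointing the same way, which is roughly the situation the compromise argument assumes about near-termists and longtermists.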
Another important issue is that our predictive track record gets worse as a function of time: more time means more error. Insofar as we are trying to balance expected impact against the robustness of our impact calculations, this suggests there is a time at which the error cancels out the impact. In my calculations, this occurs exactly 137 years from now. Thus middletermism focuses only on these 137 years.
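The shape of this tradeoff can be sketched as follows. Both functions below are invented for illustration and are not the calculation behind the 137-year figure; they just show how a growing-impact term multiplied by a decaying-reliability term peaks at some finite horizon:

```python
# Illustrative sketch of the impact-vs-error tradeoff described above.
# The specific growth and decay assumptions are invented for illustration.

def raw_impact(t):
    # Assume the raw impact of influencing year t grows linearly with t.
    return t

def reliability(t, decay=0.01):
    # Assume forecast reliability shrinks geometrically with the horizon.
    return (1 - decay) ** t

def robust_impact(t):
    # Impact weighted by how much we can trust our forecast of year t.
    return raw_impact(t) * reliability(t)

# The product rises, peaks, and then falls: past some horizon, each
# extra year adds more forecast error than expected impact.
horizon = max(range(1, 500), key=robust_impact)
print(horizon)
```

Different (equally arbitrary) growth and decay assumptions move the peak around, which is exactly why the precise "137 years" cutoff is doing a lot of work in the argument.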
Rethink Priorities is pretty close to this! We've now done message testing for many orgs across cause areas: the Centre for Effective Altruism, Will MacAskill, Open Phil, the Centre for the Study of Existential Risk, the Humane Society of the United States, The Humane League, Mercy for Animals, and various EA-aligned lobbyists. We have a lot of skills and resources to do this well and already have a well-built pipeline for producing this kind of work.
We’d be happy to consider doing more work for other people in EA and the EA movement as a whole!
Is the Global Health and Development Fund still going to be just Elie for the foreseeable future? (Not that there’s anything wrong with that.)
Why the secrecy around the identity of the guest managers?
I doubt it will ever be a standard procedure in every opinion piece.
Meaning you think there is a 95% chance that within five years, it won’t be the case that The New York Times, The Atlantic, and The Washington Post will include a quantitative, testable forecast in at least one fifth of their collective articles?
...Just kidding. Thanks for the well-written and illuminating answer.
Why don’t more journalists make concrete, verifiable, quantitative forecasts and then retrospectively assess their own accuracy, like you did here (also see more examples)? Is there anything that could be done to encourage you and other journalists to do more of that?
Similar to “Effective Altruism is Not a Competition”
What is wrong with recording the audio?
That’s great to hear; I did not know that.
Do you feel that the numbers I’m using are misrepresentative? I will do my best to address limitations below.
You might be able to use donation data from the EA Survey to better capture individual EA giving.
One issue is that a lot of these areas have very large individual donors that aren’t captured by these statistics or even in the EA Survey—for example, there is an individual donor who gives about the same annual amount to animal welfare as all of Open Phil. (But then of course, there is also the question of who counts as “EA”.)
Do you disagree in general with the strategy of allocating my personal donations on the basis of where I expect to differ the most from the community regarding #1?
I imagine your personal views about the difference in value between cause areas will dominate this, given that causes might differ by 10x whereas these gaps are at most 5x. Also, I think the choice of what you fund within each cause matters a lot.
I think this approach makes sense from a neglectedness standpoint, though I am worried that it wouldn’t account for neglectedness outside of EA and neglectedness within cause. I’m not sure if this makes sense from a donor collaboration/coordination/cooperation standpoint, given that it seems like you are deliberately offsetting other people’s donations.
“Cause area” is also a pretty weird/arbitrary unit of analysis if you think about it.
“Is EA Growing? EA Growth Metrics for 2018” has some data on this, and I look forward to doing it again for 2019-2020.
How do you feel about there being very few large institutional donors in effective altruism? This seems like it could be a good thing as it allows specialization and coordination, but also could be bad because it means if a particular person doesn’t like you, you may just be straight up dead for funding. It also may be bad for organizations to have >80% of their funding come from one or two sources.
Don’t forget the 2018 EA Survey analysis that suggests a ~40% EA drop out rate after 4-5 years.
I do not yet know of any research that is the result of your recent hiring that actually seems useful to me (which is not very surprising, as it’s not been very long!).
Yes, naturally that would take more than two months to produce!
I also think Rethink Priorities is tapping into a talent funnel that was built by other people, and is very much not buying talent “on the open market” so to speak.
I’d dispute that on two counts:

1.) I do think we have been able to acquire talent that would not otherwise have been counterfactually acquired by other organizations. For the clearest example, Luisa Rodriguez applied to a fair number of EA organizations and was turned down—she was then hired by us, and has now gone on to work with Will MacAskill and will soon be working for 80,000 Hours. Other examples are also available, though I’d avoid going into too much detail publicly to respect the privacy of my employees. We are also continuing to invest in further developing talent pipelines across cause areas and think our upcoming internship program will be a big push in this direction.
2.) Even if we concede that we are using a talent funnel created by other people, I don’t think it is a bad thing. There still is a massive oversupply of junior researchers who could potentially do good work, and a massive undersupply of open roles with available mentorship and management. I think anything Rethink Priorities could be doing to open more slots for researchers is a huge benefit to the talent pipeline even if we aren’t developing the earlier part of the recruitment funnel from scratch (though I do think we are working on that to some extent).
I think Rethink Priorities is a very clear counterexample.
We were able to spend money to “buy” many longtermist researchers, some of whom would not have counterfactually worked in the area. Plus our hiring round data indicates that there are many more such people out there that we could hire, if only we weren’t funding-constrained.
Yes, I think all the things you mentioned are projects that are “within the scope” of RP (not that we would necessarily do them). We see our scope as being very broad so that we can always do the highest impact projects.
Yeah, our broader theory of change is mostly (but not entirely) based on improving the output of the EA movement, and having the EA movement push out from there.