Together with Ashley and Sydney, I’m co-founding the Atlas Fellowship, a program that experiments with scholarships, camps, and online content for high schoolers in the US, India, and elsewhere.
Previously, I ran EA Funds and the Center on Long-Term Risk. My background is in medicine (BMed) and economics (MSc). See my LinkedIn.
You can best reach me at email@example.com.
I appreciate honest and direct feedback: https://admonymous.co/vollmer
Unless explicitly stated otherwise, opinions are my own, not my employer’s. (I think this is generally how everyone uses the EA Forum; others who don’t have such a disclaimer likely think about it similarly.)
The EAIF funds many of the things you listed, and Peter Wildeford has been especially interested in making them happen! Also, the Open Phil GHW team is expanding a lot and has been making several excellent grants in these areas.
That said, I agree with the overall sentiment you expressed and definitely think there’s something there.
One effect is also: there’s not much proactive encouragement to apply for funding with neartermist projects, which results in fewer things getting funded, which in turn results in people assuming that there’s no funding, even though funders are sometimes quite open to funding the kinds of things you mention.
I do think there are opportunities that GiveWell is missing, but then again I’ve found it hard to find grantmakers who would actually do better than them.
I added a response to the other post.
I think some of the worst failures are mediocre projects that go sort-of okay and therefore continue to eat up talent for a much longer time than needed; cases where ambitious projects fail to “fail fast”. It takes a lot of judgment ability and self-honesty to tell that it’s a failure relative to what one could have worked on otherwise.
One example is Raising for Effective Giving, a poker fundraising project that I helped found and run. It showed a lot of promise in terms of $ raised per $ spent over the years it was operating, and actually raised $25m for EA charities. But it looks a lot less high-impact once you draw comparisons to GWWC and Longview, or once you account for the poker industry’s small market size, the lack of scalability, the expected future funding inflows into EA, and compensation from top earning-to-give opportunities. $25 million is really not much compared to the billions others have raised through billionaire fundraising and entrepreneurship.
I personally failed to admit to myself that the project was showing mediocre rather than amazing results; it was my successor (Stefan) who eventually discontinued the project, which in hindsight seems like the correct judgment call.
Regarding Harming quality of thought, my main worry is a more subtle one:
It is not that people might end up with different priorities than they would otherwise have, but that they might end up with the same priorities but worse reasoning.
I.e. before there was a lot of funding, they thought: “Oh, I should really think about what to work on. After thinking about it really carefully, X seems most important.”
Now they think “Oh X seems important and also what I will get funded for, so I’ll look into that first. After looking into it, I agree with funders that this seems most important.”
It’s still the same X, and their conclusions are still the same. But their reasoning about X has become worse because they investigated important claims less thoroughly.
I think the “passive impact” framing encourages us too much to start lots of things and then delegate/automate them. I prefer “maximize (active or passive) impact (e.g. by building a massively scalable organization)”. This includes the strategy “build a really excellent org and obsessively keep working on it until it’s amazing”, which doesn’t pattern-match to “passive impact” and seems superior to me because a lot of the impact is often unlocked in the tail-end scenarios.
You might argue that excellent orgs often rely on a great deal of delegation and automation, and I would wholeheartedly agree with that. But I think the “passive impact” framing tends to encourage a thinking pattern that’s less like “building massively scalable systems” and more like “quickly automate something”, and I think that’s worse.
Here’s a provocative take on your experience that I don’t really endorse, but I’d be interested in hearing your reaction to:
Finding unusually cost-effective global health charities isn’t actually a wicked problem. You just look into the existing literature on global health prioritization, apply a bunch of quick heuristics to find the top interventions, find charities implementing them, and then see which ones will get more done with more funding. In fact, Giving What We Can independently started recommending the Against Malaria Foundation through a process that was much faster than the above. Peter Singer also came up with donation recommendations that seem not much worse than current GiveWell top recommendations based on fairly limited research.
In response to such a comment, I might say that GiveWell actually had much more reason than GWWC to think AMF was indeed one of the most cost-effective charities; that Peter Singer’s recommendations were good but substantially less cost-effective (and that the improvement is clearly worth it); and that the above illustration of the wicked-problem experience is useful because it applies more strongly in other areas (e.g. AI forecasting). But I’m curious about your response.
Personally I think going for something like 50k doesn’t make sense, as I expect that the 5k (or even 500) most engaged people will have a much higher impact than the others.
Also, my guess at how CEA/FTX are thinking about this is that they actually assume an even smaller number (perhaps 2k or so?), because they’re aiming for highly engaged people and don’t pay as much attention to how many less engaged people they’re creating along the way.
Yeah I fully agree with this; that’s partly why I wrote “gestures”. Probably should have flagged it more explicitly from the beginning.
Are you curating the spreadsheet in any way? In particular, do you have a mechanism for removing entries submitted by people who have in the past made unwanted sexual advances or otherwise a track record of not respecting community members’ boundaries?
I’d personally be pretty excited to see well-run analyses of this type, and would be excited for you or anyone who upvoted this to go for it. I think the reason why it hasn’t happened is simply that it’s always vastly easier to say that other people should do something than to actually do it yourself.
I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.
This still sounds like a strong understatement to me – it seems that some people will have vastly more impact. Quick example that gestures in this direction: assuming that there are 5,000 EAs, Sam Bankman-Fried is donating $20 billion, and all other 4,999 EAs have no impact whatsoever, the mean impact of EAs is $4 million, not $126k. That’s a factor of roughly 30x, so a framing like “likely vastly more valuable” would seem more appropriate to me.
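Spelling out that arithmetic (a minimal sketch; I’m taking the $126k per-person figure from the comment I’m replying to):

$$\frac{\$20\ \text{billion}}{5{,}000\ \text{EAs}} = \$4\ \text{million per EA}, \qquad \frac{\$4\ \text{million}}{\$126\text{k}} \approx 31.7 \approx 30\times$$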
Most EAs I’ve met over the years don’t seem to value their time enough, so I worry that the frugal option would often cost people more impact in terms of time spent (e.g. cooking), and it would implicitly encourage frugality norms beyond what actually maximizes altruistic impact.
That said, I like setups and norms that discourage fancy options that don’t come with clear productivity benefits. E.g. it could make sense to pay more for a fancier hotel if it has substantially better Wi-Fi and the person might do some work in the room, but it typically doesn’t make sense to pay extra for a nicer room.
FWIW I think superlinear returns are plausible even for research problems with long timelines, I’d just guess that the returns are less superlinear, and that it’s harder to increase the number of work hours for deep intellectual work. So I quite strongly agree with your original point.
It would be helpful to have the specific questions that people give probabilities for (e.g. I think “timelines” refers to the year 2070?).
Similarly, speed matters in quant trading not primarily because of real-world influence on the markets, but because you’re competing for speed with other traders.
The examples you give fit my notion of speed—you’re trying to make things happen faster than the people with whom you’re competing for seniority/reputation.
A key question for whether there are strongly superlinear returns seems to be the speed at which reality moves. For quant trading and crypto exchanges in particular, this effect seems really strong, and FTX’s speed is arguably part of why it was so successful. This likely also applies to the early stages of a novel pandemic, or AI crunch time. In other areas (perhaps, research that’s mainly useful for long AI timelines), it may apply less strongly.
I sent a DM to the author asking if they could share examples. If you know of any, please DM me!
Atlas Fellowship co-founder here. Just saw this article. I’m currently running a workshop, so I may not be able to respond properly for a few days.
For now, I wanted to point out that the $50,000 scholarship is for educational purposes only. (If it says otherwise anywhere, let me know.)