Working on AI governance and policy at Open Philanthropy.
Hater of factory farms, enjoyer of effective charities.
That said, if Open Philanthropy is pursuing this grant under a hits-based approach, it might be less controversial if they were to acknowledge this.
In this case — and many, actually — I think it’s fair to assume they are. OP is pretty explicit about taking a hits-based giving approach.
I would like to know why. I found the post insightful.
Yeah, I’m similarly sceptical that a highly publicised and much-discussed corner of one of the most hyped industries (one that borders on a buzzword at times) has escaped the attention of the market. That seems hard to imagine given the remarkably salient progress we’ve seen in 2022.
That phrasing is better, IMO. Thanks Michael.
I think the debate between HLI and GW is great. I’ve certainly learned a lot, and have slightly updated my views about where I should give. I agree that competition between charities (and charity evaluators) is something to strive for, and I hope HLI keeps challenging GiveWell in this regard.
Thanks for the post Michael — these sorts of posts have been very helpful for making me a more informed donor. I just want to point out one minor thing though.
I appreciate you and your team’s work, and I plan on donating part of my giving season donations to either your organisation, StrongMinds, or a combination of both. But I did find the title of this post a bit unnecessarily adversarial towards GiveWell (although it’s clever, I must admit).
I’ve admired the fruitful, polite, and productive interactions between GW and HLI in the past and therefore I somewhat dislike the tone struck here.
+1
I also think another related bonus is that prizes can sometimes get people to do EA things who otherwise wouldn’t have.
E.g., the work incentivised by an alignment prize could plausibly be done by computer scientists who would otherwise be doing other things.
This could signal boost EA/the cause area more generally, which is good.
Great meme
I feel like this question is so much more fun if we can include dead people, so I’m gonna do just that.
Off the top of my head:
Isaac Newton
John Forbes Nash
John von Neumann
Alan Turing
Amos Tversky
Ada Lovelace
Leonhard Euler
Terence Tao
John Stuart Mill
Eliezer Yudkowsky
Herbert Simon
This is a very cool model and I would absolutely be thrilled to see someone write up a post about it!
It seems like there is a quality-versus-quantity trade-off where you could grow EA faster by expecting less engagement or commitment. I think there’s a lot of value in thinking about how to make EA massively scale. For example, if we wanted to grow EA to millions of people, maybe we could lower the barrier to entry somehow by paring things down to a small number of core ideas or advertising low-commitment actions such as earning to give. I think scaling up the number of people massively would benefit the most scalable charities, such as GiveDirectly.
I suppose this mostly has to do with growing the size of the “EA community”, whereas I’m mostly thinking about growing the size of “people doing effectively altruistic things”. There’s a big difference in the composition of those groups. I also think there is a trade-off in terms of how community-building resources are spent, but the thing about trying to encourage influence is that it doesn’t need to trade off with highly engaged EAs. One analogy is that encouraging people to donate 10% doesn’t mean that someone like SBF can’t pledge 99%.
The counterargument is that impact per person tends to be long-tailed. For example, the net worth of Sam Bankman-Fried is ~100,000x that of a typical person. Therefore, who is in EA might matter as much as, or more than, how many EAs there are.
Yup, agreed. This is my model as well. That being said, I wouldn’t be surprised if the impact of influence also follows a long-tailed distribution: imagine we manage to convince 1,000 people of the importance of AI-related x-risk, and one of them ends up being the one who pushes for some highly impactful policy change.
It’s not clear to me whether quality or quantity is more important because some of the benefits are hard to quantify. One easily measurable metric is donations: adding a sufficiently large number of average donors should have the same financial value as adding a single billionaire.
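To make that equivalence concrete, here is a minimal back-of-the-envelope sketch. The figures are made-up placeholders of my own, not numbers from this thread:

```python
# Rough illustration only: both figures below are hypothetical placeholders,
# not estimates taken from the thread above.
billionaire_annual_giving = 100_000_000   # e.g. one mega-donor giving $100M/year
average_donor_annual_giving = 2_000       # e.g. a typical donor giving $2k/year

equivalent_donors = billionaire_annual_giving / average_donor_annual_giving
print(f"~{equivalent_donors:,.0f} average donors match one such billionaire")
# prints: ~50,000 average donors match one such billionaire
```

Under those (arbitrary) assumptions, quantity only matches quality on the donations metric once you add tens of thousands of average donors.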
Agreed. I’m similarly fuzzy on this and would really appreciate it if someone did more analysis here rather than deferring to the meme that EA is growing too fast/slow.
I think that the value is going to vary hugely by the cause area and the exact ask.
For global poverty, anyone can donate money to buy malaria nets, though it’s worth remembering that Dustin Moskovitz is worth a crazy number of low-value donors.
For AI Safety, it’s actually surprisingly tricky to find robustly net-positive actions we can pursue. Unfortunately, it would be very easy to lobby a politician to pass legislation that then makes the situation worse, or to persuade voters that this is an important issue only to have them vote for things that sound good rather than things that solve the problem.
For global health & development, I think it is still quite useful to have influence over things like research and policy prioritisation (what topics academics should research, and which policy areas think tanks should focus on), government foreign aid budgets, vaccine R&D, etc. This is tangential, but even if Dustin is worth a large number of low-value donors (he is), the marginal donation to effective global poverty charities is still very impactful.
For AI, I agree that it is tricky to find robustly net-positive actions, at least as of right now. I expect this to change over the next few years, and I hope people in relevant positions to implement these actions will be ready to do so once we have more clarity about which ones are good. Whether or not they’re highly engaged EAs doesn’t seem to matter much, as long as they actually do the things, IMO.
Thank you for the work you and your team do, Julia. Many of these situations are incredibly tricky to handle, and I’m very grateful the EA community has people working on them.
Here is a first stab I took at organising some pieces of content that would be good for testing your fit for this kind of work. I tried to balance it as much as I could with respect to length, difficulty, format, and cause area.
+1 — the wiki is awesome! Though I’d love to see specific distillations of standalone written works, in addition to topic-style distillations seen on the wiki.
I’m going to write out a list of ~10-15 pieces of content I think would be good to distill, and I’ll share it here once I’m finished.
(1) — I think there is probably a correlation between good distillers and good researchers, but it isn’t one-to-one. Distillers probably have a stronger comparative advantage in communication and simplification, whereas researchers probably would be better at creativity and diving deep into specific focus areas. It seems like a lot of great academics struggle with simplifying and broadcasting their core ideas to a level of abstraction that a general audience can understand.
(2) — completely agree, I think it would be a great skill signal.
I love the fellowship idea as well!
Ah, I totally forgot to include a footnote about the Nonlinear Library! For me, it’s helpful, but I sometimes find the text-to-speech a bit hard to focus on because it isn’t quite natural. But maybe I’m just a pedant.
Maybe, but I think it would be good if someone built a really strong comparative advantage in this. Describing, and then evaluating, the success criteria of bounties could also carry some slightly burdensome overhead.
Also +1 that having hubs in the US and UK is suboptimal.
To your knowledge, have there been any efforts to systematically compare different hub candidates? I’d be curious to see the reasoning behind why location A might be preferable to B, C, D, etc.
I think I was just reading all of those claims together and trying to subjectively guess how likely I find them all to be. So to split them up, in order of each claim: 90%, 90%, 80%.
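For what it’s worth, here is a minimal sketch of how those per-claim numbers would combine into a single overall credence, under the assumption (mine, not stated above) that the three claims are roughly independent:

```python
# Minimal sketch: treating the three claims as independent is my assumption,
# not something the original comment states.
claim_probabilities = [0.90, 0.90, 0.80]

joint = 1.0
for p in claim_probabilities:
    joint *= p  # probability that all claims hold, under independence

print(f"Implied joint probability: {joint:.0%}")  # prints: Implied joint probability: 65%
```

If the claims are positively correlated (which seems plausible), the true joint credence would sit somewhere between that ~65% and the lowest individual estimate of 80%.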