Working on AI governance and policy at Open Philanthropy.
Hater of factory farms, enjoyer of effective charities.
Seems right on priors
I’m sorry to hear that you’re stressed and anxious about AI. You’re certainly not alone here, and what you’re feeling is absolutely valid.
More generally, I’d suggest checking out resources from the Mental Health Navigator service. Some of them might be helpful for coping with these feelings.
More specifically, maybe I can offer a take on these events that’s potentially worth considering. One off-the-cuff reaction I’ve had to Bing’s weird, aggressive replies is that they might be good for raising awareness and making concerns about AI risk much more salient. I’m far more scared about worlds where systems’ bad behaviour stays hidden until things get really bad, such that the world is lulled into a false sense of complacency up until that point. Having a very prominent system exhibit odd behaviour could be helpful for galvanising action.
I’m appreciative of Shakeel Hashim. Comms roles seem hard in general. Comms roles for EA seem even harder than that. Comms roles for EA during the last 3 months sound unbelievably hard and stressful.
(Note: Shakeel is a personal friend of mine, but I don’t think that has much influence on how appreciative I am of the work he’s doing, and of everyone else managing these crises).
Yeah, fair point. When I wrote this, I roughly followed this process:
Write article
Summarize overall takes in bullet points
Add some probabilities to show roughly how certain I am of those bullet points, where this process was something like “okay I’ll re-read this and see how confident I am that each bullet is true”
I think it would’ve been more informative if I wrote the bullet points with an explicit aim to add probabilities to them, rather than writing them and thinking after “ah yeah, I should more clearly express my certainty with these”.
I think I was just reading all of those claims together and trying to subjectively guess how likely I find them all to be. So to split them up, in order of each claim: 90%, 90%, 80%.
That said, if Open Philanthropy is pursuing this grant under a hits-based approach, it might be less controversial if they were to acknowledge this.
In this case — and many, actually — I think it’s fair to assume they are. OP is pretty explicit about taking a hits-based giving approach.
I would like to know why. I found the post insightful.
Yeah, I’m also similarly sceptical that a highly publicised/discussed portion of one of the most hyped industries — one that borders on a buzzword at times — has not captured the attention or consideration of the market. Seems hard to imagine given the remarkably salient progress we’ve seen in 2022.
That phrasing is better, IMO. Thanks Michael.
I think the debate between HLI and GW is great. I’ve certainly learned a lot, and have slightly updated my views about where I should give. I agree that competition between charities (and charity evaluators) is something to strive for, and I hope HLI keeps challenging GiveWell in this regard.
Thanks for the post Michael — these sorts of posts have been very helpful for making me a more informed donor. I just want to point out one minor thing though.
I appreciate you and your team’s work and plan on donating part of my giving season donations to either your organisation, StrongMinds, or a combination of both. But I did find the title of this post a bit unnecessarily adversarial towards GiveWell (although it’s clever, I must admit).
I’ve admired the fruitful, polite, and productive interactions between GW and HLI in the past and therefore I somewhat dislike the tone struck here.
+1
Another similar bonus is that prizes can sometimes get people to do EA-aligned work who wouldn’t have done so counterfactually.
E.g., work for an alignment prize could plausibly be done by computer scientists who would otherwise be working on other things.
This could signal boost EA/the cause area more generally, which is good.
Great meme
I feel like this question is so much more fun if we can include dead people, so I’m gonna do just that.
Off the top of my head:
Isaac Newton
John Forbes Nash
John von Neumann
Alan Turing
Amos Tversky
Ada Lovelace
Leonhard Euler
Terence Tao
John Stuart Mill
Eliezer Yudkowsky
Herbert Simon
This is a very cool model and I would absolutely be thrilled to see someone write up a post about it!
It seems like there is a quality and quantity trade-off where you could grow EA faster by expecting less engagement or commitment. I think there’s a lot of value in thinking about how to make EA massively scale. For example, if we wanted to grow EA to millions of people maybe we could lower the barrier to entry somehow by having a small number of core ideas or advertising low-commitment actions such as earning to give. I think scaling up the number of people massively would benefit the most scalable charities such as GiveDirectly.
I suppose this mostly has to do with growing the size of the “EA community”, whereas I’m mostly thinking about growing the size of “people doing effectively altruistic things”. There’s a big difference in the composition of those groups. I also think there is a trade-off in terms of how community-building resources are spent, but the thing about trying to encourage influence is that it doesn’t need to trade off against highly engaged EAs. One analogy is that encouraging people to donate 10% doesn’t mean that someone like SBF can’t pledge 99%.
The counterargument is that impact per person tends to be long-tailed. For example, the net worth of Sam Bankman-Fried is ~100,000x that of a typical person. Therefore, who is in EA might matter as much as, or more than, how many EAs there are.
Yup, agreed. This is my model as well. That being said, I wouldn’t be surprised if the impact of influence also follows a long-tailed distribution: imagine if we manage to influence 1,000 people about the importance of AI-related x-risk, and one of them actually ends up being the one to push for some highly impactful policy change.
It’s not clear to me whether quality or quantity is more important because some of the benefits are hard to quantify. One easily measurable metric is donations: adding a sufficiently large number of average donors should have the same financial value as adding a single billionaire.
Agreed. I’m similarly fuzzy on this and would really appreciate it if someone did more analysis here rather than deferring to the meme that EA is growing too fast/slow.
Thanks for taking the time to write up your views on this. I’d be keen on reading more posts like this from other folks with backgrounds in ML — particularly those who aren’t already in the EA/LessWrong/AIS sphere.