I lead the DeepMind mechanistic interpretability team
Neel Nanda
I disagree. I think it’s an important principle of EA that it’s socially acceptable to explore the implications of weird ideas, even if they feel uncomfortable, and to try to understand the perspective of those you disagree with. I want this forum to be a place where posts like this can exist.
The EA community still donates far more to global health causes than to animal welfare. I think the discourse makes the meat eater problem seem like a much bigger deal than it actually is in the community. I personally think it's all kinda silly and significantly prioritise saving human lives.
I strong downvoted because the title is unnecessarily provocative and in my opinion gives a misleading impression. I would rather not have this kind of thing on my forum feed
Interesting idea!
-
I recommend a different name; when I saw this, I assumed it was about pledging around left-wing causes.
-
I feel like the spirit of the pledge would be to increase the 10% baseline with inflation? If you get a pay raise in line with inflation, it seems silly to have to give half of it, since your real take-home pay is unchanged. Even the Further Pledge is inflation-linked.
-
Would value drift be mitigated by donating to a DAF and investing there? Or are you afraid your views on where to donate might also shift?
I feel pretty OK with a very mild and bounded commitment? Especially with an awareness that forcing yourself to be miserable is rarely the way to actually be effective. I think it's pretty valid for someone's college-age self to say that impact does matter to them, that they do care about this, and that they don't want to totally forget about it even if it becomes inconvenient, so long as they avoid interpretations that are psychologically damaging even by the lights of those values.
I've only upvoted Habryka, to reward good formatting
It seems that we’re even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...
Fwiw, I think being afraid of journalists is extremely healthy and correct, unless you really know what you’re doing or have very good reason to believe they’re friendly. The Economist is probably better than most, but I think being wary is still very reasonable.
Glad to hear it!
I commit to using my skills, time, and opportunities to maximize my ability to make a meaningful difference
I find the word maximise pretty scary here, for similar reasons to here. It's analogous to how GWWC is about giving 10%, a bounded amount, not "as much as you can possibly spare while surviving and earning money".
To me, taking a pledge to maximise seriously (especially in a naive conception where "I will get sick of this and break the pledge" or "I will burn out" aren't considerations) is a terrible idea, and I recommend that people take pledges with something more like "heavily prioritise", "keep as one of my top priorities", or "actually put a sincere, consistent effort into, eg by spending at least an hour per month reflecting on whether I'm having the impact I want". Of course, in practice, a pledge to maximise generally means one of those things, since people always have multiple priorities, but I like pledges to be something that could realistically be kept.
Thanks for sharing the list!
I notice most of these don't have arguments for why individual donations are better than OpenPhil just funding the org for now (beyond, maybe, the implicit argument that a diverse donor base is good). I'm curious if any of them have good arguments there? Without one, it feels like a donor's money is just funging with OpenPhil's last dollar; this is still great, but I strive to do better.
I appreciated the clear discussion of this in the AI governance section and find opportunities there particularly compelling
Thanks for clarifying! I somewhat disagree with your premises, but agree this is a reasonable position given your premises
Thanks for the post, it seems like you’re doing valuable work!
I'm curious how you'd compare One Acre Fund's work to the baseline of just directly giving the farmers cash to spend as they see fit? And if you did this, do you expect they would spend it on the kinds of things One Acre Fund is providing?
Based on this post, possible arguments I see:
- You believe that loans make the approach more efficient, as money is often paid back
- You can provide expertise and teaching, which is hard to purchase or which people may not value correctly
Thanks for the post! This seems broadly reasonable to me and I’m glad for the role LTFF plays in the ecosystem, you’re my default place to donate to if I don’t find a great specific opportunity.
I'm curious how you see your early career/transition stuff (including MATS) compared to OpenPhil's early career/transition grantmaking? In theory, it seems to me like that should ideally be mostly left to OpenPhil, with LTFF left to explore stuff OpenPhil is unwilling to fund, or otherwise playing to LTFF's comparative advantage (eg speed, maybe?)
Is there a difference in philosophy, setup, approach etc between the two funds?
I do have a lot of respect for the Open Phil team; I just think they are making some critical mistakes, which is fully compatible with respect
Sorry, my intention wasn’t to imply that you didn’t respect them, I agree that it is consistent to both respect and disagree.
Re the rest of your comment, my understanding of what you meant is as follows:
You think the most effective strategies for reducing AI x-risk are explicitly blacklisted by OpenPhil. Therefore, OpenPhil funding an org is strong evidence that it doesn't follow those strategies. This doesn't necessarily mean that the org's work is neutral or negative impact, but it is evidence against it being one of your top picks. Further, this is a heuristic rather than a confident rule, and you made the time for a shallow investigation into some orgs funded by OpenPhil anyway, at which point the heuristic is screened off and can be ignored.
Is this a correct summary?
As a rule of thumb, I don’t want to fund anything Open Philanthropy has funded. Not because it means they don’t have room for more funding, but because I believe (credence: 80%) that Open Philanthropy has bad judgment on AI policy (as explained in this comment by Oliver Habryka and reply by Akash—I have similar beliefs, but they explain it better than I do).
This seems like a bizarre position to me. Sure, maybe you disagree with them (I personally have a fair amount of respect for the OpenPhil team and their judgement, but whatever, I can see valid reasons to criticise), but to consider their judgement not just irrelevant, but actively such strong negative evidence as to make an org not worth donating to, seems kinda wild. Why do you believe this? Reversed stupidity is not intelligence. Is the implicit model that all of x-risk-focused AI policy is pushing on some 1D spectrum, such that EVERY org in the two camps is actively working against the other camp? That doesn't seem true to me.
I would have a lot more sympathy with an argument that, eg, other kinds of policy work are comparatively neglected, so OpenPhil funding something is a sign that it's less neglected.
In fairness, I wrote my post because I saw lots of people making arguments for a far stronger claim than necessary, and was annoyed by this
Community seems the right categorisation to me—the main reason to care about this is understanding the existing funding landscape in AI safety, and how much to defer to them/trust their decisions. And I would consider basically all the large funders in AI Safety to also be in the EA space, even if they wouldn’t technically identify as EA.
More abstractly, a post about conflicts of interest and other personal factors, in a specific community of interest, seems to fit this category
Being categorised as community doesn’t mean the post is bad, of course!
Speaking as an IMO medalist who partially got into AI safety because of reading HPMOR 10 years ago, I think this plan is extremely reasonable