What definition of AGI are you using?
What’s the gap you’re referring to? Philosophy undergrads?
The uncertainty of GiveDirectly and other GiveWell-supported charities is not actually that high (about an order of magnitude for GiveDirectly; I expect over 2-3 orders of magnitude for the others).
That seems pretty high to me! When I’ve seen GiveDirectly used as a point of comparison for other global health/poverty charities, they’re usually described as 1-10x more effective (i.e. people care about distinctions within one order of magnitude).
Why does this need charitable funding rather than being covered by existing profit incentives? Is the assumption that non-pandemic use wouldn’t be profitable enough?
A model where you think x-risk in the next few decades is a very important problem, but that donating to non-x-risk charities now is the most impactful use of money, seems weird to me. Even if x-risk work isn’t constrained by money at the moment, it seems likely that could change between now and the global catastrophe. For example, unless you are confident in a fast AI takeoff, there will probably be a time in the future when it’s much more effective to lobby for regulation than it is now (because it will be easier to do and easier to know which regulation is helpful).
I think it would be interesting to have various groups (e.g. EAs who are skeptical vs worried about AI risk) rank these arguments and see how their lists of the top ones compare.
I’m in a similar position (I donate to global poverty but care enough about x-risk to plan my career around it). I think the signalling value of donating to easy-to-pitch causes is pretty significant (probably some people find x-risk easier/more effective to pitch, but I don’t personally). aogara’s first point also resonates with me. Donating to obviously good causes also seems like it would be psychologically valuable if I end up changing my mind about the importance of x-risk in the future.
I think most people should be thinking about the optics of their donations in terms of how it affects their own ability to pitch EA, not in terms of how community-wide approaches to donation would affect the optics of the community. It seems plausible that the optics of your donations could be anywhere from basically irrelevant to much more important than the direct good they do, depending on the nature and number of conversations about EA you have with non-EA people.
Thanks! For anyone else reading this thread, this guidance seems relevant.
Note that using a trust has downsides. With a trust, I would recommend only funding individuals and non-charities with extreme caution.
Could you elaborate on this? I’m interested in setting up a trust to do microgrants, taking advantage of the fact that I can tolerate greater risks with my own money than an EA org can (I’d also be happy to let other people use the trust as a vehicle for that). The main disadvantage of trusts I’m aware of is that trustees are personally liable, but that doesn’t seem like a big risk if the trust is only making grants.
You say your estimate for when alignment will be solved is from a “Gaussian distribution… used for illustration purposes only”. How do you intend the graphs/numbers based on this estimate to be interpreted?