Hi Luke! Looks like this role could be a good fit: https://www.forethought.org/ra-exa
Hi Chukwubuikem! I might be misunderstanding your point, but if you are referring to the Effective Giving charities, this is by design! We should expect most prospective donors to be in countries with high income levels, so a charity whose main objective is to grow the pool of donors and the amount of funds committed to charitable causes should focus on countries like France, Australia, the UK, or Germany!
If you look at the direct intervention charities that Joey is talking about here (and that have been incubated by CE so far), you can see that many of them operate in and focus on African countries! (e.g. LEEP or FEM)
Hi Diego!
Thanks for your reply. I don’t know your financial situation, so I don’t want to make assumptions, but I think saving for retirement or building some general runway is important, and I would never want you to think that you aren’t doing enough, especially if you are donating 63% (!) of your income. That’s fantastic! And there will always be someone who donates more than you 😉 It’s not a race! 63% might be what works for you, and that’s great.
I should also note that while I have always been frugal, I was only able to donate this much because, for a large part of the year, I didn’t have to pay anything for housing (and, sometimes, meals), and because I didn’t count “immediate” donations of the kind mentioned above as income, which partly explains the high percentage. In 2024, I will likely move, and other changes in my personal life mean I will probably end up a lot closer to ~50% or less.
They have: https://www.charityentrepreneurship.com/center-for-alcohol-policy-solutions#:~:text=The%20Center%20for%20Alcohol%20Policy,thus%20saving%20millions%20of%20lives.
Doesn’t go much into probabilities or extinction and may therefore not be what you are looking for, but I’ve found Dan Hendrycks’ overview/introduction to AI risks to be a pretty comprehensive collection. (https://arxiv.org/abs/2306.12001)
(I, for one, would love to see someone critique this, although the FAQ at the end is already a good start on some counterarguments and possible responses to those.)
In 2023, I donated 46,645 $, which represents ~75% of my post-tax income.
[My donations are a bit messy, since I often asked employers or clients to donate my “salary” to a charity instead of having it paid out to me and only then re-directing it. Sometimes this limited my choices of where to donate, and not all employers offered this. I work on AI Safety, so my donations go towards GHD; this reflects my hedging, as well as some other philosophical reasons I am happy to share if you reach out to me.]
My 2023 donations will be split as follows:
- GiveWell (Top Charities Fund): 26,655 $
- GiveDirectly: 6,000 $
- Malaria Consortium: 3,000 $
- CEA [1]: 3,150 $
- Misc [2]: ~2,340 $
- Effektiv Spenden (as gift vouchers): ~5,500 $
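(If you want to check my arithmetic, here’s a quick sketch; the implied-income line is my own back-of-the-envelope extrapolation from the ~75% figure above, not a number from the post:)

```python
# Sanity check of the split above (amounts in $; the two "~" entries are
# approximate, so the total matches on these rounded figures).
donations = {
    "GiveWell (Top Charities Fund)": 26_655,
    "GiveDirectly": 6_000,
    "Malaria Consortium": 3_000,
    "CEA": 3_150,
    "Misc": 2_340,                               # approximate
    "Effektiv Spenden (gift vouchers)": 5_500,   # approximate
}

total = sum(donations.values())
print(total)                 # 46645, matching the 46,645 $ stated above

# Back-of-the-envelope: at ~75% of post-tax income, the implied income is
# roughly total / 0.75 (illustrative only).
print(round(total / 0.75))   # ~62193 $
```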
Going forward, I will re-evaluate whether to include or even prioritize animal welfare in my giving (I had previously decided against that, but I’m now questioning my reasoning behind that decision).
[Edited to include new donations that I just made]

[1] This is compensation for contracting work for which I negotiated an hourly rate but then never claimed payment.

[2] This mostly covers smaller amounts of work test compensation that wasn’t claimed or re-directed immediately, or work for which I was offered money but ultimately turned down the payment. I don’t want to name any organization here but am happy to share details if you are curious.
Garrison Lovely’s podcast comes to mind as a starting point on overlap and disagreements between the two communities: https://forum.effectivealtruism.org/posts/6NnnPvzCzxWpWzAb8/podcast-the-left-and-effective-altruism-with-habiba-islam
Hi Zed! Thanks for your post. A couple of responses:
“As critics of the long-termist viewpoint have noted, the base-rate for human extinction is zero.”
Yes, but this is tautologically true: Only in worlds where humanity hasn’t gone extinct could you make that observation in the first place. (For a discussion of this and some tentative probabilities, see https://www.nature.com/articles/s41598-019-47540-7)
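To make the selection effect explicit, a minimal sketch (the notation is mine, not from the paper): let $p$ be the per-century probability of extinction and let $O$ be the event that observers exist today. Since observers only exist in histories with no prior extinction,

$$P(\text{extinction-free past} \mid O) = 1 \quad \text{for any } p \in [0, 1),$$

so an observed base rate of zero is consistent with any survivable value of $p$ and, on its own, carries almost no evidence about it.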
“Instead of outlandish ideas of a new global government capable of unilaterally curtailing compute power or some other factor through force, we should focus on what is practically achievable today. Encouraging firms like OpenAI to red-team their models before release, for example, is practical and limits negative externalities.”
Why are the two mutually exclusive? I think you’re presenting a false dichotomy: as far as I know, x-risk oriented folks are amongst the leading voices calling for red teams, or are even engaging in this work themselves. (See also: https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different)
“Let’s assume for a moment that domain experts who warn of imminent threats to humanity’s survival from AI are acting in good faith and are sincere in their convictions.”
The way you phrase this makes it sound like we have reason to doubt their sincerity. I’d love to hear what makes you think we do!
“For example, a global pause in model training that many advocated for made no reference to the idea’s inherent weakness—that is, it sets up a prisoner’s dilemma in which the more AI firms voluntarily agree to pause research, the greater the incentive for any one group to defect from the agreement and gain a competitive edge. It makes no mention of practical implementation, nor does it explain how it arrived on its pause time-duration; nor does it recognize the improbability of enforcing a global treaty on AI.”
My understanding is that even strong advocates of a pause are aware of its shortcomings and communicate these uncertainties rather transparently; I have yet to meet someone who sees a pause as a panacea. Granted, the questions you ask need to be answered, but the fact that an idea is thorny and potentially difficult to implement doesn’t make it a bad one per se.
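(For concreteness, the structure the quoted passage points at is a standard one-shot prisoner’s dilemma. A toy sketch, where all payoff numbers are purely my own illustrative assumptions:)

```python
# Toy one-shot prisoner's dilemma for the quoted defection worry. Each firm
# chooses "pause" or "train"; payoff numbers are illustrative assumptions,
# not anyone's estimates.
payoffs = {  # (my_action, others_action) -> my payoff
    ("pause", "pause"): 2,  # shared safety benefit
    ("pause", "train"): 0,  # I pause, others race ahead
    ("train", "pause"): 3,  # I defect and gain a competitive edge
    ("train", "train"): 1,  # everyone races, least safety
}

for others in ("pause", "train"):
    best_reply = max(("pause", "train"), key=lambda a: payoffs[(a, others)])
    print(f"others {others} -> best reply: {best_reply}")
# "train" is the best reply in both cases, i.e. defection dominates.
```

Of course, real coordination problems involve repeated play, verification, and enforcement levers that this toy version leaves out, which is exactly where the open questions above come in.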
“A strict international regime dedicated to preventing proliferation still failed to prevent India, Israel, Pakistan, North Korea, and, likely, Iran from acquiring weapons.”
Are you talking about the NPT or the IAEA here? My expertise on this is limited (~90 hours of engagement), but I authored a case study on IAEA safeguards this summer and my overall takeaway was that domain experts like Carl Robichaud still consider these regimes success stories. I’d be curious to hear where you disagree! :)
This is from 2016, but worth looking into if you’re curious how this works:
“At least 50% of each program officer’s grantmaking should be such that Holden and Cari understand and are on board with the case for each grant. At least 90% of the program officer’s grantmaking should be such that Holden and Cari could easily imagine being on board with the grant if they knew more, but may not be persuaded that the grant is a good idea. (When taking the previous bullet point into account, this leaves room for up to 40% of the portfolio to fall in this bucket.) Up to 10% of the program officer’s grantmaking can be done without meeting either of the above two criteria, though there are some basic checks in place to avoid grantmaking that creates risks for Open Philanthropy. We call this “discretionary” grantmaking. Grants in this category generally follow a different, substantially abbreviated approval process. Some examples of discretionary grants are here and here.”
(https://www.openphilanthropy.org/research/our-grantmaking-so-far-approach-and-process/)
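(If it helps, the three thresholds in that quote can be summarized as a simple check; the example portfolio shares below are hypothetical, and only the 50%/90%/10% thresholds come from the quote:)

```python
# The quoted OP policy, as constraints: >= 50% fully endorsed grants,
# first two buckets together >= 90% (which is why the middle bucket can be
# up to 40%), and <= 10% discretionary. Shares below are made-up examples.
def satisfies_policy(endorsed: float, plausible: float, discretionary: float) -> bool:
    assert abs(endorsed + plausible + discretionary - 1.0) < 1e-9, "shares must sum to 1"
    return endorsed >= 0.50 and (endorsed + plausible) >= 0.90 and discretionary <= 0.10

print(satisfies_policy(0.55, 0.38, 0.07))  # True: all three constraints hold
print(satisfies_policy(0.45, 0.45, 0.10))  # False: fully-endorsed bucket below 50%
```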
Could you share where you donate? I’ve always found it fascinating to hear where people like you (leading a, dare I say, successful effective nonprofit) donate.
- If you don’t donate to ALLFED, why is that? (Are you hedging, or are you actually not convinced it’s the best giving opportunity out there...?)
- If you donate to ALLFED, what’s the case for not just taking a lower salary? (Or is that what you do?)
- I am not convinced that “having a bigger public presence in media” is a reliable way to get democratic buy-in. (There is also a “damned if you do, damned if you don’t” dynamic going on here: if OP were constantly engaging in media interactions, they’d probably be accused of “unduly influencing the discourse/the media landscape”.) Could you describe what a more democratic OP would look like?
- You mention “less billionaire funding”, but OP was built on the idea of giving away Dustin’s and Cari’s money in the most effective way. OP is not fundraising, it is grantmaking! So how could it, as you put it, “rely on a more diverse pool of funding”? (See also: https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money) I also suspect we would see the same dynamic as above: if OP did actively try to secure additional money in the form of government grants, they’d be maligned for absorbing public resources in spite of their own wealth.
- I think a blanket condemnation of political lobbying, or the suggestion to “do less” of it, is not helpful. Advocating for better policies (in animal welfare, GHD, pandemic preparedness, etc.) is in my view one of the most impactful things you can do. I fear we are throwing the baby out with the bathwater here.
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It’s been a while since I read this so I’m not sure it is what you are looking for, but Gideon Futerman had some ideas for what “bridge building” might look like.)
Hi Stan!
Some of this has been discussed before, maybe a good starting point would be this post (https://forum.effectivealtruism.org/posts/Rts8vKvbxkngPbFh7/should-ea-shift-away-a-bit-from-elite-universities) or this one (https://forum.effectivealtruism.org/posts/LCfQCvtFyAEnxCnMf/a-slightly-i-think-different-slant-on-why-ea-elitism-bias).
Both pieces take a more critical view of “elitism” so might not be what you are looking for in terms of steelmanning but hope it helps nonetheless! :)
“Supporting a global EA community is expensive—e.g. flying people to conferences in the US and UK from places like South Africa and India is often ~4X the price of local attendees’ travel costs; we have to sponsor travel and work visas.”
Well, it is, but only as long as you assume that all conferences should be held in the US and the UK in the first place (for discussions on this, see this and this).
That works!
“It can be difficult to construct and maintain co-leadership roles.”
This might generally be true, but some of the more prominent EA organisations have successfully pulled this off, with Rethink Priorities having both Peter and Marcus as Co-CEOs, and Open Philanthropy with its temporary Co-CEO split.
Counterpoints:
- limited data available (Co-CEOs still in the minority, few successful case studies of Co-CEO partnerships that lasted decades, not just years)
- RP splits their portfolio and so does OP, so a split in executive leadership seems reasonable—I’m unsure what such a split might look like for CEA
I know we use the term “Fellowship” for anything and everything in the EA community, but wouldn’t a program that charges tuition etc. be more accurately described as a “Course”, “Class”, “Training” or “Bootcamp”?
Or, to be less adversarial: How did you decide on the name for this program? :)
I still get this: “The private share link you tried to reach is not available. The owner of this base may have unshared or deleted it. Please contact them if you need access.”
https://www.yieldandspread.org/ comes to mind :)
You’ve probably seen this (or have already applied) but this role seems like a potentially good fit: https://www.forethought.org/ra-exa