Speaking for myself, this is the sort of thing that would make me more excited to sign a pledge.
Thanks for posting this.
Do we have an intuition for how to apply Shapley values in typical EA scenarios? For example:
• How much credit goes to donors, vs. charity evaluators, vs. object-level charities?
• How much credit goes to charity founders/executives, vs. other employees/contractors?
• How much credit goes to meta vs. object-level organizations?
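As an intuition pump for the first question, here is a minimal sketch of a Shapley computation over {donor, evaluator, charity}. Every coalition value below is an invented number for illustration only, not an estimate of anything real:

```python
from itertools import permutations

# Hypothetical characteristic function over {donor, evaluator, charity}:
# the value (say, in lives saved) each coalition could produce on its own.
# All of these numbers are made up purely for illustration.
v = {
    frozenset(): 0,
    frozenset({"donor"}): 0,              # money alone funds nothing effective
    frozenset({"evaluator"}): 0,          # research alone saves no one
    frozenset({"charity"}): 2,            # charity scrapes by on its own
    frozenset({"donor", "evaluator"}): 0,
    frozenset({"donor", "charity"}): 6,   # unguided giving, less effective
    frozenset({"evaluator", "charity"}): 2,
    frozenset({"donor", "evaluator", "charity"}): 10,  # the full pipeline
}

players = ["donor", "evaluator", "charity"]

def shapley(player):
    """Average the player's marginal contribution over all join orders."""
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        joined_before = frozenset(order[: order.index(player)])
        total += v[joined_before | {player}] - v[joined_before]
    return total / len(orders)

for p in players:
    print(f"{p}: {shapley(p):.2f}")
# donor: 3.33, evaluator: 1.33, charity: 5.33 -- the shares sum to the
# full coalition's value of 10, per the Shapley efficiency property.
```

With these made-up numbers the charity gets the largest share, but the split is entirely an artifact of the assumed coalition values; the hard part in practice is estimating the counterfactual value of each coalition.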
Hmm, I would argue that an AI which, when asked, causes human extinction is not aligned, even if it did exactly what it was told.
Is your question how we should think about meta vs object level work, excluding considerations of personal fit? Because, at least in this example, I would expect fit considerations to dominate.
Basically, it seems to me that for any given worker, these career options would have pretty different levels of expected productivity, influenced by things like aptitude and excitement/motivation. And my prior is that in most cases, these productivity differences should swamp the sort of structural considerations you bring up here.
Have you already solicited funding from government funders such as NIH or CDC, or philanthropic funders such as Open Philanthropy? If so, what did they say about this?
This seems bad, if true. Has anyone considered reaching out to these people?
Did it? My sense was only that (a) the amount of money from six-figure donations was nonetheless dwarfed by Open Philanthropy, and (b) as the number of professionals in EA has increased, the percentage of the community focused on donations has been diluted somewhat. But we’re still around!
Don’t forget that maximizing is perilous.
Maybe just bet on V-Dem, or Regimes of the World? There is already one market for that: https://manifold.markets/Siebe/if-trump-is-elected-will-the-us-sti?play=true
I stopped reading at the end of the first paragraph when he said colonizing Mars was a “principal obsession” of EA advocates.
I’m not necessarily disputing the idea that donating to these sorts of fundraising organizations is a good use of money; but we also need to be careful about double-counting. It’s tempting to try to take credit for one’s own meta donations while object-level donors are also taking full credit for the programs they fund.
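A toy calculation makes the worry concrete; every figure here is invented for illustration:

```python
# Toy illustration of the double-counting worry (all numbers hypothetical).
meta_donation = 100_000            # gift to a fundraising org
multiplier = 10                    # assumed: it moves $10 per $1 it spends
money_moved = meta_donation * multiplier  # $1,000,000 of object-level funding

meta_donor_claim = money_moved     # meta donor credits themselves the full $1M
object_donor_claims = money_moved  # object-level donors also claim their $1M

total_claimed = meta_donor_claim + object_donor_claims
print(f"claimed impact:  ${total_claimed:,}")  # $2,000,000 of claimed impact...
print(f"actual programs: ${money_moved:,}")    # ...for $1,000,000 of programs
```

Naive credit claims sum to twice the underlying program spending, which is exactly the accounting problem a Shapley-style split is meant to avoid.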
My practice, perhaps adjacent but not identical to the one proposed here, is to give 15% of a donation to the charity evaluator or facilitator that introduced me to the main charity or program. In recent years that’s been GiveWell, and the fact that they have an excess funds regranting policy makes this an even easier decision.
Why is this better than actually talking to someone with the opposing viewpoint?
Aren’t both AMF and GiveDirectly examples of charities that became more cost-effective after scaling into the millions of dollars?
People may not be aware that GiveWell provides this service in the global health and development space. They are happy to work with donors to find grant opportunities that are a good fit.
Are there even allegations of selective moderation of political content?
Was asking for randomized controlled trials (or other methods) to demonstrate effectiveness really "shockingly revolutionary"?
EA didn’t invent RCTs, or even popularize them within the social sciences, but their introduction into development work was indeed a major change in thinking. Abhijit Banerjee, Esther Duflo, and Michael Kremer won the Nobel Prize in economics largely for demonstrating the experimental approach to the study of development.
Speaking for myself, the main reason I don’t get involved in AI stuff is that I feel clueless about what the correct action might be (and how valuable it might be, in expectation). I think there is a pretty strong argument that EA involvement in AI risk has made things worse, not better, and I wouldn’t want to make things even worse.
What country are you in? That will make a big difference as to the answer.
According to ZeroGPT, this comment was 70% AI-generated.
Here’s a GiveWell blog post from 2009 that engages with this question.