How much of the money raised by Effektiv Spenden, etc. is essentially a pass-through to GiveWell? (I know Israel now has a similar initiative, but it is in large part passing the money to the same orgs.)
Davidmanheim
ALTER Israel End-of-2024 Update
I’m cheating a bit, because both of these are well on their way, but here are two big current goals:
Get Israel to iodize its salt!
Run an expert elicitation on Biorisk with RAND and publish it.
Not predictions as such, but lots of current work on AI safety and steering is based pretty directly on paradigms from Yudkowsky and Christiano—from Anthropic’s constitutional AI to ARIA’s Safeguarded AI program. There is also OpenAI’s Superalignment research, which was attempting to build AI that could solve agent foundations—that is, explicitly do the work that theoretical AI safety research identified. (I’m unclear whether that last effort is ongoing, given that they managed to alienate most of the people involved.)
I strongly agree that you need to put your own needs first, and I think that your level of comfort with your savings, and your ability to withstand foreseeable challenges, is a key input. My general go-to is that the standard advice of keeping 3–6 months of expenses in savings is a reasonable goal—so you can and should give, but until you have saved that much, you should at least be splitting your excess funds between savings and charity. (And the reason most people don’t manage this has a lot to do with lifestyle choices and failure to manage their spending—not just not having enough income. Normal people never have enough money to do everything they’d like to; set your expectations clearly and work to avoid the hedonic treadmill!)
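To make the rule of thumb concrete, here is a minimal sketch in Python; the function name, the dollar figures, and the 50/50 split are all illustrative assumptions on my part, not a recommendation:

```python
def monthly_giving(surplus, savings, monthly_expenses, months_target=6, split=0.5):
    """Illustrative only: give the whole surplus once the emergency fund
    reaches months_target months of expenses; until then, split the
    surplus between savings and charity (here, an assumed 50/50)."""
    target = months_target * monthly_expenses
    if savings >= target:
        return surplus  # buffer is full: the whole surplus can go to charity
    return surplus * split  # still building savings: give only a fraction

# Hypothetical numbers: $500/month surplus, $3,000/month expenses
print(monthly_giving(500, 6_000, 3_000))   # -> 250.0 (below the 6-month target)
print(monthly_giving(500, 20_000, 3_000))  # -> 500 (target met, give it all)
```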
To follow on to your point, as it relates to my personal views (in case anyone is interested), it’s worth quoting the code of Jewish law. It introduces its discussion of Tzedakah by asking how much one is required to give: “The amount, if one has sufficient ability, is giving enough to fulfill the needs of the poor. But if you do not have enough, the most praiseworthy version is to give one fifth, the normal amount is to give a tenth, and less than that is a poor sign.” And I note that this was written in the 1500s, when local charity was the majority of what was practical; today’s situation is one where the needs are clearly beyond any one person’s ability—so the latter clauses are the relevant ones.
So I think that, in a religion that prides itself on exacting standards and exhaustive rules for the performance of mitzvot, this endorses exactly your point: while giving might be a standard, and norms and community behavior are helpful in guiding behavior, the amount to give is always a personal and pragmatic decision, not a general rule.
Exploring Cooperation: The Path to Utopia
You seem to be framing this as if deontology is just side constraints with a base of utilitarianism. That’s not how deontology works—it’s an entire class of ethical frameworks on its own.
Moderately Skeptical of “Risks of Mirror Biology”
Deontology doesn’t require that you avoid utilitarian calculations entirely, just that the rules you follow are not justified solely on the basis of outcomes. A deontologist can believe they have a moral obligation to give 10% of their income to the most effective charity as judged by expected outcomes, for example, making them in some real sense a strictly EA deontologist.
You seem to be generally conflating EA and utilitarianism. If nothing else, there are plenty of deontologist EAs. (Especially if we’re being accurate with terminology!)
There’s a new post or two discussing this:
https://www.lesswrong.com/posts/GdBwsYWGytXrkniSy/miri-s-june-2024-newsletter
https://www.lesswrong.com/posts/cqF9dDTmWAxcAEfgf/communications-in-hard-mode-my-new-job-at-miri
And an older one from last year: https://www.lesswrong.com/posts/NjtHt55nFbw3gehzY/announcing-miri-s-new-ceo-and-leadership-team
Agreed, this shouldn’t be an update for anyone paying attention. Of course, lots of people skeptical of AI risks aren’t paying attention, so the actual level of capabilities is still being dismissed as impossible sci-fi; it’s probably good for them to notice.
I don’t think that people making mild, bounded commitments is bad—I’m more concerned about the community dynamics of selecting for people who make these commitments and stick with them, and the impact that has on the rest of the community.
I agree with most of what you wrote here, but I think that the pledge, as a specific high-resolution effort, is not helpful. You’re confusing what zero-sum does and does not mean—I agree that a community that acts the way the EA community has is unfortunately exclusionary, but I also think that making more pledges does the opposite of removing those dynamics. And looking at the outcomes for those who made pledges and stuck around is selecting on the outcome variable; the damage that high expectations cause may be worthwhile on net, but it would be unreasonable to reach that conclusion on the basis of talking only to those who stuck around.
I strongly agree.
It seems that living in the Bay Area as an EA has a huge impact, and the dynamics are healthier elsewhere. (The fact that a higher concentration of EAs is worse, of course, is at least indicative of a big problem.)
This seems like a reasonable mistake for younger EAs to make, and I’ve seen similar mindsets frequently—but I am very happy to see that many other members of the community are providing a voice of encouragement, along with significantly more moderation.
But as I said in another comment, and expanded on in a reply, I’m much more concerned than you seem to be about people committing to something even milder for their entire careers—especially if they do so as college students. Many people don’t find work in the area they hope to. Even among those who do find jobs in EA orgs and the like—a small proportion of those who want to—some don’t enjoy the things they would view as most impactful, and find they are unhappy and/or ineffective; having made a commitment to do whatever is most impactful seems unlikely to work well for a large fraction of those who would make such a pledge.
I think it’s a problem overall, and I’ve talked about this a bit in two of the articles I linked to. To expand: I’m concerned on a number of levels, ranging from community dynamics that seem to dismiss anyone not doing direct work as insufficiently EA, to the idea that we should be a community that encourages young adults to turn often already unhealthy levels of commitment into pledges to continue that level of dedication for their entire careers.
As someone who has spent most of a decade working in EA, I think this is worrying, even for people deciding on their own to commit themselves. People should be OK with prioritizing themselves to a significant extent; while deciding to work on global priorities is laudable *if you can find something that fits your abilities and skill set*, committing to do so for your entire career, which may not follow the path you are hoping for, seems at best unwise. Suggesting that others do so seems very bad.
So again, I applaud the intent, and think it was a reasonable idea to propose and get feedback about, but I also strongly think it should be dropped and you should move to something else.
I’m more concerned that the actual survey language is “avert” not “save”—and obviously, we shouldn’t do any projects which avert DALYs.
I don’t think multi-person disagreements are, in general, a tractable problem for one-hour sessions. It sounds like you need someone in charge to enable “disagree then commit,” rather than a better way to argue.