You seem to be generally conflating EA and utilitarianism. If nothing else, there are plenty of deontologist EAs. (Especially if we’re being accurate with terminology!)
There’s a new post or two discussing this: https://www.lesswrong.com/posts/GdBwsYWGytXrkniSy/miri-s-june-2024-newsletter https://www.lesswrong.com/posts/cqF9dDTmWAxcAEfgf/communications-in-hard-mode-my-new-job-at-miri
And an older one from last year: https://www.lesswrong.com/posts/NjtHt55nFbw3gehzY/announcing-miri-s-new-ceo-and-leadership-team
Agreed, this shouldn’t be an update for anyone paying attention. Of course, lots of people skeptical of AI risks aren’t paying attention, so the actual level of capabilities is still being dismissed as impossible sci-fi; it’s probably good for them to notice.
I don’t think that people making mild, bounded commitments is bad. I’m more concerned about the community dynamics of selecting for people who make these commitments and stick with them, and the impact that has on the rest of the community.
I agree with most of what you wrote here, but think that the pledge, as a specific high-resolution effort, is not helpful. You’re confusing what zero-sum does and does not mean; I agree that a community that acts the way the EA community has is unfortunately exclusionary, but I also think that making more pledges does the opposite of removing those dynamics. I also think that looking at the outcomes for those who made pledges and stuck around is selecting on the outcome variable; the damage done by high expectations may be worthwhile on net, but it would be unreasonable to reach that conclusion on the basis of talking only to those who stuck around.
I strongly agree.
It seems that living in the Bay Area as an EA has a huge impact, and the dynamics are healthier elsewhere. (The fact that a higher concentration of EAs is worse, of course, is at least indicative of a big problem.)
This seems like a reasonable mistake for younger EAs to make, and I’ve seen similar mindsets frequently. Within the community, though, I am very happy to see that many other members are providing a voice of encouragement, along with significantly more moderation.
But as I said in another comment, and expanded on in a reply, I’m much more concerned than you seem to be about people committing to something even more mild for their entire careers, especially if they do so as college students. Many people don’t find work in the area they hope to. Even among those who do find jobs in EA orgs and similar (a small proportion of those who want to), some don’t enjoy the things they would view as most impactful, and find they are unhappy and/or ineffective; having made a commitment to do whatever is most impactful seems unlikely to work well for a large fraction of those who would make such a pledge.
I think it’s a problem overall, and I’ve talked about this a bit in two of the articles I linked to. To expand on the concerns: I’m worried on a number of levels, ranging from community dynamics that seem to dismiss anyone not doing direct work as insufficiently EA, to the idea that we should be a community that encourages young adults to turn often already-unhealthy levels of commitment into pledges to sustain that level of dedication for their entire careers.
As someone who has spent most of a decade working in EA, I think this is worrying, even for people deciding on their own to commit themselves. People should be OK with prioritizing themselves to a significant extent, and while deciding to work on global priorities is laudable *if you can find something that fits your abilities and skill set*, committing to do so for your entire career, which may not follow the path you are hoping for, seems at best unwise. Suggesting that others do so seems very bad.
So again, I applaud the intent, and think it was a reasonable idea to propose and get feedback about, but I also strongly think it should be dropped and you should move to something else.
I’m more concerned that the actual survey language is “avert” not “save”—and obviously, we shouldn’t do any projects which avert DALYs.
Good post, though I think the digression bashing the Democrats was unhelpfully divisive.
Looks like it checks out: “Act as if what you do makes a difference. It does.” Correspondence with Helen Keller, 1908, in The Correspondence of William James: April 1908–August 1910, Vol. 12 (Charlottesville: University of Virginia Press, 2004), page 135; as cited in Academics in Action!: A Model for Community-engaged Research, Teaching, and Service (New York: Fordham University Press, 2016), page 71: https://archive.org/details/academicsinactio0000unse/page/1/mode/1up
“Neither of which current LLMs appear to be capable of.”
If o1 pro isn’t able to both hack and get money yet, it’s shockingly close. (Instruction tuning for safety makes accessing that capability very difficult.)
My tentative take is that this is on-net bad, and should not be encouraged. I give this a 10/10 for good intent, but a 2/10 for planning and avoiding foreseeable issues, including the unilateralist’s curse, the likely object-level impacts of the pledge, and the reputational and community impacts of promoting the idea.
It is not psychologically healthy to optimize or maximize your life toward a single goal, much less to commit to doing so, and that isn’t the EA ideal. Promising to “maximize my ability to make a meaningful difference” is an unlimited and worryingly cult-like commitment, and it builds in no feedback from others who have a broader perspective about what is or is not important or useful. It implicitly requires pledgers to prioritize impact over personal health and psychological wellbeing. (The claim that burnout usually reduces impact is a contingent one, and relying on it seems very likely to lead many people to overcommit and do damaging things.) It leads to unhealthy competitive dynamics, and excludes most people, especially the psychologically well-adjusted.
I will contrast this with the Giving What We Can pledge, which is very explicitly a partial pledge, requiring 10% of your income. That is achievable without extreme measures or giving up a normal life. That pledge was also built via consultation with and advice from a variety of individuals, especially including those with more experience, which also seems to sharply contrast with this one.
I’ve referred to this latter point as “candy bar extinction”: using fixed discount rates, a candy bar today is better than preventing extinction with certainty after some number of years. (And with moderately high discount rates, the number of years isn’t even absurdly high!)
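To make the arithmetic concrete, here’s a minimal sketch; the dollar values and the discount rate are assumptions chosen purely for illustration, not estimates:

```python
import math

# Candy bar extinction, illustrated: find the horizon T at which the
# discounted value of averting extinction drops below a candy bar today,
# i.e. extinction_value / (1 + r)**T < candy_bar_value.
candy_bar_value = 1.0     # a candy bar today, in dollars (assumption)
extinction_value = 1e15   # assumed present value of averting extinction
r = 0.10                  # a moderately high annual discount rate

T = math.log(extinction_value / candy_bar_value) / math.log(1 + r)
print(f"At a {r:.0%} discount rate, averting extinction {T:.0f}+ years out "
      f"is 'worth' less than a candy bar today.")  # ~362 years
```

Because T grows only logarithmically in the assumed value, making extinction prevention ten times more valuable pushes the crossover out by only about 24 more years at that rate.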
Thanks, this is helpful as a term, and closely related to privileging the hypothesis: https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis The general solution, of course, is expensive but necessary: https://secondenumerations.blogspot.com/2017/03/episode-6-method-of-multiple-working.html
Worth noting that a number of 1DaySooner research projects I worked on or ran have paid undergraduates, graduate students, and medical school students for supervised work on research projects, which is effectively very similar to a paid internship; but as you mentioned, it’s very hard to do so outside a well-scoped project.
I’ve written about this here, where I said, among other things:
Obviously, charity is a deeply personal decision—but it’s also a key way to impact the world, and an expression of religious belief, and both are important to me. Partly due to my experience, I think it’s important to dedicate money to giving thoughtfully and in advance, rather than doing so on an ad-hoc basis—and I have done this since before hearing about Effective Altruism. But inspired by Effective Altruism and organizations like GiveWell, I now dedicate 10% of my income to charities that have been evaluated for effectiveness, and which are aligned with my beliefs about charitable giving.
In contrast to the norm in effective altruism, I only partially embrace cause neutrality. I think it’s an incomplete expression of how my charity should impact the world. For that reason, I split my charitable giving between effective charities which I personally view as valuable, and deference to cause-neutral experts on the most impactful opportunities. Everyone needs to find their own balance, and I have tremendous respect for people who donate more, but I’ve been happy with my decision to cap my effective charitable giving at 10%; beyond that, I still feel free to donate to other causes, including those that can’t be classified as effective at all.
As suggested above, community is an important part of my budget. A conclusion I came to after reflecting on the question, and grappling with effective altruism, is that separate from charitable giving, it’s important to pay for public goods you benefit from, both narrow ones like community organizations and broader ones. That is why I think it’s worth helping to fund community centers, why I paid for NPR membership when I lived in the US, and why I pay to offset carbon emissions to reduce the harms of climate change.
This is the wrong thing to try to figure out; most of the probability of existential risk likely comes from scenarios that don’t make a clear or intelligible story. Quoting Nick Bostrom:
Suppose our intuitions about which future scenarios are “plausible and realistic” are shaped by what we see on TV and in movies and what we read in novels. (After all, a large part of the discourse about the future that people encounter is in the form of fiction and other recreational contexts.) We should then, when thinking critically, suspect our intuitions of being biased in the direction of overestimating the probability of those scenarios that make for a good story, since such scenarios will seem much more familiar and more “real”. This Good-story bias could be quite powerful. When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch. So we don’t see many stories of that kind. If we are not careful, we can be misled into believing that the boring scenario is too farfetched to be worth taking seriously. In general, if we think there is a Good-story bias, we may upon reflection want to increase our credence in boring hypotheses and decrease our credence in interesting, dramatic hypotheses. The net effect would be to redistribute probability among existential risks in favor of those that seem harder to fit into a selling narrative, and possibly to increase the probability of the existential risks as a group.
Deontology doesn’t require that you avoid utilitarian calculations entirely, just that the rules you follow not be justified solely on the basis of outcomes. A deontologist can believe they have a moral obligation to give 10% of their income to the most effective charity as judged by expected outcomes, for example, making them in some real sense a strictly EA deontologist.