Hey there,
It seems you embrace a pretty intense version of consequentialism. Few consequentialists would agree that someone struggling with a chronic disease that renders donation risky or harmful would still have a duty to donate a kidney. And at least among scholars of utilitarianism, most reject the most straightforward forms of act consequentialism that you seem to have in mind. On straightforward act consequentialism, you are almost always failing at what you should do, because there will always be some better way to bring about good consequences. Often this causes considerable psychological distress and undermines our overall goal of doing good.
All that is to say, I think it's cool you are thinking so openly about some important choices! It might be useful for you to read some other consequentialist texts that try to square consequentialism with real-life challenges, including those of the psychological sort. A good starting point is Peter Railton, for example his paper 'Alienation, Consequentialism, and the Demands of Morality': https://www.jstor.org/stable/2265273
mhendric🔸
I’m happy to see engagement with this article, and I think you make interesting points.
One bigger-picture consideration that I think you are neglecting is that even if your arguments go through (which is plausible), the argument for longtermism/​xrisk shifts significantly.
Originally, the claim is something like
There is really bad risky tech
There are a ton of people in the future
Risky tech will prevent these people from having (positive) lives
________________________________
Reduce tech risk
On the dialectic you sketch, the claim is something like
There is a lot of really bad risky tech
This tech, if wielded well, can reduce the risk of all other tech to zero
There is a small chance of a ton of people in the future
If we wield the tech well and get a ton of people in the future, that's great
_________________________________________
Reduce tech risk (and, presumably, make it powerful enough to eliminate all risk and start having kids)
I think the extra assumptions we need for your arguments against Thorstad to go through are ones that make longtermism much less attractive to many people, including funders. They also make x-risk unattractive for people who disagree with p2 (i.e., people who do not believe in superintelligence).
I think people are aware that this makes longtermism much less attractive; I typically don't see x-risk work being motivated in this more assumption-heavy way. And, as Thorstad usefully points out, there is virtually no serious EV calculus for longtermist interventions that does a decent job of accounting for these complexities. That's a shame, because EA at least originally seemed very diligent about providing explicit, high-quality EV models instead of going by vibes and philosophical argument alone.
Another Philosophers Against Malaria Fundraiser has begun: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9418
Over the last few years, we have raised roughly $65,000 in donations. Early donations are especially helpful, as they populate the page and give a sense of dynamism!
Any share with philosophers or university patriots you know would be especially welcome. The fundraiser is a 'competition' between departments that aggregates donations; the winner is announced on the popular philosophy blog Daily Nous. Last year, the good folks at Delaware won. Before that, Michigan took the crown. Ohio State and Villanova lie in shambles.

Any help is much appreciated! These are easy to run; if you are interested in starting one for your discipline, please reach out.
Hey Emmannaemeka,
Thank you for writing this! I have little insight as to which EA roles you might or might not be a good fit for. But I wanted to chime in on ways of fitting into the EA community, as opposed to EA orgs. I am in academia, too, and do not myself strive to get a job with an EA org. I do not think this makes me ‘less EA’. There are many really good ways to contribute to the overall EA project that are not at EA organizations.
I find that one of the privileges of academia is teaching ambitious, talented students. Many students enter university with a burning zeal to change the world and bring about positive change. As teachers, we can have a real impact by guiding such students towards realizing their values and going into positions where they can effectively make the world a better place. I am naturally biased in my assessment here, but I think it's plausible that teaching can have a bigger impact than direct work: it is a realistic aim to help multiple students grow into direct roles in EA-style organizations. I often think that many of these students are 'better fits' than I myself would be in such roles.

It strikes me that as a faculty member in a genuinely meaningful and important field, you'd be in a premier position to have impact through your teaching.
I recommend reading Railton, P. (1984). Alienation, consequentialism, and the demands of morality. Philosophy & Public Affairs, 134-171.
I agree that we shouldn’t use e2g as a shorthand for skillmaxing.
I am less optimistic about the 'fit' vs. raw competence point. It's not clear to me that fit for a position can easily be gleaned from work tests: a very competent person may be able to acquire that 'fit' within a few weeks on the job, once they have more context for the kind of work the organization wants. So even if the candidates looked very different at the point of hiring, a fair comparison would require imagining both in the actual job context, having learned things they did not know at the time of hiring.
I am more broadly worried about 'fit' in EA hiring contexts because, as opposed to markers of raw competence, 'fit' provides a lot of flexibility for selecting on traits that are relatively tangential to work performance and often unreliable. For example, value-fit might select for hiring like-minded folks who have read the same stuff the hiring manager has, reducing epistemic diversity. A fit for similar research interests reduces epistemic diversity and locks in certain research agendas for a long time. A vibe-fit may simply select for friends and those who have internalized the community's norms. A work test on an explicitly EA project may select for those already familiar with EA, even if an outside candidate could easily pick up basic EA knowledge once on the job.

My impression is that, overall, EA has a noticeable and suboptimal tendency to hire like-minded folks and folks in overlapping social circles (i.e., friends and friends of friends). Insofar as 'fit' makes it easier to justify this tendency internally and externally, I worry that it will lead to suboptimal hiring. I acknowledge we may have very different kinds of 'fit' in mind here, but I do think the examples above exist in EA hiring decisions.
I haven't done hiring rounds for EA, so I may be completely wrong; maybe your experience has been that after a few work tests it becomes abundantly clear who the right candidate is.
This is a cool list. I am unsure if this one is very useful:
* There aren’t many salient examples of people doing direct work that I want to switch to e2g.
This is because I think we are not able to evaluate which replacement candidate would fill the role if the employed EA had done e2g instead. My understanding is that many extremely talented EAs have trouble finding jobs within EA, and that many of them are capable of working at the level of current EA employees.
This reason, I think, cuts both ways:

* E2g is often less well optimised for learning useful object-level knowledge and skills than direct work.
My understanding is that many non-EA jobs provide useful knowledge and skills that are underrepresented in current EA organizations, though my impression is that this is improving as EA organizations professionalize. For example, I wouldn't be surprised if, on average, a highly talented undergrad would become a more effective employee of an EA organization after spending two years earning to give at an ordinary corporation before starting direct work. And if we're lucky, such experiences outside EA would promote epistemic diversity and reduce the risk of groupthink in EA organizations.
My understanding is that competition for EA jobs is extremely high, and that posted roles attract sufficient numbers of outstanding candidates. This seems like strong evidence to me that a fair share of people applying to EA jobs should consider e2g, unless they have reason to believe they specifically outshine other applicants (i.e., that the job would not otherwise be filled by an equally competent person).
Regarding skeptical optimism, how about
Cautious Optimism
Safety-conscious optimism
Lighthearted skepticism
Happy Skepticism
Happy Worries
Curious Optimism
Positive Skepticism
Worried Optimism
Careful Optimism
Vigilant Optimism
Vigilant Enthusiasm
Guarded Optimism
Guarded Enthusiasm
Mindful Optimism
Mindful Enthusiasm
Just throwing a bunch of suggestions out in case one of them sounds good to your ear.
To AMF, as part of this yearly fundraiser I run: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
I really liked this post, and found the second half of it especially insightful.
I love your blog and reliably find it to offer the highest-quality EA criticism around. It has shifted my view on a handful of issues.
It may be helpful for non-philosophy readers to know that the journals these papers are published in are very impressive. For example, Ethics (which published the 'Mistakes in Moral Math of Longtermism' paper) is the most well-regarded ethics journal I know of in our discipline, akin to what Science or Nature would be for a natural scientist.

I am somewhat disheartened that those papers did not gain visible uptake from key players in the EA space (e.g., 80K, Open Phil), especially since they were published while most EA organizations struck me as moving strongly towards longtermism/AI risk. My sense is that they were briefly acknowledged, then simply ignored. I don't think the same would have happened with, e.g., a Science or Nature paper.
To stick with the 'Mistakes in Moral Math' paper, for example: I think it puts forward a very strong argument against the very few explicit numerical models of EV calculations for longtermist causes. A natural longtermist response would be to adjust those models or present new ones, incorporating factors such as background risk that are currently left out. I have not seen any such models. Rather, I feel that longtermist pitches often get very hand-wavy when pressed for explicit EV models comparing their interventions to, e.g., AMF or GiveDirectly. I take a central point of your paper to be that it is very bad that we have almost no explicit numerical models, and that those we have neglect crucial factors. To me, it seems that this very valid criticism went largely unheard: I have not seen new numerical EV calculations for longtermist causes since publication. This may of course be a me problem; please send me any comparative analyses you know of!
I don’t want to end on such a gloomy note—even if I were right that these criticisms are valid, and that EA fails to update on them, I am very happy that you do this work. Other critics often strike me as arguing in bad faith or being fundamentally misinformed—it is good to have a good-faith, quality critique to discuss with people. And in my EA-adjacent house, we often discuss your work over beers and food and greatly enjoy it haha. Please keep it coming!
I am organizing a fundraising competition between Philosophy Departments for AMF.
You can find it here: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
Previous editions have netted (badum-tschak) roughly $40,000:
https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9189
Any contributions are very welcome, as is sharing the fundraiser. A more official-looking announcement is on Daily Nous, a central blog of academic philosophy; people have found this ideal for sharing via, e.g., department listservs.
https://dailynous.com/2024/12/02/philosophers-against-malaria-a-fundraising-competition/
These are relatively low-effort to set up; I spend maybe 10–20 hours a year on them. If you are interested in setting up something similar for your discipline or social circles, feel very welcome to reach out for help.
mhendric's Quick takes
I don’t find this convincing. It seems to me that updating that one line on your website should not take longer than e.g. writing this comment. Why would you think it has a significant tradeoff?
Are you familiar with Probably Good and their 1on1 career advising? This seems like a natural fit!
Thanks both, that’s exactly what I meant to be asking.
I understand! Out of curiosity, does whether the organization want to stay anonymous factor into the decision in any way?
Great to hear the second round was successful. Given an anonymous AI org is taking up half of the budget, I wonder what the overall approach of the org is, what makes you think you’re the best-suited funder for it, or what reasons led to granting anonymity to the organization. If there’s anything you’d be willing to share on any of these, it’d be greatly appreciated!
Is there any reason to believe this is pretty common? My understanding is that backing down from a pledge is exceedingly rare (New Report: 92% of Global Cage-Free Egg Commitments Fulfilled, Signaling a Tipping Point for Farm Animal Welfare | Ethical Marketing News). Of course, the above-mentioned news is tragic given the size of the companies involved.