Another Philosophers Against Malaria Fundraiser has begun: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9418
In previous years, we raised roughly $65,000 in donations. Early donations are especially helpful, as they populate the page and give a sense of momentum!
Sharing this with philosophers or university patriots you know would be especially welcome. The fundraiser is a ‘competition’ between departments that aggregates donations; the winner is announced on the popular philosophy blog Daily Nous. Last year, the good folks at Delaware won. Before that, Michigan took the crown. Ohio State and Villanova lie in shambles.
Any help is much appreciated! These fundraisers are easy to run; if you are interested in starting one for your discipline, please reach out.
I’m happy to see engagement with this article, and I think you make interesting points.
One bigger-picture consideration that I think you are neglecting is that even if your arguments go through (which is plausible), the argument for longtermism/x-risk shifts significantly.
Originally, the argument is something like:
P1. There is really bad, risky tech.
P2. There will be an enormous number of people in the future.
P3. Risky tech will prevent these people from having (positive) lives.
________________________________
C. Reduce tech risk.
On the dialectic you sketch, the argument is something like:
P1. There is a lot of really bad, risky tech.
P2. This tech, if wielded well, can reduce the risk of all other tech to zero.
P3. There is a small chance of an enormous number of people in the future.
P4. If we wield the tech well and get an enormous number of people in the future, that's great.
_________________________________________
C. Reduce tech risk (and, presumably, make the tech powerful enough to eliminate all risk, and start having kids).
I think the extra assumptions we need for your arguments against Thorstad to go through are ones that make longtermism much less attractive to many people, including funders. They also make x-risk work unattractive to anyone who rejects P2, i.e., anyone who does not believe in superintelligence.
I think people are aware that this makes longtermism much less attractive; I typically don't see x-risk work being motivated in this more assumption-heavy way. And, as Thorstad usefully points out, there is virtually no serious expected-value (EV) calculus for longtermist interventions that does a decent job of accounting for these complexities. That's a shame, because EA at least originally seemed very diligent about providing explicit, high-quality EV models instead of going by vibes and philosophical argument alone.
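To make the point concrete, here is a minimal toy sketch in Python of how stacking the second argument's extra assumptions deflates the EV of an x-risk intervention. Every number is a made-up placeholder of my own, not an estimate from the article or from Thorstad:

```python
# Toy EV sketch. All numbers are illustrative placeholders only.

future_people = 1e15        # assumed number of future people if things go well
value_per_person = 1.0      # value units per positive life
risk_reduction = 1e-6       # assumed absolute x-risk reduction from an intervention

# Original argument: EV = (risk reduced) * (value of the future)
ev_simple = risk_reduction * future_people * value_per_person

# Assumption-heavy argument: the payoff also requires that the risky tech
# is wielded well enough to eliminate all other risk (P2), and that the
# enormous future actually materializes (P3).
p_tech_wielded_well = 0.01  # placeholder probability for P2
p_huge_future = 0.05        # placeholder probability for P3
ev_heavy = (risk_reduction * p_tech_wielded_well * p_huge_future
            * future_people * value_per_person)

print(f"simple EV:           {ev_simple:.3g}")  # 1e+09
print(f"assumption-heavy EV: {ev_heavy:.3g}")   # 5e+05
```

With these placeholder probabilities, the assumption-heavy EV comes out roughly three orders of magnitude below the simple one, which is the sense in which the argument becomes much less attractive to anyone who doesn't buy P2.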