I wanted to write briefly to say a sincere thanks for writing this. I wrote a previous EA Forum piece on kidney donation (which I think was linked here), have worked on the issue for a number of years, and answered some questions about kidney donation for Scott Alexander (whose post I believe prompted this one). I haven’t been working on kidney donation for my day job for a while, and I wish I had the time to comment on this piece more thoughtfully. But I did just want to chime in and say I really appreciate your writing this. I don’t agree with all the points, but I believe many of them are cogent and worth serious consideration by people exploring kidney donation. I know it’s not easy to publicly critique arguments made by influential people in the EA community; I think doing so is very valuable, and you should be commended for it.
joshcmorrison
I’ve really liked the EA Forum summary bot, and it’d be cool if that could be used here (or just be a standard thing for any post beyond a certain length).
Why 1Day Sooner Needs EAs to Sign Up for Hep C Challenge Studies
Help Needed in Push for a Rapid Malaria Vaccine Rollout
Do you think that if GiveWell hadn’t recommended bednets/effective altruists hadn’t endorsed bednets it would have led to more investment in vaccine development/gene drives etc.? That doesn’t seem intuitive to me.
To me, GiveWell met a particular demand: charitable donations that would have reliably high marginal impact. Or, to be more precise, charitable donations recommended by an entity that made a good-faith effort, without obvious mistakes, to find the donation with the highest reliable marginal impact. Scientific research does not have that structure, since its outcomes are unpredictable.
Maybe I’m misunderstanding your point, but the two malaria vaccines that were recently approved (RTS,S and R21/Matrix-M) are not mRNA vaccines. They’re both protein-based.
If anyone’s interested, 1Day’s hosting a brainstorming session this Friday at noon Eastern to share information and identify possible tactics for accelerating the R21 rollout. Message me or reply here if you’re interested in joining or want to be kept updated.
But then what’s the path to creating a sense of urgency?
I can kind of picture a celebrity fundraiser drawing attention and the money purchasing speed, but I don’t have a great vision of what an advocacy campaign that changes the distribution timeline might look like. I don’t understand the decision-making structure we’d be trying to influence very well, though, so I’m very open to alternatives.
I wonder if a Hollywood fundraiser for this could be interesting to try (some contacts to try would be Mike Schur, Damon Lindelof, and Mr Beast). If that were attractive, it would be good to tie it to a GiveWell estimate if possible.
Agree with this, and also with the point below that the EA angle is kind of too complicated to be super compelling for a broad audience. I thought this New Yorker piece’s discussion (which involved EA a decent amount, in a way I thought was quite fair: https://www.newyorker.com/magazine/2023/10/02/inside-sam-bankman-frieds-family-bubble) might give a sense of magnitude (though the New Yorker audience is going to be more interested in these sorts of nuances than most).
The other factors I think are: 1. to what extent there are vivid new tidbits or revelations in Lewis’s book that relate to EA and 2. the drama around Caroline Ellison and other witnesses at trial and the extent to which that is connected to EA; my guess is the drama around the cooperating witnesses will seem very interesting on a human level, though I don’t necessarily think that will point towards the effective altruism community specifically.
Yeah, I should have clarified that I knew you’re not a native speaker and understand why that motivates your argument, but the harm of being exclusionary stems in part from the fact that not every reader will know that. (Though I think even if every reader did know you were a non-native speaker, it would still create a negative effect via this exclusionary channel, albeit a smaller one.)
Also, I didn’t take your claim to be “investigations should not take place only in cases where their results will be made public” (which seems to be the implication of your reply above, but maybe I’m misunderstanding). I don’t think “public exposés are useful” implies that you necessarily need to conduct the work required for a public exposé in every case where you suspect wrongdoing.
Should also say, as your friend, that I recognize it sucks to be criticized, especially when it feels like a group pile-on, and I appreciate your making controversial claims even if I don’t agree with them.
Linch, I’m surprised you felt like titotal wasn’t reading your comment properly, since I think they make a version of the basically right argument here, which centers on deterrence and the benefits of public knowledge of wrongdoing outside the specific case. Any sort of investigatory/punitive process (e.g. in most legal contexts) will often have resources devoted to it that are very significant compared to the potential wrongdoing actually being investigated. But having a system that reliably identifies wrongdoing is quite valuable (and even a patchwork system is probably also quite valuable). Plus there are a whole bunch of diffuse positive externalities to information (e.g. not requiring each actor in the system to spend the effort making a private judgment that has a decent chance of being wrong).
I think the broader problem with your argument here is that it’s an example of consequentialism struggling to deal with collective action problems/the value of institutions. The idea that all acts can be cashed out into utility (i.e. “world is burning” above) struggles to engage with cases where broader institutions are necessary for an ecosystem to function. To use an example from outside this case: if one evaluates public statements on their individual utility (rather than their descriptive accuracy), it can stymie free inquiry and lead to poorer decision-making. (I’m not saying this can never be accounted for within a consequentialist or primarily consequentialist theory, but I think it’s a persistent and difficult problem.)
I think “you didn’t seem to read my comment, which frustrates me” is a better thing to say to someone than “are you a native English speaker?”, since it gets at the problem more directly and isn’t exclusionary to non-native speakers (which is rude, even if that’s not the intention). I also think the instant case should give pause about the way you’re attempting to deal with bad-faith critics, since mentally labeling a critic as poorly comprehending or acting in bad faith can be a subconscious crutch that leads you to miss the thrust of their argument.
EA isn’t unitary, so people should individually just try cooperating with them on stuff and being like “actually you’re right, and AIs not being racist is important,” or should try to make inroads on the actors’ strike/writers’ strike AI issues. Generally saying “hey, I think you are right” is usually fairly ingratiating.
For what it’s worth, a friend of mine had an idea to do Harberger taxes on AI frontier models, which I thought was cool and is a place where you might be able to find common ground with more leftist perspectives on AI.
This is really interesting. Thanks for sharing!
I think:
1. If you have a lot of influence, articles like this are inevitable.
2. EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals); that’s where the most criticism is coming from. From my perspective, their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people, it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
3. I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI that is so rapidly moving and where the technology and social consequences are likely to be far different in crucial respects from earlier predictions of them (even if the predictions are mostly true; this is a very hard dynamic to manage).
4. The article underscores the dangers to a movement so deeply connected to one foundation, and I expect we’ll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
5. EA is also vulnerable to criticism as an elitist movement, and its interconnection with the AI industry will make it seem biased.
6. EA is not a unitary actor, and EAs will often have opposing views on things. This makes any sort of reputation management quite challenging.
7. The most natural precedent for EA is the Freemasons, and people hated them.
Thanks for writing this! Some quick thoughts on possibilities for CEA to consider:
Moving to a Membership Model: I think Open Phil’s status as the main customer of CEA (raised above) is a problem, and a move to CEA as a membership organization (with a board elected by the membership) could help with this. Membership could be open to anyone who provides evidence of giving >5% of their money to charity (maybe excluding other religious groups) and who chooses to register as a member. (You could also create some sort of application process for people outside the 5% donors; that number just seems to be a useful commitment mechanism.)
Rotating Annual Presidents: One way to get broader buy-in and legitimacy would be to do what professional societies do and have the public face of the organization (the president) rotate each year (or on some regular basis) and then have an executive director who manages the organization’s operations. This could also help organize how CEA’s board should function (since often professional societies structure their board around the transition from past to future presidents, where the board is made up of next year’s president, the current president, the past year’s president, and a few other potential candidates for the next year’s president).
Dissociate from FTX: It would probably be good for people who worked at FTX/the FTX Foundation to leave the EV/CEA board prior to the Sam Bankman-Fried trial.
Also, a direction for CEA that would interest me would be to search for, evaluate, and highlight historical or current effective altruist projects in the world (i.e. things that are plausibly altruistic, come from outside the “effective altruist” community, and are likely to fall within 1/10th of the GiveWell bar).
Will flag that I think EA should move towards a much more decentralized community/community-building apparatus (e.g. split up EV into separate nonprofits that may contract with the same entity for certain back-office functions). I also think EA community building should be cause-neutral/individual-centric and not community/cause-centric (i.e. support people who want to be effectively altruistic in their attempt to live a meaningful life, rather than drive energy towards effective causes). I think the attempt to be sort of utilitarian all the way down and use the community-building arm to drive towards the most effective goals creates harmful epistemic and political dynamics; a more neutral and member-empowering approach would be better.
Air Safety to Combat Global Catastrophic Biorisks [REVISED]
Indoor air quality and the frontiers of advance market commitments
Thanks, Howie, for posting this. Glad to see an experienced and trustworthy hand at the wheel during a difficult time.
A bleg I have would be for some EA with a bit of time on their hands to take a look at the publicly available UK Charity Commission inquiry reports to see what % result in regulatory action (and/or findings of wrongdoing), as well as other useful details, as precedent. I think this would be helpful in giving a sense of what to expect for EV UK going forward and what steps should be taken in advance. Based on my very quick and rough perusal of the first five reports listed on the site, it looks like all five inquiries identified misconduct and resulted in regulatory action.
It looks like the Commission does have an ability not to publish finished reports, so it’s possible those are an unrepresentative sample of inquiries, but (on a very very preliminary glance) the outlook does not seem especially promising.
Thanks for writing this post! I think it’s thoughtful and well-reasoned, and I think public criticism of OP (and of leading institutions in effective altruism generally) is good and undersupplied, so I feel like this writeup is commendable. I work at a global health nonprofit funded by OP, so I should say I’m strongly biased against moving lots of the money to animal welfare.
An argument I’ve heard in the past (not the point of your post, I know) is that because humans (often) eat factory-farmed animals, expanding human lifespans is net negative from a welfarist perspective (because it increases the net amount of suffering in the world). 1. Is this argument implausible (i.e. is there a good way to disprove it)? And 2. If the argument were true, would it imply OP should not fund global health work at all (or restrict it very seriously)?