You can give me anonymous feedback here: https://forms.gle/q65w9Nauaw5e7t2Z9
Charles He
As someone dedicated to ultra near termism, I question the alignment of the author.
This post is suspiciously long and erudite. I am skeptical that this output is consistent with the preferences and abilities (constrained by investment) of an ultra-neartermist.
Is this a good faith effort to garner attention to our cause, or is it an attempt to steer our (minute) resources to other causes?
I’ll now elaborate on my concerns, within the necessary constraints:
I think that this, particularly in combination with the top comment by Ryan Carey, gives me a really quite bad vibe.
I think you are interpreting RyanCarey’s comment as silencing of dissent. This seems unfair to me.
I thought RyanCarey’s comment was sort of specifically wincing about people saying specific weird things, like speculating about certain kinds of coordination or associating certain faculty with politics.
Given how snippets can be used unfairly (see /r/sneerclub) and also considering whatever is going on in American politics, this concern seems valid.
The comment seems orthogonal to frowning on dissent about the candidate or supporting elections in general.
I think writing a caution can be difficult. You don’t want to get specific, and sounding overly worried is counterproductive.
Ok, this is a little complicated, but there’s an open PA spot for Liv and Igor.
I’m mentioning this because I guess the idea is that you can have impact by making these people more effective. You can also have influence in more than a nominal way. If you have worked with exec assistants in business, supporting roles often take on a lot of responsibility and have a strong relationship with exec judgment.
Also, there might be big upsides to being in an environment that probably lacks process and might have strong pressure to grow quickly. Or that could be a nightmare.
The ad reads a bit like a personal, personal assistant to these people and not EA work per se.
I guess fit is important, and it helps if you like Liv and Igor.
Thanks to Parth Ahya for originally sharing.
Hey, I got an email with a code and then I entered it.
What does it do? Is there a prize?
This account has some of the densest and most informative writing on the forum; here’s another comment:
(The comment describes CEA in a previous era. It seems the current CEA has different leadership and should be empowered and supported).
It seems many of the downsides of giving feedback would also apply to this.
I think lower resolution feedback introduces new issues too. For example, people might become aware of the schema and over-index on getting a “1. Reject” versus getting a “2. Revise and resubmit”.
A major consideration is that I think some models of very strong projects and founders say that these people wouldn’t be harmed by rejections.
Further considerations related to this (that are a little sensitive) are that there are other ways of getting feedback, and that extremely impactful granting and funding is relationship-based, not based on an instance of one proposal or project. This makes sense once you consider that grantees are EAs and should have very high knowledge of their domains in EA cause areas.
I’m not sure this comment is helping, but I don’t agree with this post.
Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.
Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution; well-aligned, competent grantees are critical. Also, success of the project is only one consideration (deploying money has effects on the EA space and also the outside world, maybe with lasting effects that can be hard to see).
Once you solve the above problems, which benefit from a small number of grant makers, there are classes of projects you can deploy a lot of money into (AMF, big science grants, or CSET).
The above response doesn’t cover all kinds of EA projects, like the development of people, or nascent smaller projects that are important. To address this, outreach is a focus and grant makers are often generous with small grants.
Grant makers aren’t just passively gatekeeping money, just saying yes or no to proposals. There’s an extremely important and demanding role that grant makers perform (that might be unique to EA) where they develop whole new fields and programmes. So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.
The post doesn’t mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many, times. (Of course this can be poorly executed, capture, etc. is possible, but someone I know is perceptive and hasn’t seen evidence of this.)
I’m not sure I’m wording this well, but inferential distance can be vast. I find it difficult to even “see” how better people are better than me. It’s hard to understand this; you sort of have to experience it. To give an analogy, an Elo 1800 chess player can beat me, and an Elo 2400 chess player can beat that person. In turn, an Elo 2800 player can effortlessly beat those people. When being outplayed in this way, communication is literally impossible: I wouldn’t understand what was going on in a game between me and the Elo 1800 player, even if they explained everything, move by move. In the same way, the very best experts in a field have deep and broad understanding, so they can make large, correct inferential leaps very quickly. I think this should be appreciated. I don’t think it’s unreasonable that EA can get the very best experts in the world, and that they have insights like this. This puts constraints on the nature and number of grant makers who need to communicate and coordinate with these experts, and grant makers themselves may have these qualities.
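The Elo analogy above can be made concrete with the standard Elo expected-score formula. This is a minimal sketch; the specific ratings are just the ones used in the analogy, and the function name is mine:

```python
def elo_expected(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 600-point gap leaves the weaker player about a 3% expected score,
# i.e. the stronger player wins essentially every game.
print(round(elo_expected(1800, 2400), 3))
```

Each successive 600-point step (1800 to 2400, 2400 to 3000) implies roughly the same near-total dominance, which is the point of the analogy.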
I think someone might see a large amount of money and see a small amount of people deciding where it goes. They might feel that seems wrong.
But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and have specifically set up these systems. We’ve seen two instances of this now. The reason why there is money at all is because these structures have been set up successfully.
I’m qualified and well positioned to give the perspective above. I’m someone who has benefitted and gotten direct insights from grant makers, and probably seen large funding offered. At the same time, I don’t have this money. Due to the consequences of my actions,
I’ve removed myself from the EA projects gene pool. I’m sort of an EA Darwin award holder. So I have no personal financial/project motivation to defend this thing if I thought it was bad.
In your post, I think your concerns are in two categories:
Issue A. Not tracking the effects on recipients (or more likely, initially trying to track but finding no positive statistical effects and dropping data collection).
Indeed, AMF had a plan to monitor malaria case rates before and after distributions to prove their effectiveness. However, when they actually collected the data they concluded the data was of poor quality and so abandoned this plan...I find this very worrying. Maybe the data was of poor quality, but that is a reason for working harder in this area rather than abandoning it altogether. In general, if we only have poor quality data about malaria in a region, doesn’t that mean we do not know how effective a bednet distribution will be?
Issue B. At the country level (not monitoring recipients of AMF nets but malaria levels in countries), there is no/limited/mixed evidence for malaria reduction:
Taking a step back from the Against Malaria Foundation to look at the malaria problem more generally, there is mixed evidence that bed net distributions reduce malaria case rates. GiveWell has a macro review of the evidence which shows at the nation-level you cannot demonstrate any impact from all malaria control initiatives.
...Malaria rates in Benin, DRC, Ghana, Mali & Sierra Leone increased as net coverage increased, which is more evidence that the malaria data being used is not great. In central Africa malaria was trending downwards before bednet coverage was scaled up, further muddying the waters when trying to measure impact.
“Available data and studies appear to show some cases of apparent malaria control success, and also seem to indicate that the overall burden of malaria in Africa is more likely to be falling than rising. However, in most cases it is difficult to link changes in the burden of malaria to particular malaria control measures, or to malaria control in general, and the data remains quite limited and incomplete, such that we cannot confidently say that the burden of malaria has been falling on average.”
What you wrote is a complete and well-reasoned line of thought from careful study of the AMF website.
However, this is not sufficient evidence for strong updates against AMF.
For me, it’s not even enough evidence that would cause me to investigate this issue further.
The root issue/crux is that the “causal inference”/“causal identification” (the information you can get from the statistics collected here) is very weak, and far from a model of impact or finding the Truth.
Some perspectives:
Issue A: For the first issue, where tracking recipients was ineffective (or, as you suggest and I would also find plausible, they found no statistical effect and then dropped data collection), I don’t know more than what you wrote, but finding no effect is plausible, even common, in highly successful interventions.
The statistical power may be very low. To get intuition for this, remember that a life saved costs $5000 in expectation and a bednet costs ~$2. In some real statistical sense, you literally need thousands of bednets to get an “observation” of a death or life saved. So you may need many, hundreds of thousands, or really millions of bednets to get enough observations for statistical power. But that’s just one layer of the difficulty and assumes perfectly balanced groups of treatment/control, demographics—you may need an order of magnitude more observations to do a proper observational study. Even generously, that’s a large fraction of all the bednets distributed in a year. From this problem alone, my prior would be to find no effect and also I would expect it to impose large operational costs that many donors would find unacceptable (I would).
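To make the power point concrete, here is a rough back-of-the-envelope sketch. The only figures taken from the text are ~$5000 per life saved and ~$2 per bednet (implying roughly one death averted per 2500 nets); the baseline risk is a hypothetical number chosen purely for illustration:

```python
from math import ceil

def n_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    """Two-proportion sample size per arm (normal approximation),
    for 80% power at two-sided alpha = 0.05."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((alpha_z + power_z) ** 2 * var / (p1 - p2) ** 2)

# Hypothetical numbers: assume a 0.4% mortality risk without a net over
# the study period, reduced by one death per 2500 nets (= 2/5000).
baseline_risk = 0.004
risk_reduction = 2 / 5000
n = n_per_arm(baseline_risk, baseline_risk - risk_reduction)
print(n)  # hundreds of thousands of people per arm
```

Even under these idealized assumptions (perfect randomization, one person per net, no attrition), the required sample runs into the hundreds of thousands per arm, consistent with the claim above that you may need a large fraction of a year’s distribution just for statistical power.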
The above implies a pretty clean, controlled test environment, e.g. two villages, one with bednets and one without, or really, two children in the same household, where one gets a bednet and one doesn’t. This isn’t going to happen in the actual program, and the effects are wildly different if not controlled.
Examples of random stories that are going to mess up inference: a principled bednet distributor might give nets to poorer families, families that have sicker children and adults. Since everyone probably knows bednets are effective, wealthier families might get their own (which is good; AMF can give to the really poor), and these wealthy families might get more premium bednets and treatments (e.g. $10 instead of $2), so you don’t have a comparison group.
There are even more pathological stories that mess up your inference: if you were a skilled implementor, working in this program on the ground for many years, and you knew you only had 100 bednets for 1000 people (maybe because the EAs got captured by the AI/futurist memes, which diverted all the billionaire funds), it’s possible that you would know that who gets the bednets is very important, like by a factor of 2 or 4. That is, if you give the right bednets to the right people, you can increase cost effectiveness by 200-400%. By definition, this skill isn’t legible in a survey. So your very skill in giving bednets to the worst-off families, those most afflicted by malaria, means that someone looking at the data will go “hey, when we collect data for recipients of malaria nets, these families don’t look any better off; let’s cancel this.”
Issue B: Cross country effects
This cross-country sort of examination suffers from all of the issues above, but is even weaker. For example, climate trends, poverty, and institutional change are all forces that will mess up results, and even this description is a crude gesture at the realities of what is going on. What about other ways malaria can be contracted, besides sleeping in a bednet-eligible bed?
These confounding effects mean that nation-level studies might never find an effect at all, even with very effective interventions. One new major crux is how much coverage of bednets there is in a country. Again, I don’t know anything about this beyond reading your post, but if bednet coverage is 10% or even 30%, that may not be enough to find an effect even if bednets were 100% effective.
And that’s assuming bednets were 100% effective. If bednets were even 1% effective (which, by the way, would still make them completely worth it and is consistent with the CEA of $5000 per life for a ~$2 bednet), you may never be able to find an effect from an observational study.
Basically, cross country regressions aren’t good without being embedded with a strong model/context and this domain is sort of an “also ran” in economics.
Again, what you wrote is a complete and well-reasoned line of thought from careful study of the AMF website.
You said:
we may be ignoring evidence that the world is more complex than we thought, something which effective altruists ignore at their peril.
Like, to be clear, let’s flip the evidence the other way around:
Imagine someone who came to you for money for a new project or new business. This person didn’t understand the intervention, didn’t understand the country or people. All they present is an argument they read from papers, with just country level observational data, or data from someone who they didn’t know, who collected some data giving nets to families.
If you were being asked to give money to this person, this information would not be enough to trust them (and it may even be wise to distrust them if this was the only argument they were able to present).
I think a downvoter’s view is that:
It packs powerful claims that really need to be unpacked (“unsustainable...massive suffering”) with a backhand against the community (“actually care...claim to”) and extraordinary, vague demands (“large economic transition”), all in a single sentence.
It’s hard to be generous, since it’s so vague. If you tried to riff some “steelman” off it, you could work in almost any argument critical of capitalism or even EA in general, which isn’t a good sign.
I think it is difficult to intervene in an emergency and extremely cost inefficient. Also, the situation has maximum attention.
It is difficult to watch the events and realize many people similar to us will suffer and die.
There were similar posts around the Afghan collapse in 2021, which got some attention, although the suffering was relatively small.
Now, months later, some say we could see the deaths of millions of people.
But this isn’t in the news anymore.
This isn’t directly related, but here is a comment given on the EA subreddit in response to this post. This comment was well upvoted.
This person appears to be a staff engineer and hiring manager at Google, and has worked over 10 years there.
I struggle with this, at least as framed. The problem, right now, is not the lack of qualified causes. The problem is talent-cause-org alignment.
I work for FAANG as a staff software engineer, am EtG and, at the risk of sounding arrogant, am—most would say—objectively good at my job. If I wanted to leave to work on an important cause area, there are so many barriers to that happening.
There’s a great deal of science and energy that has to go into running any organization that many smaller, early stage orgs don’t have expertise in. Running HR, managing people, and understanding how to foster and grow a successful human capital is a really, really hard problem. Every time I interact with folks in EA working at these companies, I see systematic organization management failures at every level.
Many early-stage orgs are unstable with real risk of failure. To accept an offer at one, I would want to interview the founders as much as they would want to interview me. I would want at least 5 hours with a founder. Of course, the problem is that there’s 100s of applicants for any role and there’s a scaling problem: they cannot give me that kind of time. And so, I would fundamentally lack the information that I would need to make a confident decision to walk away from the golden handcuffs.
At least in the US, our government is dysfunctional. I suspect that Social Security will not exist for part of my retirement and our safety net is particularly bad. Any employer must offer me retirement security through retirement investment account funding. The fear of ending up in a warehouse retirement home getting no medical care is very real.
There is a lot going on in this comment. I think the content is important. It talks about institutional competence, culture fit with founders, operations (which might be underrated, and this comment is one reason why: talent can smell org competence), and economic security, issues that seem to apply to rare talent.
Counterintuitively, I see the underlying issues as tending to favor EA. I don’t want to write a giant manifesto about it unless there is demand.
One example is the presence of staff that monitor all interactions in order to enforce certain norms. I’ve heard that they can seem a bit intimidating at times.
I agree that transparency to the public is really lacking. I happen to know there is an internal justification for this opaqueness, but still believe that there are a lot more details they could be making public without jeopardizing their objectives.
The content in this comment seems really false to me, both in the actual statement and in the “color” this comment has. It seems like it could mislead others who are less familiar with actual EAG events and other EA activities.
Below is object level content pushing back on the above thoughts.
Basically, it’s almost physically impossible to monitor a large number of interactions, much less all interactions at EAG:
Most meetings are 1on1s that are privately arranged, and there’s many thousands of these meetings at every conference. Some meetings occur in scheduled events (e.g. speed meetings for people interested in a certain topic).
It’s not possible that CEA staff could physically hover over all in-person meetings, and I don’t think there are enough staff to cover all centrally organized events (trained volunteers are used instead).
Also, if someone tried to eavesdrop in this way, it would be immediately obvious (and seem sort of clownishly absurd).
In all venues, there is “great diversity” of the physical environments people could meet.
This includes large, open standing areas, rooms of small or medium size, booths, courtyards.
This includes the ability to walk the streets surrounding the venue (which can be useful for sensitive conversations).
By the way, providing this diversity is intentionally done by the organizers.
CEA staff do not control/own the conference venue (they rent and deal with venue staff, who generally are present constantly).
It seems absurd to write this, but covert monitoring of private conversations is illegal, and there are literally hundreds of technical people at EA conferences; I don’t think this would go undetected for long.
While less direct, here are anecdotes about EAG or CEA that seem to suggest an open, normal culture, or something:
At one EAGx, the literal conference organizers and leader(s) of the country/city EA group were longtime EAs, who actively expressed dislike of CEA, due to its bad “pre-Dalton era” existence (before 2019)
The fact that they communicated their views openly and still lead an EAGx and enjoy large amounts of CEA funding/support seems healthy and open.
Someone I know has been approached multiple times at EA conferences by people who are basically “intra-EA activists”, for example, who want different financing and organizing structures, and are trying to build momentum.
The way they approached seemed pretty open, e.g. the place they wanted to meet was public and they spoke reasonably loudly and directly
By the way, some of these people are employed by the canonical EA organizations or think tanks; e.g. they literally have physical offices not far from some of the major, major EA figures.
These people shared many details and anecdotes, some of which are hilarious.
Everything about these interactions and the existence of these people suggests openness in EA in general
On various matters, CEA staff don’t agree with other CEA staff, like in all normal, healthy organizations with productive activities and capable staff. The fact that these disputes exist sort of “interrogates the contours” of the culture at CEA and seems healthy.
By its reputation, output, and the quality and character of management and staff, Rethink Priorities seems like an extraordinarily good EA org.
Do you have any insights that explain your success and quality, especially that might inform other organizations or founders?
Alternatively, is your success due to intrinsically high founder quality, which is harder to explain?
The main issue and the reason why I’m commenting is that I’m concerned about the voting patterns.
I’m not sure why this comment is downvoted:
I’m not sure why the top comment is sitting at +1 and has 5 votes.
I’m not sure why an EA CEO has strong upvoted themselves in a thread involving mis/inaction of their org.
So, the “smell” of this voting is sort of intense.
Entirely setting aside this particular event, or the particular people involved, I think it’s reasonable to be concerned about setting examples or norms of behavior that involve control over EA institutions.
Like, funding is growing, and there’s an incentive for would-be “CEOs” or “EDs” to basically take the “outer product” of the set of cause areas and the set of obvious EA meta institutions, pick an element from the resulting matrix, and instantiate it.
In particular, people might do this because inputs/performance are hard to observe for “CEOs”, once started they are hard to dislodge, and in these meta orgs, existence or demand is conflated with the EA brand (allowing failure upwards).
So, in this situation, it’s already “quickdraw”.
So, let’s not add the feature of having constituencies of these warlords, voting on stuff, that situation is No Bueno.
As a Democratic candidate, being “the crypto guy” is a political liability; being tied to EA, not as much
How strongly do you think this was an update in favor of an underlying reality of “EA being easy to present”, instead of “EA getting really good draws” in one campaign?:
It seems like the Politico and WaPo articles were really fair, even good for EA. National press could have ended up being hostile as others were.
A lot of attention was on money and crypto, which take up mindshare/sound bites. Maybe this shielded discussion of more cerebral criticism of “being an EA”.
The campaign manager, much of the staff, and the candidate himself are some of the best talent in EA. I guess they worked hard. It might be hard to see how many fires they put out or issues they smoothed over, and this could be consequential.
If the speculation in this comment is true, it might be hard to tell the difference. This is because these “draws” from this one instance have lasting effects and will set a positive tone for EA for a long time.
I guess one difference is that, if the story in this comment is right, EA can’t be confident that a median quality political effort wouldn’t create issues or erode positive sentiment.
Ok. Lark’s response seems correct.
But surely, the spirit of the original comment is correct too.
No matter which worldview you have, the value of a top leader moving into EA is overwhelmingly larger than the social value of the same leader “rowing” in these companies.
Also, at the risk of getting into politics (and really your standard internet argument), gesturing at the “free market” is really complicated. You don’t need to take the view of Matt Stoller or something to notice that the benefits of these companies could be provided by other actors. The success of these companies, and the resources that allow recruitment with 7-figure campus centres, probably has a root source different from pure social value.
The implication that this statement requires CEA to have a strong model of these companies seems unfair. Several senior EAs, who we won’t consider activists or ideological, have deep experiences in these or similar companies. They have opinions that are consistent with the parent comment’s statement. (Being too explicit here has downsides.)
https://80000hours.org/problem-profiles/
Animal welfare is not even in the first or second tier.
Like, literally nanotech is beating it out, as well as “malevolent actors”, and “improving governance of public goods”.
EAGs can turn into a long weekend of 1on1s for some people.
I think this is probably happening more to senior EAs.
See actual message:
Thoughts:
I guess that senior EAs (gatekeepers) tend to be more locked up in 1on1s. People not in these 1on1s (everyone else at the conference) are in a different pool of “leftovers”. Maybe the “leftovers” are missing both senior people and ambitious junior people. This (completely innocent) process might lead to some sort of “adverse selection”/“attractor state” story, as more people realize that the pool of people has changed. As a result, “speed meetings” might not fix things.
It’s common to advise a new EA to prioritize meeting people over going to events (sometimes the senior people also mention that they don’t attend events at all). This raises issues: if this advice is so dominant, we should signpost it for everyone—but then we need clarity about what the scheduled events are supposed to be.
A valid perspective is that a 1on1 fest is optimal. But if this is true, maybe we can lean into it and set up another format of conference?
You could write a lot about this, but a weekend of 30 minute 1on1s has different value than a meetup. I think the openness, flexibility and dense concentration of people in a conference creates valuable interactions.
Overall, I’m not sure this is a defect, but there is probably something going on related to scaling. There is an opportunity to make things work better.
If you believed some of the above was happening and you wanted to edit EAG to address it, it would require care and moderate actions. Maybe we could nudge people into open events, or develop 1:2 or 1:3 formats.
This takes some coordination, so for next steps, we probably want to hear from others.
This is an extremely high-effort write-up, and you spent a lot of time on something that you really cared about and that could make a difference.
Comments:
For long and potentially highly impactful posts like yours, I think people should consider splitting up posts. You are at the 20,000 word mark and the EA forum reports 100 minute read times. Breaking this down into multiple successive posts can increase engagement without necessarily more writing effort.
While not as knowledgeable as you, in 2019, I participated in organizations related to Extinction Rebellion, to help understand how they can be impactful. Although my knowledge is not deep, it is hard for me to construct very positive views from my experiences. I don’t know how useful it is to write a lot more right now.
I see myself named first in your acknowledgements, which I don’t understand. I am grateful for your efforts, and I care about new ways to be impactful and finding the truth, so I was happy to make some comments. I hope these helped. However, I don’t think I was the highest-effort commenter, and I am definitely not among the most important (Johannes Ackva comes to mind). I’m very happy for a demotion that buries me down somewhere less conspicuous.
The absolute strongest answer to most critiques or problems that have been mentioned recently is strong object-level work.
If EA has the best leaders, the best projects and the most success in executing genuinely altruistic work, especially in a broad range of cause areas, that is a complete and total answer to:
“Too much” spending
billionaire funding/asking people to donate income
most “epistemic issues”, especially with success in multiple cause areas
If we have the world leaders in global health, animal welfare, pandemic prevention, and AI safety, and each says, “Hey, EA has the strongest leaders and its ideas and projects are reliably important and successful”, no one will complain about how many free books are handed out.