I found this clear and reassuring. Thank you for sharing
Sanjay
I read that critique with hope, but ultimately I found it largely unconvincing.
I’m very surprised by the claim that mosquito nets keep their beneficiaries in poverty. Mosquito nets are not trying to lift people out of poverty, and yet there is some evidence that they help lift people out of poverty to some extent. I really don’t understand how distributing nets can keep people in poverty.
Kalulu says:
if you randomly asked one of the people who themselves live in abject poverty, there is no chance that they will mention one of EA’s supported “effective” charities, as having impacted their lives more than the work of traditional global antipoverty agencies. No. That’s out of question.
To be honest, if you asked someone who had received $1000 from GiveDirectly whether it impacted their lives, I’m pretty confident they would say a hearty yes. It also allows the lived experiences of the poor to dictate what happens to the money—something which Kalulu demands.
GiveWell believes that all of the GiveWell Top Charities outperform GiveDirectly, and I think this is correct, unless you place an unusually low amount of value on saving a life. Again, GiveWell have checked whether they are imposing a Western perspective with this moral weight judgement: they have surveyed people in Africa on this question.
One area where I do agree to some extent: I think it would be good if more people from the populations which benefit from these interventions actually worked at GiveWell. I have certainly had conversations with GiveWell where we have discussed the details of models and I have invoked lived experience of spending time among the global poor, and I got the impression that GiveWell could have benefited from having more of this perspective.
Overall, I still don’t feel we need to galvanise action to improve the situation.
Some might be sceptical of a critique which could be paraphrased as: “EA is getting it wrong because it should be funding NGOs which are run by people who have lived experience of being ultra poor. By the way I have lived experience of being ultra poor and I run an NGO.” I don’t think you need to invoke this scepticism in order to find this critique unconvincing.
This is some of the finest writing I’ve seen on AI alignment which both (a) covers technical content, and (b) is accessible to a non-technical audience.
I particularly liked the fact that the content was opinionated; I think it’s easier to engage with content when the author takes a stance rather than just hedges their bets throughout.
David Moss and I recently conducted a study with about 500 participants looking at the extent to which people place moral weight on the far future.
The study found that older people give much less moral weight to the future.
The study included the following questions:
Is it better to save (A) 1 person now or (B) 1/2/1,000/1,000,000 people 500 years from now? (This is 4 different questions, one after the other, with differing numbers of people stated in option (B))
How far do you disagree or agree (on a 7-point scale) that:
“Future generations of people, who have not been born yet, are equal in moral importance to people who are already alive”
“We should morally prioritise helping people who are in need now, relative to those who have yet to be born”
There was a period around 2016-18 when I took this idea very seriously.
This led to probably around 1 year’s worth of effort spent on seeking funds from sources who didn’t understand why tackling EA issues was so important. This was mostly a waste of my time and theirs.
The formula isn’t just:
Impact of taking money from a high-impact funder = impact you achieve minus impact achieved by what they would have funded otherwise
Instead it’s:
Impact of taking money from a high-impact funder = impact you achieve minus impact achieved by what they would have funded otherwise plus the amount of extra work you get done by not having to spend time seeking funding
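The two formulas above can be sketched out as follows. All the numbers here are hypothetical, chosen only to show how the time-saved term changes the answer:

```python
# Illustrative sketch of the funding-counterfactual formulas above.
# "Units of impact" are hypothetical; the point is the structure, not the numbers.

def naive_net_impact(your_impact, funders_counterfactual_impact):
    """The incomplete formula: ignores the time saved on fundraising."""
    return your_impact - funders_counterfactual_impact

def net_impact(your_impact, funders_counterfactual_impact, impact_of_time_saved):
    """The fuller formula: also credits the extra work you get done
    by not having to spend time seeking funding elsewhere."""
    return your_impact - funders_counterfactual_impact + impact_of_time_saved

print(naive_net_impact(100, 90))   # 10
print(net_impact(100, 90, 25))     # 35
```

With a high-impact funder the first two terms can nearly cancel, so the time-saved term can dominate the comparison.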
Heartily agree with this.
For the pilot of SoGive Grants, we plan to
(1) Provide feedback as much as we can (the only reason we haven’t promised to give feedback to everyone is that this is a pilot and we don’t know whether that’s feasible for us)
(2) Use an application form that is almost a copy-and-paste of the EA Funds application form, to make life easier for those who are applying to both
(BTW applications are still open and close on 22nd May)
I also want to echo pretty much every bullet point that Luke made about the value of feedback, which I think are excellent points.
Seems this is a good place to mention the EA Good Governance project.
This is useful to share, thank you.
I think it would be good if:
you shared with grant recipients which tier you think they are in (maybe you’ve already done this, but if you haven’t, I think they would find it useful feedback)
If anyone is in tier 4 and willing to have it publicly shared that they are in that tier, I think the community would find it useful
I appreciate that many people would dislike the idea of it being public that there are three tiers higher than them, but some EA org leaders are very community-spirited and might be OK with this.
I see some disagree votes on Ted’s comment. My guess at what they mean:
“Ted, please don’t be put off, Eliezer is being unnecessarily unkind. Your post was a useful contribution”.
A significant amount of your effort and the focus of the EA movement as a whole is on longtermism. Can you steelman arguments for why this might be a bad idea?
What a beautiful idea! De-escalating the political campaigning spend arms race and redirecting the money to high-impact charity sounds lovely! I have some thoughts, not all encouraging.
(1) I suspect your platform might not actually generate many donations
Getting donors to actually navigate to a donation platform is notoriously hard.
My intuition says that the idea is cute enough that it will get some attention (including, perhaps, from the press) but not enough to move lots of money.
However that’s just my intuition. Don’t trust it. A better guide than my intuition is if you can find a constituency who is willing to promote your concept, and who has influence over political funders. Alternatively, if you have evidence (perhaps conduct some primary research, if necessary?) that people with opposing political views often talk to each other and lament the fact that they throw so much money away in a futile manner, then maybe some press attention could spark something.
(2) To justify your spend, you probably want to generate >$1m in the near to mid term
As a rough rule of thumb, fundraising spend should generate c4x as much as the fundraising cost itself. So if you’re going to spend $250k, then you want to generate c$1m to justify the investment.
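The rule of thumb above can be made concrete with a few lines of code. The 4x multiple is the rough heuristic stated in the text, not a precise figure:

```python
# Rough rule of thumb from the text: fundraising spend should generate
# about 4x its cost to justify the business risk taken.

FUNDRAISING_MULTIPLE = 4  # assumed rule-of-thumb multiplier

def donation_target(fundraising_spend):
    """Donations you should aim to generate to justify a given spend."""
    return fundraising_spend * FUNDRAISING_MULTIPLE

print(donation_target(250_000))  # 1000000
```

So a $250k platform build implies a c.$1m donations target, as stated above; if you thought the displaced campaign spend was itself beneficial, the multiple should be higher still.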
This is because you should get some reward for taking business risk.
If you believed that the political campaigning spend has some positive benefits (e.g. spreading useful information, or maybe you think that political engagement is an intrinsic good) then your threshold should be higher.
However you probably don’t believe this, and given the amount of money spent on political campaigning, I think I agree.
If you believed that the campaign spend is actually harmful, then you could justify a lower target. However note that this would be a fairly convenient belief for you to have, so aim to have really good evidence before even considering this.
(3) Find ways to lower your costs, e.g. through collaboration
If my guesses are right, you have a problem: you need to generate c$1m of donations, and I don’t think you will. So to help resolve this...
… I question the value of building your own donation platform.
There is already a plethora of donation platforms that have already spent c.$250k creating a platform. Collaborating with them could
lower your costs (and hence lower the $1m target)
allow you to expend more effort on getting donors and spreading your message
Downsides are:
you would probably have to accept some compromises about the nature of the donation platform
After all, if it hasn’t been designed with your needs in mind, it probably won’t be perfect.
However I expect that your project probably will achieve more impact through getting people to think about and talk about the problem, and less through the actual donations raised. If my expectations are right, then compromises on the details of the platform are OK.
Groups you could collaborate with:
SoGive runs a donation platform (Full disclosure: I founded and run SoGive)
Momentum might be a good fit for you (I can intro you if you wish)
(4) You want to “nudge” users to an apolitical, high-impact charity, such as AMF.
We at SoGive have seen some donors interact with this sort of campaign in the past. I suggest that you want to take the following approach:
As far as your donors are concerned, the money is going “to charity”, which means that they aren’t thinking too much about what that charity is; they will just assume that whatever it is, it’s good
You need to avoid anything political, because that would distract from the message. So no veterans charities, no climate change, nothing obviously political
Because your donors aren’t thinking about what the charity is, suggesting something like AMF will work just fine. Feel free to include something on your website explaining the rationale (e.g. “careful analysis, bang for buck, etc etc”). Not many people will read it.
I also suggest making this a “nudge”; i.e. allow users to donate to any charity, but make AMF the default. Not many users will depart from the default.
Good luck, and let me know if you want to talk further!
I’m unclear on the proposal here. I’ve taken your bit in italics and adapted it to the EA context:
For three months after an EAG(x) or EA retreat, and for one month after an evening event, community organisers who organised the event, or speakers/organisers at the conference/retreat are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand.
Is this what you had in mind? This would mean:
If an organiser of a local community organises monthly events, they wouldn’t be able to date any regular attendee of those events
People who were organising an EAG in a low-key, not-visible way would be forbidden from dating an attendee, or we would need to define a bar for visibility
Conference attendees are not prohibited from hitting on other attendees (at least not according to this specific rule)
Overall, I’d find it much easier to work out whether this is a useful proposal if I were clearer on what is being proposed.
At a time when the community has gone through so much, it’s hard to hear this.
I confess there’s a part of me which wants to disengage from this. I’m tired of worrying about whether EA culture has a problem with fraud, racism, or other things that I find offensive.
But I shouldn’t disengage.
Just because my emotional energies have been sapped by previous dramas, it doesn’t reduce the suffering experienced by victims of sexual abuse.
So first I’m going to say something which I think is obvious and uncontroversial to everyone:
Sexual abuse and harassment are wrong, and should not happen.
Secondly, I hereby take this pledge:
---
A pledge of solidarity to those who have suffered from sexual harassment or abuse
If you are upset or suffering because you have been abused or harassed, and you disclose this to me, I pledge to do the following:
I will listen and provide you with emotional support—if you’re upset, your distress will be my first priority at the outset.
I will not ask you questions to try to work out whether you are telling the truth. I would much rather trust and provide emotional support to someone who later turns out to have been lying than to question—even subtly—the legitimacy of someone who has suffered sexual abuse.
I will support you to work out the most appropriate next steps. I recognise that choices about your next steps may be complex, and I will not try to rob you of agency as you work out the best way forward.
---
In the spirit of the second bullet point of my pledge, I haven’t done any work to assess the truth or otherwise of the claims in this article. And I didn’t need to in order to feel disturbed by it.
I also don’t claim to be the best standard-bearer of opposing sexual abuse and harassment—I don’t consider myself one of the top EA leaders, and I have no direct experience of having been a victim of sexual abuse. I’m simply one person (out of many, I believe) who thinks that EA should be deeply opposed to sexual abuse and harassment.
I haven’t received my copy yet, so how do we know that they are not, in fact, the same book?
Humanity enters a consumerist phase (the industrial revolution), becomes bloated, enters a cocoon (the long reflection) and emerges as a beautiful butterfly (a flourishing future for humanity).
[Epistemic status: I started this comment thinking it was a joke, now I don’t even know!]
SoGive piloted charity gift cards some years ago.
Our charity donations product worked as follows:
The gift giver loaded up the gift recipient’s account with however much money they liked
The gift recipient could choose to donate to any charity
The user journey “nudged” donors to high impact charities: the front page showed SoGive Gold-rated charities (which at the time perfectly aligned with GiveWell-recommended charities). To get to another charity required an extra click. The front page explained that those charities were there because careful research had shown these charities to be higher impact.
The successful bits:
Our user testing suggested the nudge was largely successful: users largely wanted to complete the process as quickly as possible and were happy to accept the research done by others
The less successful bits
Donor acquisition costs (i.e. the costs of online advertising to get users) were higher than the donations generated
We then experimented with the model. We tried a different product where the gift recipient receives 50% charity donation, 50% Amazon gift voucher. This was more successful, in that the amount of charity donations generated at least exceeded the advertising costs. However this was not sufficient—we had set a more demanding goal than this, and it did not reach that target.
We did not target the EA community, as we were aiming for impact, and didn’t want to target users whose counterfactuals involved donating to high impact charities anyway.
I think this does a good job of describing the problem.
The solution is hard. I’ve certainly found myself getting sucked into reading EA Forum posts about community topics and felt that my time was used poorly.
On the other hand, some of the posts were really valuable (George’s post on big-spending EA and some of the posts written in the aftermath of the FTX crisis spring to mind).
I think that means I want a UX which does allow me to see community posts, but somehow gives posts which have more substantive/subject-matter content more prominence.
I’m really very unclear about exactly what this looks like, which is why this seems hard.
Posts can achieve goals other than advancing the discourse, and I’m OK with that.
Great that you’ve looked into this Akhil! Speaking as someone with a wife and daughter (and a mother, and other female family members, and female friends...) this is close to my heart.
A key problem with all of these is how to assess effectiveness. IPV typically occurs behind closed doors, which makes it hard to know what’s really happening.
Largely because of these considerations, I predict that on further analysis, I will probably be less positive than you.
While this sounds consistent with a generalised GiveWellian sceptical prior, I say this with some sadness, because I would very much like reducing VAWG to be a high impact cause area.
Also, thank you for asking me for comments before publishing.
---
My main reason for being more pessimistic than you is that your internal and external validity adjustments seem very generous:
Source: your model
For brevity, I’ll focus on Community based social empowerment, since it’s the one you’re most positive about.
You have a 95% internal validity (aka replicability) adjustment and a 90% external validity (aka generalisability) adjustment[1]. I’d consider these numbers to be high (i.e. more prone to lead to generous cost-effectiveness evaluations)[2].
Your model’s 95% internal validity adjustment is the same internal validity adjustment that GiveWell uses for bednets. For comparison…
… malaria nets do merit a 95% internal validity adjustment. We have seen plenty of positive evidence for the effectiveness of bednets, and I’m told that there is so much evidence that it’s difficult to get ethics approval for more RCTs because ethics boards argue that it’s unethical to do studies with controls on something that is such a robustly proven intervention.
… cash transfers do merit a 95% internal validity adjustment. They are a robustly effective way of reducing poverty.
… Community Based Social Empowerment does not merit a 95% internal validity adjustment, in my view. Gathering this sort of evidence from surveys is very difficult, and I’d be surprised if the protocols are robust enough to give us the same confidence we have about the effect of malaria nets on mortality (deaths are relatively easy to count).
I also suspect the external validity adjustment is too generous. The intervention relies heavily on cultural context; several GiveWell external adjustments are high too, but human bodies are pretty consistent from one place to the next, whereas cultures vary a lot with geography.
Therefore I predict that:
in 90% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments lower than yours (i.e. lower than 95% and 90%).
in 50% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments substantially lower than yours (i.e. lower than 50%).
In summary, I think there’s a 75% chance that we conclude with a >2x worse cost-effectiveness than you, and a 25% chance that we conclude with a >4x worse cost-effectiveness than you for Community Based Social Empowerment.
This would be unlikely to be at the levels of cost-effectiveness where we would deem the intervention high impact.
I haven’t thought enough about the other interventions apart from Self-defence (IMPower, which has been done by No Means No). As Matt has alluded to, SoGive has done some work on this topic, and received some information which is not in the public domain. I can’t say too much about this, but I can discuss privately and guide you to the relevant researchers. SoGive’s plans are to press for permission to publish on this, and finalise within the next few months.
---
For clarity, I’ve alluded to SoGive in this comment, but this is not an official SoGive comment. Content written in a SoGive capacity has to go through a certain level of review which has not happened here, so this is written in a personal capacity.
- ^
For those less familiar with these models, they are applied in a straightforward, intuitive way. It’s roughly equivalent to (Step 1) Calculate the benefit assuming full trust in the evidence; (Step 2) Multiply the benefit by the validity adjustments; (Step 3) divide by costs.
- ^
For those who want access to data to help them form their own view on whether these adjustments are high or not: In SoGive, we have pulled together a spreadsheet with GiveWell’s internal and external validity adjustments (we’re supposed to also add in SoGive’s own adjustments at the bottom, not just GiveWell’s, but have been less diligent at doing that). It’s meant to be a (not-rigorously vetted) internal resource, but I’m sharing it here in case it helps. It’s also probably a couple of years out of date now, but from memory I don’t think there are changes material enough to matter in the last couple of years.
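The three-step model described in the first footnote can be sketched as follows. The 95%/90% adjustments mirror the model being critiqued; the benefit and cost figures are hypothetical:

```python
# A minimal sketch of the cost-effectiveness model from the footnote:
# (1) calculate the benefit assuming full trust in the evidence;
# (2) multiply by the validity adjustments; (3) divide by costs.
# The raw_benefit and cost figures below are hypothetical.

def adjusted_cost_effectiveness(raw_benefit, internal_validity, external_validity, cost):
    adjusted_benefit = raw_benefit * internal_validity * external_validity
    return adjusted_benefit / cost

# With the generous adjustments used in the model under discussion:
generous = adjusted_cost_effectiveness(1000, 0.95, 0.90, 100)
# With more sceptical adjustments, cost-effectiveness falls in proportion:
sceptical = adjusted_cost_effectiveness(1000, 0.50, 0.50, 100)

print(generous, sceptical)
print(generous / sceptical)
```

Because the adjustments enter multiplicatively, moving from 95%x90% to 50%x50% worsens the bottom-line cost-effectiveness by a factor of more than 3, which is why the adjustments matter so much to the conclusion.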
I believe that in time EA research/analysis orgs both could and should spend > $100m pa.
There are many non-EA orgs whose staff largely sit at a desk, and who spend >$100m, and I believe an EA org could too.
Let’s consider one example. Standard & Poors (S&P) spent c.$3.8bn in 2020 (source: 2020 accounts). They produce ratings on companies, governments, etc. These ratings help answer the question: “if I lend the company money, will I get my money back”. Most major companies have a rating with S&P. (S&P also does other things like indices, however I’m sure the ratings bit alone spends >$100m p.a.)
S&P for charities?
Currently, very few analytical orgs in the EA space aim to have as broad a coverage of charities as S&P does of companies/governments/etc.
However an org which did this would have significant benefits.
It would have broader appeal because it would be useful to so many more people; it could conceivably achieve the level of brand recognition achieved by charity evaluators such as Charity Navigator, which have high levels of brand recognition in the US (c50% with a bit of rounding).
Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
There’s also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact.
I find these arguments convincing enough that I founded an organisation (SoGive) to implement them.
At the margin, GiveWell is likely more cost-effective, however I’d allude to Ben’s comments about cost-effectiveness x scale in a separate comment.
S&P for companies’ impact?
Human activity, as measured by GDP (for all that measure has flaws) is split roughly 60%(ish) by for-profit companies, 30%(ish) by governments and a little bit from other things (like charities).
As I have argued elsewhere, EA has likely neglected the 60% of human activity, and should be investing more in helping companies to have more positive impact (or avoiding their negative impact).
The charity CDP spent £16.5m (c.$23m) in the year to March 2019 (source). They primarily focus on the question of how much carbon emissions are associated with each company. The bigger question of how much overall impact is associated with each company would no doubt require a substantially larger organisation, spending at least an order of magnitude more than the c$23m spent by CDP.
(Note: I haven’t thought very carefully about whether “S&P for companies’ impact” really is a high-impact project)
In a post this long, most people are probably going to find at least one thing they don’t like about it. I’m trying to approach this post as constructively as I can, i.e. “what do I find helpful here?” rather than “how can I most effectively poke holes in this?” I think there’s enough merit in this post that the constructive approach will likely yield something positive for most people as well.