I’m just a normal, functioning member of the human race, and there’s no way anyone can prove otherwise
Matt_Sharp
I think the purpose of the ‘overall karma’ button on comments should be changed.
Currently, it asks ‘how much do you like this overall?’. I think this should be amended to something like ‘how useful or important do you think this is?’.
This is because I think there is still too strong a correlation between ‘liking’ a comment and ‘agreeing’ with it.
For example, in the recent post about Nonlinear, many people are downvoting comments by Kat and Emerson. Given that the post concerns their organisation, their responses should not be at risk of being hidden—their comments should be upvoted because it’s useful/important to recognise their responses, regardless of whether someone likes/agrees with the content.
This is a useful post in terms of US politics.
But to state the obvious: there are other countries, and in some of these countries there may be a much stronger case for EAs to attempt to exercise direct political power.
This is a useful analysis, and I agree that collectively it suggests there has been a negative impact overall.
However, I think you may be overly confident when you say things like “FTX has had an obvious negative impact on the number of donors giving through EA Funds”, and “Pledge data from Giving What We Can shows a clear and dramatic negative impact from FTX”.
The data appears to be consistent with this, but it could be consistent with other explanations (or, more likely, a combination of explanations including FTX). For example, over the past couple of years there has been very high inflation across many countries, and a big drop in the value of many cryptocurrencies. Both might be expected to reduce the number of donors and the amount they donate.
This is a reasonable argument, and seems quite plausible for farmed animals.
I think the biggest uncertainty here—at least in terms of impact on animals—is what each additional human life means for wild animals. If wild animals typically have net negative lives, and more humans reduces the number of wild animals, then perhaps family planning charities aren’t beneficial for animals overall.
This looks like it might be a valuable post. However, it has an estimated reading time of 30 mins. Can I encourage you to add an executive summary? This may encourage greater engagement with your claims.
Yeah, and one thing that often gets lost in the ‘EA now has loads of money’ claim is that it only has a relatively large amount of money compared to a few years ago.
Compared to total global resources, the new money going to EA causes is really rather tiny. There is huge scope to grow and to improve the allocation of resources.
We should be encouraging projects that could bring even more money into the influence of EA thinking.
“It’s clear that many EAs believe that there are population-level differences in average intelligence between ethnic groups”
It is not clear that many EAs believe this, unless you can point to a representative survey that shows otherwise. I would suggest changing ‘many EAs’ to ‘some commenters on the EA Forum’.
From the perspective of total utilitarianism and longtermism, two things will plausibly dominate the direct value of additional near-term people:
The impact of additional near-term people on animal suffering. This seems very likely to be net negative.
The impact of additional near-term people on the risk of extinction/longtermist issues. I’m very uncertain about this. A larger population could result in faster economic growth—which would likely be net positive. But intuitively a population that is growing too quickly—particularly in countries where women would prefer to have fewer children—could also face social issues that contribute to social and political instability (think mass unemployment/poverty of young people, mass migration into other countries), which could have consequences that increase the risk of longterm harm.
While I expect you are correct that violence against women is a much bigger issue than violence against men overall, I would be more convinced if you were able to share some comparative data. The one comparative datapoint we have here, provided by Question Mark, is that men are more likely to be homicide victims.
We’ve seen a lot of criticism on the EA Forum, almost non-stop since the FTX collapse, following a huge increase in criticism due to the red-teaming competition, much of which was highly upvoted.
So I don’t know, I hardly feel like we have a shortage of criticism at the moment.
Agreed.
While much of the FTX criticism/discussion is justified, and the red-teaming competition seems like a (mostly) valuable and healthy undertaking, what I find so motivating about EA is the combination of rigorous criticism alongside the examples of hugely positive impact that can be (and already have been!) achieved. If we forget to at least occasionally celebrate the latter, EA will become too miserable for anyone to want to get involved with.
UK government plan for animal welfare
It was a tough decision, but it turns out I do care more about helping sentient beings in the long term than I do about Wordle and Derry Girls
How did you identify “services that there is a high demand for but not enough supply”? Is it simply based on the “quick look” you did, or is there some other evidence?
The absence of EA services could simply be evidence of sufficient non-EA services, in which case it’s probably worth thinking about the pros and cons of having EA services.
The most obvious justification seems to be to keep money in the community, and/or to provide services at a relative discount.
However, by relying on EA services there is a risk of missing out on the highest quality services that already exist: I can’t think of any particular reason why EAs would necessarily be better than the rest of the world at providing finance, legal, or tech services. Though perhaps in many cases this doesn’t matter—maybe EA orgs merely need ‘adequate’ rather than ‘best’.
This is a useful, concise, and clearly-written overview. Thank you for sharing.
With regards to the self-defence/IMPower program, and in particular as implemented by No Means No Worldwide, there is an additional RCT by Baiocchi and Sarnquist that appears to still be unpublished (the study protocol is here). The authors shared their write-up with me, and it appears to be somewhat more methodologically robust than other studies in this specific context.
Understandably, they asked me not to share any of the findings until the paper is published—the last I heard (June 2022) they had submitted it to a journal for review.
If you haven’t done so, you (or other readers) may wish to contact the authors for further info.
(I very briefly looked into this as part of my previous role as principal analyst at SoGive. I see you’ve been in touch with Sanjay, and have noted in a footnote that SoGive may be publishing something on NMNW, which could potentially include the findings, if the paper is published by that point).
I welcome the footnote setting out the detailed cost calculation.
It is this commitment to rigour and transparency that demonstrates the intellectual and moral superiority of effective altruists compared to other humans, and, indeed, all sentient life.
I didn’t downvote either of your articles on misquoting. Skimming over the first article now, it seems reasonably well argued.
However, I agree with the following points made in this comment (which you also referred to in your second article):
There’s too much to read, so people don’t have time to engage extensively with everything. Try to be succinct.
One of your posts took 22 minutes to say that people shouldn’t misquote. That’s a rather obvious conclusion that could be explained in 3 minutes tops. I think some people read that as a rant.
Use examples (or even stories) showing why the topic is important. It allows you to link your arguments to something that exists.
You can think in purely abstract terms, but most people are not like that. A useful point to keep in mind is that you are not your audience. What works for you doesn’t work for most other people. So adapting to other reasoning styles is useful.
From skimming your first misquoting article, I don’t think you’ve made the case that misquoting is a particular problem within EA. I don’t think there are any examples? In which case, some people might read it, get to the end and think “well that was a waste of 22 minutes and hardly seems relevant to EA, so I’ll downvote it to deter others from spending time reading it”.
Have you read this GiveWell page on bed nets? They state:
There is strong evidence that when large numbers of people use LLINs to protect themselves while sleeping, the burden of malaria can be reduced, resulting in a reduction in child mortality among other benefits.
Insecticide‐treated nets reduce child mortality from all causes by 17% compared to no nets (rate ratio 0.83, 95% CI 0.77 to 0.89; 5 trials, 200,833 participants, high‐certainty evidence). This corresponds to a saving of 5.6 lives (95% CI 3.6 to 7.6) each year for every 1000 children protected with ITNs. Insecticide‐treated nets also reduce the incidence of uncomplicated episodes of Plasmodium falciparum malaria by almost a half (rate ratio 0.55, 95% CI 0.48 to 0.64; 5 trials, 35,551 participants, high‐certainty evidence) and probably reduce the incidence of uncomplicated episodes of Plasmodium vivax malaria (risk ratio (RR) 0.61, 95% CI 0.48 to 0.77; 2 trials, 10,967 participants, moderate‐certainty evidence).
If the nation-level data isn’t supportive of this, then perhaps this is worthy of further investigation to understand why it may be different from the trials.
You seem to acknowledge this by saying ‘Maybe the RCT evidence is so convincing that the noise of country-level data doesn’t matter’ - but if your claim is that there is ‘no evidence of impact’ specifically at the country-level, then I’d encourage you to be clear about this with your heading. The statement that ‘when you try to measure outputs there is no evidence of impact’ doesn’t seem true.
If funds are allocated to future programs (or programs that require a long time to implement), they won’t count as being in the reserves.
April Fool’s day is a time when many individuals and companies choose to play pranks on their friends, family, and clients for a good laugh. While it can be a fun way to break the monotony of daily routines, pranking others can sometimes backfire and cause unintended consequences. This is especially true when it comes to writing a very short post as an April Fool’s day prank.
One of the primary risks involved in writing a very short post as an April Fool’s day prank is the possibility of offending or upsetting someone. If the joke is crafted in a way that targets a particular individual or group, it could be viewed as insensitive, hurtful, or even discriminatory. This could lead to hurt feelings, angry responses, and even legal repercussions in extreme cases.
Another risk associated with writing a very short post as an April Fool’s day prank is the potential for it to be misinterpreted or taken seriously. In today’s age of social media, it can be challenging to discern what is real and what is not, particularly with short posts that lack context or nuance. If someone falls for the joke and shares it with others without realizing it’s a prank, the misinformation can quickly spread and lead to confusion or even panic.
Furthermore, writing a very short post as an April Fool’s day prank can also damage one’s reputation. If the joke is inappropriate, offensive, or causes harm, it can tarnish the image and credibility of the person or company responsible for it. This could lead to a loss in trust, credibility, and even business opportunities, as clients or customers may choose to distance themselves from the offender.
Ultimately, it’s essential to weigh the risks and rewards carefully before deciding to write a very short post as an April Fool’s day prank. While it can be a fun way to engage with others and break up the monotony of daily life, it’s critical to ensure that the joke is harmless, appropriate, and doesn’t cause unintended consequences. If in doubt, it’s always better to err on the side of caution and refrain from attempting a prank altogether.
In conclusion, writing a very short post as an April Fool’s day prank can be a risky endeavor. It can offend, upset, mislead, and damage one’s reputation if not carefully crafted and executed. As such, those who choose to participate in pranking others should take care to consider the potential consequences and risks involved before acting. The responsibility of ensuring that the joke is harmless and appropriate ultimately lies with the prankster, and it’s vital to remember this when attempting to prank others.
This is a very helpful post. I’m surprised the events are so expensive, but the breakdown of costs and explanations makes sense.
That said, this makes me much more skeptical about the value of EAG given the alternative potential uses of funds—even just in terms of other types of events.
As suggested by Ozzie, I’d definitely like to see a comparison with the potential value of smaller events, as well as experimentation.
Spending $2k per person might be good value, but I think we could do better. Perhaps there is an analogy with cash transfers as a benchmark—what event could someone put on if they were just given that money?
For example, with $2k, I expect I could hire a pub in central London for an evening (or maybe a whole day), with perhaps around 100 people attending. So that’s $20 per person, or 1% of the cost of EAG. Would they get as much benefit from attending my event as attending EAG? No, but I’d bet they’d get more than 1% of the benefit.
Now what if 10 or 20 people pooled their $2k per person?
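The back-of-envelope comparison above can be sketched in a few lines (all figures are the illustrative numbers from this comment, not actual EAG or venue costs):

```python
# Illustrative cost comparison: EAG vs a hypothetical self-organised pub event.
# All numbers are assumptions taken from the comment above, not real data.
EAG_COST_PER_PERSON = 2_000   # USD, approximate EAG cost per attendee
PUB_EVENT_COST = 2_000        # USD, assumed cost to hire a central London pub for an evening
PUB_ATTENDEES = 100           # assumed attendance at the pub event

cost_per_attendee = PUB_EVENT_COST / PUB_ATTENDEES         # $20 per person
fraction_of_eag = cost_per_attendee / EAG_COST_PER_PERSON  # 1% of the EAG per-person cost

print(f"Pub event: ${cost_per_attendee:.0f} per attendee "
      f"({fraction_of_eag:.0%} of EAG's per-person cost)")

# Pooling: if 10 or 20 people combined their per-person EAG budget
for people in (10, 20):
    pooled_budget = people * EAG_COST_PER_PERSON
    print(f"{people} people pooling: ${pooled_budget:,} event budget")
```

On these assumed numbers, even a modest pool of per-person budgets buys a substantial event, which is the comparison the cash-transfer benchmark suggests.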