I am looking for work, and welcome suggestions for posts.
Vasco Grilo🔸
The posts' analytics include the mean reading time. Does this refer to "reading time across all reads"/"number of reads", or "total reading time across all views"/"number of views"? I would clarify this in the note, and use the term "mean engagement time" instead of "mean reading time", considering that time spent writing comments is included.
I suggested focussing on karma to measure the impact of events. However, thinking more about it, I would focus more on engagement time. People spending time reading posts and comments, and writing comments, suggests these were worth it even if they do not upvote them. So I would roughly assess the impact of events by estimating how much additional engagement time they caused. This is less than the engagement time across all the events' posts and comments, because these decrease the engagement time across other posts and comments to some extent. You could show which events were happening on the graph below with the total hours of engagement on the EA Forum, and see whether the engagement time usually goes up when there is an event (or for which types of events), as you did for posts related or not to effective giving.
Since then, our usage metrics have been pretty stable, nice!
Nitpick. You included a graph of the monthly active users at the start of the post, but I tend to think the daily hours of engagement are a better metric. It accounts for both the number of active users, and the hours of engagement per active user. In any case, the daily hours of engagement have also stabilised since the middle of October.
The "Copy image" feature below is not working for me. It copies the link of the dashboard to the clipboard instead of the image.
Thanks for the comment, Neel! I would say increasing the impact of donations is also the best strategy to maximise impact for (random) people working in the area they consider most cost-effective:
@Benjamin_Todd thinks "it's defensible to say that the best of all interventions in an area are about 10 times more effective than [as effective as] the mean, and perhaps as much as 100 times".
Donating 10 % more to an organisation 10 to 100 times as cost-effective as one they could join is 10 (= 0.1*10/0.1) to 100 (= 0.1*100/0.1) times as impactful as working there if the alternative hire would be 10 % less impactful.
Thanks for the post, Anthony. Sorry for repeating myself, but I want to make sure I understood the consequences of what you are proposing. Consider these 2 options for what I could do tomorrow:
Torturing my family, and friends, and then killing myself. I would never do this.
Donating 100 $ to the Shrimp Welfare Project (SWP), which I estimate would be as good as averting 6.39 k (= 639/10*100) human-years of disabling pain.
My understanding is that you think it is "irreducibly indeterminate" which of the above is better to increase expected impartial welfare, whereas I believe the 2nd option is clearly better. Did I get the implications of your position right?
Thanks for clarifying, Anthony.
You might think we can precisely estimate the value of these coarse outcomes "better than chance" in some sense (more on this below)
Yes, I do.
You might think we can precisely estimate the value of these coarse outcomes "better than chance" in some sense (more on this below), but at the part of the post you're replying to, I'm just making this more fundamental point: "Since we lack access to possible worlds, our precise guesses don't directly come from our value function, but from some extra model of the hypotheses we're aware of (and unaware of)." Do you agree with that claim?
Yes, I agree.
I don't think anything is "obvious" when making judgments about overall welfare across the cosmos.
I think the vast majority of actions have a probability of being beneficial only slightly above 50 %, as I guess they decrease wild-animal-years, and wild animals have negative lives with a probability slightly above 50 %. However, I would still say there are actions which are robustly beneficial in expectation, such as donating to SWP. It is possible SWP is harmful, but I still think donating to it is robustly better than killing my family, friends, and myself, even in terms of increasing impartial welfare.
I recommend checking out the second post, especially these two sections, for why I don't think this is valid.
Thanks. I will do that.
Thanks for sharing, Rakefet! I think donating more and better is the best strategy to increase impact for the vast majority of people:
Benjamin Todd thinks "it's defensible to say that the best of all interventions in an area are about 10 times more effective than [as effective as] the mean, and perhaps as much as 100 times".
Donating 10 % more to an organisation 10 to 100 times as cost-effective as one they could join is 10 (= 0.1*10/0.1) to 100 (= 0.1*100/0.1) times as impactful as working there if the alternative hire would be 10 % less impactful.
Thanks for the post, Anthony.
Whereas, if you're unaware or only coarsely aware of some possible worlds, how do you tell what tradeoffs you're making? It would be misleading to say we're simply "uncertain" over the possible worlds contained in a given hypothesis, because we haven't spelled out the range of worlds we're uncertain over in the first place. The usual conception of EV is ill-defined under unawareness.
There is an astronomical number of precise outcomes of rolling a die. For example, the die may stop in an astronomical number of precise locations. So there is a sense in which I am "unaware or only coarsely aware" of not only "some possible worlds", but practically all possible worlds. However, I can still precisely estimate the probability of the outcomes of rolling a die. Predictions about impartial welfare will be much less accurate, but still informative, at least negligibly so, as long as they are infinitesimally better than chance. Do you agree?
If not, do you have any views on which of the following I should do tomorrow to increase expected total hedonistic welfare (across all space and time)?
Killing my family, friends, and myself. I would never do this.
Donating 100 $ to the Shrimp Welfare Project (SWP), which I estimate would avert the equivalent of 6.39 k (= 639/10*100) human-years of disabling pain.
It is obvious to me that the 2nd option is much better.
Thanks for sharing, Aidan! Great work.
The Centre for Exploratory Altruism Research (CEARCH) estimated GWWC's marginal multiplier to be 17.6 % (= 2.18*10^6/(12.4*10^6)) of GWWC's multiplier. This suggests GWWC's marginal multiplier from 2023 to 2024 was 1.06 (= 0.176*6), such that donating to GWWC over that period was roughly as cost-effective as donating to GiveWell's top charities. A marginal multiplier of 1 may look bad, but is actually optimal in the sense that GWWC should spend more for a marginal multiplier above 1, and less for one below 1.
You estimate the mean annual donations across 10 % Pledgers by year of pledge, and use this to estimate the value of future pledges. Have you considered controlling not only for the year of the pledge, but also for the year in which the pledge started? I guess pledges starting in later years are less valuable, such that you are overestimating your impact by not controlling for the year the pledge started. To do this, I would:
For each year s when pledges started from 2009 to 2022, run a linear regression of the recorded donations per pledger-year for the pledges started in year s on the year of the pledge y, respecting the equation D(s, y) = a(s) + b(s)*y.
Determine the expected recorded donations by year of the pledge from D(y) = a + b*y, where a and b are the mean intercept and slope weighted by the importance of the year the pledge started.
Here are the formulas:
a = (a(2009)*"importance of 2009" + a(2010)*"importance of 2010" + … + a(2022)*"importance of 2022")/("importance of 2009" + "importance of 2010" + … + "importance of 2022").
b = (b(2009)*"importance of 2009" + b(2010)*"importance of 2010" + … + b(2022)*"importance of 2022")/("importance of 2009" + "importance of 2010" + … + "importance of 2022").
I guess the importance increases linearly with the year the pledge started, and that the importance of 2022 is 10 times that of 2009.
Calculate the expected donations by year of the pledge by multiplying the above by the total donations as a fraction of the recorded donations, which you estimated to be 1.14.
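The steps above could be sketched as follows. The data, the function name, and the specific start years are made up for illustration; only the linear importance assumption (importance rising from 1 in 2009 to 10 in 2022) comes from my suggestion:

```python
import numpy as np

# Made-up data: recorded[s] maps each pledge start year s to (years of the
# pledge y, recorded donations per pledger-year D(s, y)).
recorded = {
    2009: (np.array([1, 2, 3, 4]), np.array([1200, 1150, 1100, 1050])),
    2016: (np.array([1, 2, 3, 4]), np.array([900, 880, 860, 840])),
    2022: (np.array([1, 2, 3]), np.array([700, 690, 680])),
}

# Step 1: for each start year s, fit D(s, y) = a(s) + b(s)*y.
coeffs = {}
for s, (y, d) in recorded.items():
    b_s, a_s = np.polyfit(y, d, 1)  # returns (slope, intercept)
    coeffs[s] = (a_s, b_s)

# Step 2: average the coefficients, weighting each start year by its
# importance, assumed to increase linearly from 1 in 2009 to 10 in 2022.
def importance(s):
    return 1 + 9 * (s - 2009) / (2022 - 2009)

weights = np.array([importance(s) for s in coeffs])
a = np.average([coeffs[s][0] for s in coeffs], weights=weights)
b = np.average([coeffs[s][1] for s in coeffs], weights=weights)

# Step 3: scale recorded donations to total donations using the estimated
# factor of 1.14.
def expected_donations(y):
    return 1.14 * (a + b * y)
```

With real pledger-level data, the per-start-year fits would of course use all start years from 2009 to 2022, not just the three shown here.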
I would be happy to run the regression above if you shared the recorded donations by pledger and year, and the years in which each pledge started. I am also open to making or reviewing other quantitative analyses related to GWWCās work for free.
Have you considered retiring The Trial Pledge? You estimated 96 % of your impact came from The 10 % Pledge.
Which fraction of the people who started The 10 % Pledge until the end of 2024 recorded a donation in 2024? I assume significantly less than 59.4 %, which is your estimate for the fraction of 10 % Pledgers recording a donation in the 1st year of their pledge. So I think the information below on GWWC's website is somewhat misleading. Maybe only around 1/3 (less than 59.4 %) of the members recorded a donation in 2024. In addition, you estimate only 12.3 % (= 1 - 1/1.14) of your impact from 2023 to 2024 came from donations which were not recorded.
Our community includes 9,840 lifetime members pledging ≥10% of their income, plus 1,117 trial pledgers, together making up our 10,957 strong giving community
I had expressed my concerns about the above to GWWC 14 months ago. I sent the email below to @Michael Townsend🔸 (former researcher of GWWC) and @Sjir Hoeijmakers🔸 (former director of research, and current CEO of GWWC), with @GraceAdams🔸 (former and current director of marketing of GWWC) and @Luke Freeman 🔸 (former CEO of GWWC) in CC, on 11 April 2024. Grace, Michael, and @Alana HF (current and former research communicator at GWWC) replied to my email.
Hi Michael and Sjir,
I was surprised to note 35.2 % of GWWC pledgers (including the 2 types of pledgers) had not reported any donation as of 22 February 2023 (the date referring to the data I used in this analysis). Had you realised this? I thought the fraction could decrease substantially if I excluded recent pledgers, who may not have their reported donations up to date, but this barely affected the fraction. 34.4 % of GWWC pledgers who started their pledge before 2022 had not reported any donation.
It is possible the 1/3 of pledgers who have not reported any donation are still making significant donations. However, I have the impression it is not super clear from GWWC's comms and website that only around 2/3 of pledgers have reported more than 0 $ of donations (and around 44 % have reported less than 100 $/year). So I think it is worth clarifying that somehow, and maybe saying something about it on the mistakes page. I guess it would be good to highlight in the website and comms the number of pledgers for which there is decent evidence that the pledge is being fulfilled, which I assume could be operationalised as having enough reported donations relative to the reported income. Even then, it would be worth rechecking recurrent donations, which have been a problem in the past (for large donors).
I would normally draft an EA Forum post about the above, and then share it with you a few weeks before posting so that you could give feedback, and prepare a reply, but I guess it would be better for you to clear up the matter, and for me not to post anything. This is assuming that, conditional on my numbers being right, you think there is any problem in the comms or website as is. If my numbers are right, and you think the comms and website are still fine, then I would still be interested in drafting a post.
As a reality check of my numbers, I went back to the Lorenz curve I estimated for the value generated by each pledge. I have it in my post:
I did not notice it back then, but it is indeed the case that around 40 % of pledgers generate basically 0 value, which is consistent with the numbers I mentioned above. However, both are coming from the same source, so there is a decent chance of any error in one place also being present in the other.

I feel like this email is a bit too negative. Thanks for your great work!
Vasco
Makes sense. I also think tracking the monthly value of posts and comments along the lines I suggested outside events would be useful. CEA's dashboard has the number of posts with at least 2 upvotes excluding self-votes, but this accounts very little for quality (nitpick: I would account for self-votes, as I guess a post with no votes still has some value).
Thanks for the post!
In my last essay, I alluded to the fact that "we all know the vegans are the good guys."
I think decreasing the consumption of animal-based foods is harmful due to effects on wild animals. I estimate School Plates in 2023, and Veganuary in 2024 harmed soil animals 5.75 k and 3.85 k times as much as they benefited farmed animals.
Thanks for the post, Toby!
I think the total karma across posts would be a better metric than the number of posts, or the number of posts with more than 50 karma. It would combine information about quantity and quality, account for the value of posts with less than 51 karma, and account for varying quality among posts with more than 50 karma. If you believe the value of posts increases more (less) than linearly with karma, you can estimate the sum of karma^alpha across posts for alpha higher (lower) than 1. You can also include the value of comments in the same way, estimating it from the sum of k*karma^beta across comments, where k determines the value of comments relative to that of posts, and beta determines how the value of comments increases with karma (maybe it should simply be equal to alpha).
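As a sketch, the proposed metric could be computed like this. The function name and the karma values are made up; alpha, beta, and k are as defined above:

```python
def forum_value(post_karmas, comment_karmas, alpha=1.0, beta=1.0, k=0.1):
    """Sum of karma^alpha across posts plus k*karma^beta across comments.

    alpha (beta) above 1 means the value of posts (comments) increases
    more than linearly with karma; k sets the value of comments relative
    to posts. Negative-karma items are excluded to keep the powers real.
    """
    posts = sum(karma ** alpha for karma in post_karmas if karma > 0)
    comments = sum(k * karma ** beta for karma in comment_karmas if karma > 0)
    return posts + comments

# With made-up karma values and the defaults, this reduces to
# (60 + 10 + 3) + 0.1*(20 + 5) = 75.5.
print(forum_value([60, 10, 3], [20, 5]))
```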
[...]
Sarah estimates that we raised between $74K and $85K for the charities that took part in our election, via platforming them (i.e. without counting the election money).
[...]
Last year we estimated costs at around $30-35K (most staff time). I'd expect this year was similar, if a little less (there were some process improvements from last year, and we benefited from experimentation on what works).
I estimate you raised 93.7 k$ (= (14.2 + 79.5)*10^3) during the last giving season, 14.2 k$ directly, and 79.5 k$ (= (74 + 85)/2*10^3) indirectly. For a cost equal to the lower bound of the giving season of 2023 of 30 k$, your multiplier would be 3.12 (= 93.7*10^3/(30*10^3)) assuming a counterfactual of no impact. So the money you influenced had to be more than 32.1 % (= 1/3.12) as cost-effective as the counterfactual for the giving season to have counterfactually increased impact. I guess the counterfactual would mainly be making worse donations, not donating less.
Retro: Retro AI Welfare Debate Week
This links to a private doc.
Thanks for sharing!
Novel meat-reduction interventions
I think decreasing the consumption of animal-based foods is harmful due to effects on wild animals. I estimate School Plates in 2023, and Veganuary in 2024 harmed soil animals 5.75 k and 3.85 k times as much as they benefited farmed animals. I would be curious to know Faunalyticsā thoughts on this.
Thanks for the post!
My impression is that part of EA's pitch has been that you don't have to do radical things to do radical amounts of good. 10% of your income is something you can survive without, but others will literally die without that money. Over time we've shifted to be a more demanding community, taking us closer to SMA's explicit ideas of Perseverance and Action.
[...]
SMA agrees, and acknowledges that this simply cannot stand. We don't want the world to be like this, so we should take strong action to fix it. In fact, if I Google "quit your bullshit job," the fourth result is Rutger Bregman's article in The Guardian where he promotes Moral Ambition. What he means is that traditional ideas of prestige should be discarded in favor of considering what's actually valuable for society.
I think both EA and SMA underestimate the impact of donating more and better, and that this is the best strategy to maximise impact for the vast majority of people:
Benjamin Todd thinks "it's defensible to say that the best of all interventions in an area are about 10 times more effective than [as effective as] the mean, and perhaps as much as 100 times".
Donating 10 % more to an organisation 10 to 100 times as cost-effective as one they could join is 10 (= 0.1*10/0.1) to 100 (= 0.1*100/0.1) times as impactful as working there if the alternative hire would be 10 % less impactful.
In concrete terms, the protein transition has become one of the biggest cause areas within SMA.
I think decreasing the consumption of animal-based foods is harmful due to effects on wild animals. I estimate School Plates in 2023, and Veganuary in 2024 harmed soil animals 5.75 k and 3.85 k times as much as they benefited farmed animals.
Thanks for all your efforts contributing to a better world, Matthew!
Thanks for the update, Sarah!
As a reminder, you can view our team's half-quarterly OKRs via this public doc that I keep updated. I recently added our Q2.2 plans (May 20 - July 1).
I like this transparency!
Thanks for the great post, VeryJerry! Strongly upvoted. I also like to ground altruism in how one would act behind the veil of ignorance.
Thanks, Michael. For readers' reference, CLR stands for Center on Long-Term Risk.
I would say a 10^-100 chance of 10^100 QALY is as good as 1 QALY. However, even if I thought the risk of human extinction over the next 10 years was 10 % (I guess it is 10^-7), I would not conclude decreasing it would be astronomically cost-effective. One should be scope-sensitive not only to large potential benefits, but also to their small probabilities. Longtermists typically come up with huge amounts of benefits (e.g. 10^50 QALY), and then independently guess a probability which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. 10^40 QALY). Yet, the amount of benefits is not independent of their probability. For reasonable distributions describing the benefits, I think the expected benefits coming from very large benefits will be negligible. For example, if the benefits are described by a power law distribution with tail index alpha > 0, their probability will be proportional to "benefits"^-(1 + alpha), so the expected benefits linked to a given amount of benefits will be proportional to "benefits"*"benefits"^-(1 + alpha) = "benefits"^-alpha. This decreases with benefits, so the expected benefits coming from astronomical benefits will be negligible.
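A minimal numerical sketch of this point, assuming a Pareto distribution with illustrative parameters (tail index alpha = 1.5, minimum benefits of 1), neither of which comes from my comment:

```python
def contribution(b, alpha=1.5, b_min=1.0):
    """Density of expected benefits at b, i.e. b*pdf(b), for a Pareto
    distribution with tail index alpha, whose pdf is
    alpha*b_min^alpha/b^(1 + alpha). So b*pdf(b) is proportional to
    b^-alpha, which decreases with b for alpha > 0."""
    pdf = alpha * b_min ** alpha / b ** (1 + alpha)
    return b * pdf

# The contribution to the expected benefits from benefits around 10^10
# is vastly smaller than from benefits around 10.
print(contribution(1e10) / contribution(10))
```

For alpha = 1.5, the ratio above is (10^10/10)^-1.5 = 10^-13.5, illustrating how little of the expected value comes from astronomical benefits under such a distribution.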
Thanks for clarifying, Joel! That makes a lot of sense.
The posts' analytics include the number of reads. Does this refer to the number of unique users who engaged with the post for at least 30 s uninterruptedly? I would clarify this in the note.