I am looking for work, and welcome suggestions for posts.
Vasco Grilo🔸
Thanks for clarifying, Anthony. Do you think it is also irreducibly indeterminate which of the following actions is better to increase impartial welfare neglecting effects after their end?
Torturing a person for 1 min.
Listening to music for 1 min.
I think listening to music would result in more impartial welfare than torturing a person, even considering effects across all space and time. I understand you think it is irreducibly indeterminate which leads to more impartial welfare in this case, but I wonder whether ignoring the effects after the actions have ended would break the indeterminacy for you. If so, for how long would the effects after the actions have to be taken into account for your indeterminacy to come back? Why?
Thanks, Neel. I am assuming the marginal cost-effectiveness of spending on capital and labour is the same. Organisations should move money from the least to the most cost-effective activities until the marginal cost-effectiveness of all activities is equalised. I understand organisations do not manage their resources perfectly. However, for one to argue against my assumption, one would need specific arguments about why, for example, AI safety organisations are under or overspending on compute, or are under or overpaying their employees.
I think the cost per hour of engagement is a good intuitive metric to assess the cost-effectiveness of running the EA Forum. From footnote 1, the daily cost to run the EA Forum is 1.48 k$ (= 1.3*10^6*2.5/6/365.25). There have been around 240 hours of engagement per day over the last 6 months or so. So the cost per hour of engagement has been roughly 6.17 $ (= 1.48*10^3/240). I suspect the engagement time would drop significantly if users had to pay 6.17 $ per hour they spend on the EA Forum. I believe this suggests the marginal cost-effectiveness of running the EA Forum is negative (trusting my guess for the users' revealed preferences), and that there should be a reduction in the time spent running the EA Forum.
The iteration of September 2023 of The Introductory EA Program had 1.58 attendances per hour spent running the program. For 30 $ per hour spent running the program, and 4 h of engagement per attendance (3 h of preparation, plus 1 h of discussion), there would be 6.32 h (= 1.58*4) of engagement per hour spent running the program, and the cost per hour of engagement would be 4.75 $ (= 30/6.32), 77.0 % (= 4.75/6.17) as much as for the EA Forum. So, considering uncertainty, it looks like the iteration of September 2023 of The Introductory EA Program had a cost per hour of engagement similar to that of the EA Forum nowadays.
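For concreteness, the cost-per-engagement-hour arithmetic above can be sketched in a few lines. All inputs are the comment's own figures (the 2.5/6 budget share comes from the formula quoted above); the variable names are mine:

```python
# Cost per hour of engagement on the EA Forum, using the figures above.
annual_budget = 1.3e6   # $, CEA's online budget (per footnote 1)
forum_share = 2.5 / 6   # fraction going to the Forum (per the formula above)
daily_cost = annual_budget * forum_share / 365.25      # ~1.48 k$/day
daily_engagement = 240  # hours of engagement per day (last ~6 months)
forum_cost_per_hour = daily_cost / daily_engagement    # ~6.17 $/h

# Introductory EA Program, September 2023 iteration.
attendances_per_run_hour = 1.58  # attendances per hour spent running it
hours_per_attendance = 4         # 3 h preparation + 1 h discussion
cost_per_run_hour = 30           # $ per hour spent running the program
program_cost_per_hour = cost_per_run_hour / (
    attendances_per_run_hour * hours_per_attendance
)  # ~4.75 $/h
```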
You can try to make comparisons with other programs, or estimate which work you are doing which generates the most engagement time per $ (although figuring out the additional engagement time a given activity generated is tricky; for example, running an EA Forum event will of course cause engagement with the posts and comments related to the event, but will tend to decrease the engagement with other posts and comments).
@Toby Tremlett🔹, you may be interested in this comment.
Thanks for sharing! I shared it on Ambitious Impact's Slack too. You may also want to email the people who expressed interest in contracting for Rethink Priorities' animal welfare department. I am sure they would be happy to share the emails of the people who said they would be happy to be contacted for other similar opportunities.
I guess it would be better for you to recommend charities more like GiveWell does. Instead of having a process where you screen lots of charities, and review many in depth, I would simply recommend charities which have consistently received your movement grants, and whose marginal cost-effectiveness is expected to remain similar for the amount of additional funds caused by your recommendation.
I would be happy to review the output of this project for free before or after it is published. I might apply myself too.
I would be happy to know the total engagement time (reading the post, and reading and writing comments on the post) by post by day. As of now, one can only check the karma, comments, views, and reads by post by day on the posts' analytics.
The posts' analytics include the number of reads. Does this refer to the number of unique users who engaged with the post for at least 30 s uninterruptedly? I would clarify this in the note.
The posts' analytics include the mean reading time. Does this refer to "reading time across all reads"/"number of reads", or "total reading time across all views"/"number of views"? I would clarify this in the note, and use the term "mean engagement time" instead of "mean reading time", considering the time writing comments is included.
I suggested focussing on karma to measure the impact of events. However, thinking more about it, I would focus more on engagement time. People spending time reading posts and comments, and writing comments, suggests these were worth it even if they do not upvote them. So I would roughly assess the impact of events by estimating how much additional engagement time they caused. This is less than the engagement time across all the events' posts and comments because these decrease the engagement time across other posts and comments to some extent. You can show which events were happening on the graph below with the total hours of engagement on the EA Forum, and see if the engagement time usually goes up when there is an event (or for which types of events), as you did for posts related or not to effective giving.
Since then, our usage metrics have been pretty stable, nice!
Nitpick. You included a graph of the monthly active users at the start of the post, but I tend to think the daily hours of engagement are a better metric. It accounts for both the number of active users, and the hours of engagement per active user. In any case, the daily hours of engagement have also stabilised since the middle of October.
The "Copy image" feature below is not working for me. It copies to the clipboard the link of the dashboard instead of the image.
Thanks for the comment, Neel! I would say increasing the impact of donations is also the best strategy to maximise impact for (random) people working in the area they consider most cost-effective:
@Benjamin_Todd thinks "it's defensible to say that the best of all interventions in an area are about 10 times more effective than [as effective as] the mean, and perhaps as much as 100 times".
Donating 10 % more to an organisation 10 to 100 times as cost-effective as one could join is 10 (= 0.1*10/0.1) to 100 (= 0.1*100/0.1) times as impactful as working there if the alternative hire would be 10 % less impactful.
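As a sanity check, the ratio above can be computed with a one-line helper (the 10 % extra donation, the 10 to 100 times multiplier, and the 10 % hire gap are the toy numbers from the comment):

```python
def donation_vs_work(extra_donation, multiplier, hire_gap):
    """Impact of donating `extra_donation` (as a fraction of salary) to an
    organisation `multiplier` times as cost-effective as one's own, relative
    to working there when the alternative hire would be `hire_gap` less
    impactful."""
    return extra_donation * multiplier / hire_gap

print(round(donation_vs_work(0.1, 10, 0.1)))   # 10
print(round(donation_vs_work(0.1, 100, 0.1)))  # 100
```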
Thanks for the post, Anthony. Sorry for repeating myself, but I want to make sure I understood the consequences of what you are proposing. Consider these 2 options for what I could do tomorrow:
Torturing my family and friends, and then killing myself. I would never do this.
Donating 100 $ to the Shrimp Welfare Project (SWP), which I estimate would be as good as averting 6.39 k (= 639/10*100) human-years of disabling pain.
My understanding is that you think it is "irreducibly indeterminate" which of the above is better for increasing expected impartial welfare, whereas I believe the 2nd option is clearly better. Did I get the implications of your position right?
Thanks for clarifying, Anthony.
You might think we can precisely estimate the value of these coarse outcomes "better than chance" in some sense (more on this below)
Yes, I do.
You might think we can precisely estimate the value of these coarse outcomes "better than chance" in some sense (more on this below), but at the part of the post you're replying to, I'm just making this more fundamental point: "Since we lack access to possible worlds, our precise guesses don't directly come from our value function, but from some extra model of the hypotheses we're aware of (and unaware of)." Do you agree with that claim?
Yes, I agree.
I don't think anything is "obvious" when making judgments about overall welfare across the cosmos.
I think the vast majority of actions have a probability of being beneficial only slightly above 50 %, as I guess they decrease wild-animal-years, and wild animals have negative lives with a probability slightly above 50 %. However, I would still say there are actions which are robustly beneficial in expectation, such as donating to SWP. It is possible SWP is harmful, but I still think donating to it is robustly better than killing my family, friends, and myself, even in terms of increasing impartial welfare.
I recommend checking out the second post, especially these two sections, for why I donāt think this is valid.
Thanks. I will do that.
Thanks for sharing, Rakefet! I think donating more and better is the best strategy to increase impact for the vast majority of people:
Benjamin Todd thinks "it's defensible to say that the best of all interventions in an area are about 10 times more effective than [as effective as] the mean, and perhaps as much as 100 times".
Donating 10 % more to an organisation 10 to 100 times as cost-effective as one could join is 10 (= 0.1*10/0.1) to 100 (= 0.1*100/0.1) times as impactful as working there if the alternative hire would be 10 % less impactful.
Thanks for the post, Anthony.
Whereas, if you're unaware or only coarsely aware of some possible worlds, how do you tell what tradeoffs you're making? It would be misleading to say we're simply "uncertain" over the possible worlds contained in a given hypothesis, because we haven't spelled out the range of worlds we're uncertain over in the first place. The usual conception of EV is ill-defined under unawareness.
There is an astronomical number of precise outcomes of rolling a die. For example, the die may stop in an astronomical number of precise locations. So there is a sense in which I am "unaware or only coarsely aware" of not only "some possible worlds", but practically all possible worlds. However, I can still precisely estimate the probability of the outcomes of rolling a die. Predictions about impartial welfare will be much less accurate, but still informative, at least negligibly so, as long as they are infinitesimally better than chance. Do you agree?
If not, do you have any views on which of the following I should do tomorrow to increase expected total hedonistic welfare (across all space and time)?
Killing my family, friends, and myself. I would never do this.
Donating 100 $ to the Shrimp Welfare Project (SWP), which I estimate would avert the equivalent of 6.39 k (= 639/10*100) human-years of disabling pain.
It is obvious to me that the 2nd option is much better.
Thanks for sharing, Aidan! Great work.
The Centre for Exploratory Altruism Research (CEARCH) estimated GWWC's marginal multiplier to be 17.6 % (= 2.18*10^6/(12.4*10^6)) of GWWC's multiplier. This suggests GWWC's marginal multiplier from 2023 to 2024 was 1.06 (= 0.176*6), such that donating to GWWC over that period was roughly as cost-effective as donating to GiveWell's top charities. A marginal multiplier of 1 may look bad, but is actually optimal in the sense that GWWC should spend more (less) for a marginal multiplier above (below) 1.
You estimate the mean annual donations across 10 % Pledgers by year of pledge, and use this to estimate the value of future pledges. Have you considered controlling not only for the year of the pledge, but also for the year in which the pledge started? I guess pledges starting in later years are less valuable, such that you are overestimating your impact by not controlling for the year the pledge started. To do this, I would:
Run a linear regression of the mean recorded donations D in year y of the pledges started in year s on s and y, respecting the equation D(s, y) = a + b*s + c*y.
Determine the expected recorded donations by year of the pledge for each pledge started in, for example, 2025 from D(2025, y) = a + b*2025 + c*y.
Calculate the expected donations by year of the pledge multiplying the above by the total donations as a fraction of the recorded donations, which you estimated to be 1.14.
I would be happy to run the regression above if you shared the recorded donations by pledger and year, and the years in which each pledge started.
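A minimal sketch of the regression I am proposing, fit by ordinary least squares with NumPy. The rows below are made-up placeholders (the real inputs would be GWWC's recorded donations by start year and pledge year); the 1.14 total-to-recorded ratio is the post's estimate:

```python
import numpy as np

# Hypothetical rows: (start year s, year of the pledge y, mean recorded
# donations D in $). Replace with GWWC's actual data.
data = np.array([
    (2020, 1, 1200.0), (2020, 2, 1100.0), (2020, 3, 1000.0),
    (2021, 1, 1000.0), (2021, 2,  900.0),
    (2022, 1,  800.0),
])
s, y, D = data[:, 0], data[:, 1], data[:, 2]

# Design matrix [1, s, y] for the model D(s, y) = a + b*s + c*y.
X = np.column_stack([np.ones_like(s), s, y])
(a, b, c), *_ = np.linalg.lstsq(X, D, rcond=None)

# Expected donations in year y of a pledge started in 2025, scaled by the
# 1.14 total-to-recorded-donations ratio from the post.
def expected_donations(year_of_pledge, start=2025, ratio=1.14):
    return ratio * (a + b * start + c * year_of_pledge)
```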
Which fraction of the people who started The 10 % Pledge until the end of 2024 recorded a donation in 2024? I assume significantly less than 59.4 %, which is your estimate for the fraction of 10 % Pledgers recording a donation in the 1st year of their pledge. So I wonder whether the information below on GWWC's website is somewhat misleading. Maybe only around 1/3 (less than 59.4 %) of the members recorded a donation in 2024. In addition, you estimate only 12.3 % (= 1 - 1/1.14) of your impact from 2023 to 2024 came from donations which were not recorded.
Our community includes 9,840 lifetime members pledging ≥10% of their income, plus 1,117 trial pledgers, together making up our 10,957 strong giving community
Have you considered retiring The Trial Pledge? You estimated 96 % of your impact came from The 10 % Pledge.
Makes sense. I also think tracking the monthly value of posts and comments along the lines I suggested outside events would be useful. CEA's dashboard has the number of posts with at least 2 upvotes excluding self-votes, but this accounts very little for quality (nitpick: I would account for self-votes, as I guess a post with no votes still has some value).
Thanks for the post!
In my last essay, I alluded to the fact that "we all know the vegans are the good guys."
I think decreasing the consumption of animal-based foods is harmful due to effects on wild animals. I estimate School Plates in 2023, and Veganuary in 2024 harmed soil animals 5.75 k and 3.85 k times as much as they benefited farmed animals.
Thanks for the post, Toby!
I think the total karma across posts would be a better metric than the number of posts, and the number of posts with more than 50 karma. It would combine information about quantity and quality, account for the value of posts with less than 51 karma, and account for varying quality among posts with more than 50 karma. If you believe the value of posts increases more (less) than linearly with karma, you can estimate the sum of karma^alpha across posts for alpha higher (lower) than 1. You can also include the value of comments in the same way, estimating it from the sum of k*karma^beta across comments, where k determines the value of comments compared to that of posts, and beta determines how the value of comments increases with karma (maybe it should simply be equal to alpha).
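As an illustration, the metric could be computed as below. The karma values and the parameters alpha, beta, and k are placeholders, not real Forum data:

```python
def forum_value(post_karma, comment_karma, alpha=1.0, beta=1.0, k=0.2):
    """Sum of karma^alpha across posts plus k*karma^beta across comments.
    Negative karma is clipped to zero to keep fractional powers defined."""
    posts = sum(max(karma, 0) ** alpha for karma in post_karma)
    comments = sum(k * max(karma, 0) ** beta for karma in comment_karma)
    return posts + comments

# With alpha = beta = 1 and k = 0.2, 3 posts and 2 comments:
value = forum_value([60, 12, 3], [10, 4])  # 75 + 0.2*14 = 77.8
```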
[...]
Sarah estimates that we raised between $74K and $85K for the charities that took part in our election, via platforming them (i.e. without counting the election money).
[...]
Last year we estimated costs at around $30-35K (most staff time). Iād expect this year was similar, if a little less (there were some process improvements from last year, and we benefited from experimentation on what works).
I estimate you raised 93.7 k$ (= (14.2 + 79.5)*10^3) during the last giving season, 14.2 k$ directly, and 79.5 k$ (= (74 + 85)/2*10^3) indirectly. For a cost equal to the lower bound of the giving season of 2023 of 30 k$, your multiplier would be 3.12 (= 93.7*10^3/(30*10^3)) assuming a counterfactual of no impact. So the money you influenced had to be more than 32.1 % (= 1/3.12) as cost-effective as the counterfactual for the giving season to have counterfactually increased impact. I guess the counterfactual would mainly be making worse donations, not donating less.
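The multiplier arithmetic above in code form, using the comment's figures (the 74 to 85 k$ range is averaged):

```python
direct = 14.2e3               # $ raised directly during the giving season
indirect = (74e3 + 85e3) / 2  # $ raised via platforming, ~79.5 k$
cost = 30e3                   # $, lower bound of the 2023 cost estimate
multiplier = (direct + indirect) / cost  # ~3.12, assuming no counterfactual impact
breakeven = 1 / multiplier               # ~32.1 % as cost-effective as the counterfactual
```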
Retro: Retro AI Welfare Debate Week
This links to a private doc.
Thanks for sharing!
Novel meat-reduction interventions
I think decreasing the consumption of animal-based foods is harmful due to effects on wild animals. I estimate School Plates in 2023, and Veganuary in 2024 harmed soil animals 5.75 k and 3.85 k times as much as they benefited farmed animals. I would be curious to know Faunalyticsā thoughts on this.
I agree there is a non-arbitrary boundary between "comparable" and "incomparable" that results from your framework. However, I think the empirics of some comparisons like the one above are such that we can still non-arbitrarily say that one option is better than the other for an infinite time horizon. Which of your empirical beliefs would have to change for this to be the case? For me, the crucial consideration is whether the expected effects of actions decrease or increase over time and space. I think they decrease, and that one can get a sufficiently good grasp of the dominant nearterm effects to meaningfully compare actions.