I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).
Vasco Grilo
Do "£5x" and "£5y" refer to the impact accounting for all effects? If so, you are saying that the marginal multiplier accounting for all effects could be greater than the multiplier concerning the total spending accounting for all effects. I think this can only be the case if the organisation fails to allocate funds to the most cost-effective activities (accounting for all effects) 1st.
I still guess the marginal multiplier of the effective giving initiatives (EGIs) funded by Coefficient Giving (CG) is higher than 1, but I would be a bit surprised if it was 5. In this case, CG would be leaving lots of impact on the table by not funding EGIs more. CG is scaling up their funding of EGIs, and should ideally be doing this in the way that maximises impact. For CG's marginal funding of EGIs to have a multiplier of 5, one would have to think they should be scaling up faster. Maybe they should. The altruistic market is not perfectly efficient. However, it is worth having in mind that the multiplier of CG's marginal funding of EGIs may be closer to 1 after accounting for the risks of scaling up too fast. For example, a slower scale-up could allow for learning more about which organisations are the most promising. I expect CG to be taking this into account, but mostly informally, not formally in the calculations of the multipliers of their grantees.
Thanks for the helpful example. I strongly upvoted it. I suspected you had something like it in mind. I still think the marginal multiplier of funding EGA at a given time (not across time) accounting for all effects decreases with spending if the organisation allocates funds to the most cost-effective activities 1st. In addition, I believe the marginal multiplier of funding EGA should ideally not change across time. EGA should try to move spending from the years with the lowest marginal multiplier to the years with the highest marginal multiplier, thus increasing the marginal multiplier of the years with the lowest marginal multiplier, and decreasing the marginal multiplier of the years with the highest marginal multiplier, until the marginal multiplier is the same in all years.
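The equalisation across years can be illustrated with a small numerical sketch. The marginal multiplier functions and budget below are hypothetical, purely for illustration: allocating each increment of spending to the year with the higher marginal multiplier ends with the 2 marginal multipliers (approximately) equal.

```python
def marginal_multiplier_a(spend):
    # Hypothetical diminishing marginal multiplier in year A.
    return 10 - spend

def marginal_multiplier_b(spend):
    # Hypothetical diminishing marginal multiplier in year B.
    return 6 - spend

budget = 6.0    # total spending to split across the 2 years (arbitrary units)
n_steps = 6000  # number of spending increments
step = budget / n_steps

spend_a = spend_b = 0.0
# Give each increment to the year with the higher marginal multiplier.
for _ in range(n_steps):
    if marginal_multiplier_a(spend_a) >= marginal_multiplier_b(spend_b):
        spend_a += step
    else:
        spend_b += step

# At the optimum the marginal multipliers are (approximately) equal.
print(round(spend_a, 2), round(spend_b, 2))      # 5.0 1.0
print(round(marginal_multiplier_a(spend_a), 2))  # 5.0
```

The greedy allocation is just a numerical way of seeing the argument in the text: as long as one year has a higher marginal multiplier, moving spending towards it increases impact, so the optimum has equal marginal multipliers across years.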
In your example, the marginal multiplier of the strategy EGA is scaling up, neglecting effects on other strategies, increases with spending. It is 4 for 200 k£ of spending, and 6 for 500 k£. However, I believe the marginal multiplier of EGA is not the same as the marginal multiplier of the strategy it is scaling up neglecting effects on other strategies. I would say a significant fraction of the value of funding EGA while it has a bare-bones website, and scales up social media ads, is increasing the probability of EGA shifting to the fancy website. Neglecting this results in underestimating the marginal multiplier of funding EGA. Here is another way of noting this. For spending up to 10 k£, the marginal multiplier of the strategy EGA is scaling up, neglecting effects on other strategies, is close to 0 (assuming the bare-bones website barely raises funds without social media ads). Yet, this does not reflect well the cost-effectiveness of funding EGA in its earliest stages. A significant fraction of the impact of initial funding comes from increasing the probability of EGA achieving strategies with a higher marginal multiplier neglecting effects on other strategies.
Here is how I relate the above to economies of scale. Being an early adopter of solar panels would not have looked like a cost-effective way of decreasing greenhouse gas (GHG) emissions looking just at the initial cost of solar panels, and neglecting the reduction in cost resulting from increased adoption. However, a significant fraction of the (expected) decrease in GHG emissions would have come from the potential of early adoption enabling cheaper panels. This is why I mentioned in my past comment "marginal multiplier accounting for all effects, including longterm and low probability effects".
Relatedly, it may naively seem that decreasing the consumption of chicken by 0.1 kg does not change the production of chicken if this can only be adjusted by multiples of e.g. 1 k kg. However, in this case, a better model would be that decreasing the consumption of chicken by 0.1 kg would increase by roughly 0.01 pp (= 0.1/(1*10^3)) the probability of the production of chicken decreasing by 1 k kg. So the expected reduction in the production of chicken would still be roughly 0.1 kg (= 1*10^-4*1*10^3).
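The threshold model above can be written out in a few lines. The increment size is the hypothetical 1 k kg from the text.

```python
# Threshold model: chicken production only adjusts in increments of
# 1,000 kg, but a 0.1 kg drop in consumption raises the probability
# of one downward adjustment by 0.1/1,000.
consumption_drop_kg = 0.1
increment_kg = 1_000
prob_adjustment = consumption_drop_kg / increment_kg  # roughly 10^-4, i.e. 0.01 pp
expected_drop_kg = prob_adjustment * increment_kg     # roughly 0.1 kg in expectation
print(prob_adjustment, expected_drop_kg)
```

So, in expectation, the reduction in production still matches the reduction in consumption, even though any single 0.1 kg decision almost surely changes nothing.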
The marginal multiplier should still decrease with spending if the organisation allocates funds to the most cost-effective activities 1st? I think so. If the marginal multiplier accounting for all effects, including longterm and low probability effects, of additional spending on core activities was lower than the marginal multiplier of additional spending on expansion activities, the organisation should move funds from core to expansion activities until their marginal multipliers were equal. Otherwise, they would be leaving impact on the table. The impact of core activities may not always be that visible. The impact of the organisation may not change much nearterm as a result of a temporary reduction in the spending on core activities. However, these are important for the longterm success of the organisation.
Consciousness will slip through our fingers
Hi Kestrel and Melanie. Thanks for the relevant discussion.
Melanie, could you share your current bar in terms of the multiplier affecting the total expenses of grantees? You say this multiplier for 2025 was "~5–6x", but your bar is lower because the cost-effectiveness of your grants has to be above the bar, and because you are expanding your funding?
That said, if we're funding an organization, even below its full budget, you can assume we believe they are above our bar at their full projected budget. We use their full projected expenses when estimating the giving multiplier, so a partial grant from us is not a signal that we think the marginal dollar is low-value.
On the other hand, the marginal multiplier could in principle be 0 or negative even if the multiplier affecting the total expenses is high. For example, if the last 10 % of the total expenses have a multiplier of -1, the 2nd last 10 % have a multiplier of 0, and the 1st 80 % have a multiplier of 10, the multiplier affecting the overall expenses would be 7.9 (= 0.1*(-1) + 0.1*0 + 0.8*10), but the marginal multiplier would be at most -1 (at most because the multiplier could continue to decrease as expenses increase). I do not think a negative marginal multiplier is realistic, but I wonder whether it could be close to 1 when accounting for all effects, such as the benefits of a funding cap making grantees look for more counterfactual sources of funding, and the costs of CG scaling up the funding of effective giving initiatives too fast.
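The tranche example above can be computed directly. The shares and multipliers are the hypothetical ones from the text.

```python
# Tranches of total expenses from the example: (share of expenses,
# multiplier of that tranche), ordered from 1st to last.
tranches = [(0.8, 10), (0.1, 0), (0.1, -1)]
overall_multiplier = sum(share * mult for share, mult in tranches)
marginal_multiplier = tranches[-1][1]  # multiplier of the last tranche
print(round(overall_multiplier, 1), marginal_multiplier)  # 7.9 -1
```

This makes the wedge explicit: a high multiplier on total expenses (7.9) is compatible with a marginal multiplier at or below -1.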
Hi Toby.
I wonder what the trends look like for other cause areas.
I thought about this too. It would be interesting to know the details, but I would be surprised if the number of comments per post decreased more in the overall population of posts than in GHD posts. According to Nick's 1st graph, there were around 9 comments per GHD post in the 1st 3 months of 2021, and around 2 in the last few months, 22.2 % as many (= 2/9), which means there was a reduction of 77.8 % (= 1 - 0.222). In contrast, as illustrated below, the number of engagement hours per day, and posts per month with at least 2 karma, are slightly higher today, which means the number of engagement hours per post has not changed much since early 2021. I guess the number of comments per post is not very far from proportional to the number of engagement hours per post. So I suspect the number of comments per post has not changed a lot in the overall population of posts.
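The arithmetic behind the GHD figures, using the values read off Nick's 1st graph:

```python
# Comments per GHD post, read from Nick's 1st graph.
comments_per_post_early_2021 = 9
comments_per_post_recent = 2
ratio = comments_per_post_recent / comments_per_post_early_2021
print(round(ratio * 100, 1))        # 22.2 (% as many comments per post)
print(round((1 - ratio) * 100, 1))  # 77.8 (% reduction)
```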
Thanks for the post, Guillaume.
I think hedonistic welfare per unit time should be represented as a practically continuous distribution. This implies a probability of practically 0 for the welfare per unit time being equal to any particular value, including 0, which results in a probability of sentience of practically 100 %. Here is a related article arguing for accepting that all animals are conscious, and focussing on how they are conscious.
I believe increasing the welfare of shrimps can still have negligible benefits despite the above.
You may be interested in my post Are you overestimating the importance of the probability of sentience?.
Summary
If you trust there is as little variation in the probability of sentience as suggested by the values used by Ambitious Impact (AIM) and Animal Charity Evaluators (ACE), or presented in Bob Fischer's book about comparing welfare across species, I believe there are other factors which may be more important for the probability of a small donation increasing animal welfare:
Donating to incremental instead of hits-based interventions.
Donating to smaller organisations.
Donating to organisations in lower income countries.
I wonder to what extent people donate to interventions targeting animals which are more likely to be sentient to boost the probability of increasing welfare. People routinely take actions which are super unlikely to actually matter:
I calculate driving a car for 10 km in Great Britain without a seatbelt leads to 1 additional death with a probability of 1 in 73.0 M. AIM uses a probability of sentience of shrimps which is 34.2 M times as high.
Andrew Gelman found the probability of a voter in a small US state polling around 50ā50 in a close election nationally changing the outcome of the national election could get as high as 1 in 3 million. AIM uses a probability of sentience of shrimps which is 1.40 M times as high.
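As a cross-check, the 2 bullets above imply roughly the same probability of sentience of shrimps used by AIM (about 0.47), so they are mutually consistent:

```python
# Probabilities and ratios quoted in the 2 bullets above.
p_death_no_seatbelt = 1 / 73.0e6  # per 10 km driven without a seatbelt
p_decisive_vote = 1 / 3e6         # Gelman's upper bound for a decisive vote
ratio_seatbelt = 34.2e6
ratio_vote = 1.40e6
# Implied AIM probability of sentience of shrimps from each bullet.
print(round(p_death_no_seatbelt * ratio_seatbelt, 3))  # 0.468
print(round(p_decisive_vote * ratio_vote, 3))          # 0.467
```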
Hi Michael. I agree. On the other hand, I also think the Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI), which leads to more farmed shrimps being electrically stunned, may have a very low cost-effectiveness even if their target shrimps are sentient with a probability of 100 %. Their subjective experiences can have a super low intensity. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" from 0 to 2, which covers my reasonable best guesses, the individual welfare per fully-healthy-shrimp-year is 10^-12 (= (10^-6)^2) to 1 times that of humans, as shrimps have 10^-6 times as many neurons as humans. In that case, I estimate that HSI has increased the welfare of farmed shrimps 2.26*10^-4 (= 2.06*10^-8/(9.13*10^-5)) to 1.49 k (= 20.6*10^3/13.8) times as cost-effectively as cage-free corporate campaigns increase the welfare of chickens.
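The ranges above can be reproduced with a short sketch; the only inputs are the figures quoted in the comment.

```python
# Shrimps have ~10^-6 times as many neurons as humans; individual
# welfare per fully-healthy-animal-year is assumed proportional to
# "individual number of neurons"^"exponent" for exponents from 0 to 2.
neurons_ratio = 1e-6
for exponent in (0, 1, 2):
    print(exponent, neurons_ratio ** exponent)  # welfare relative to humans

# Resulting range of the cost-effectiveness of HSI relative to
# cage-free corporate campaigns, using the figures from the comment.
low = 2.06e-8 / 9.13e-5
high = 20.6e3 / 13.8
print(f"{low:.2e}", round(high))  # 2.26e-04 1493
```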
Hi Hannah. Are you still planning to email me?
I would estimate the number of layer-years improved in expectation in year Y from "expected population of layers in year Y"*("expected population of layers in cages in year Y without the intervention as a fraction of all of them in year Y" - "expected population of layers in cages in year Y with the intervention as a fraction of all of them in year Y") = P(Y)*(f_control(Y) - f_intervention(Y)), which is correct by definition.
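A minimal sketch of the estimator; the population and fractions below are hypothetical placeholders.

```python
def expected_layer_years_improved(population, f_control, f_intervention):
    # P(Y)*(f_control(Y) - f_intervention(Y)): expected number of
    # layer-years moved out of cages in year Y by the intervention.
    return population * (f_control - f_intervention)

# Hypothetical inputs: 1 M layers, 60 % caged without the intervention
# vs 50 % caged with it.
print(round(expected_layer_years_improved(1e6, 0.60, 0.50)))  # 100000
```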
Here is a post illustrating this.
Summary
Cost-effectiveness analyses (CEAs) of interventions accelerating animal welfare reforms usually estimate the increase in the welfare of the target animals (for example, hens in cages) based on the acceleration in years of the full implementation of the reform. This makes sense if each level of implementation of the reform is accelerated as much as its full implementation.
However, there are many cases where the acceleration of the full implementation of the reform is not enough to determine the number of animals helped, or animal-years improved. I discuss some below.
Estimation of the benefits of accelerating welfare reforms
Thanks for the great post, Stefan.
Risk aversion. Risk aversion is normally a reason not to make a hard-to-reverse decision. Since reversibility gives you the option to switch course if your strategy underperforms, it normally reduces the risk of a truly bad outcome. Note, though, that the standard view within the effective altruism movement seems to be that altruists should not be risk-averse.
It could similarly be argued that reversibility gives one the option to switch course despite their strategy performing well, thus increasing the risk of missing a truly great outcome? The takeaway is that one should make certainly bad options less available, and certainly good options more unavoidable?
Hi Ajeya.
But for the first time, I don't see any solid trend we can extrapolate to say it won't happen soon.[11] AI R&D really could be automated this year.
What are your predictions for the unemployment rate of software engineers? What do you think about these reasons for potentially overestimating the pace of automation based on AI benchmarks?
But there's a big problem here – if AIs are actually able to perform most tasks on 1-hour task horizons, why don't we see more real-world task automation? For example, most emails take less than an hour to write, but crafting emails remains an important part of the lives of billions of people every day.
Some of this could be due to people underusing AI systems,[2] but in this post I want to focus on reasons that are more fundamental to the capabilities of AI systems. In particular, I think there are three such reasons that are the most important:
Time-horizon estimates are very domain-specific
Task reliability strongly influences task horizons
Tasks are very bundled together and hard to separate out.
Welcome to the EA Forum, Max. Thanks for the clarification, and additional context. I am rooting for your (GWWCās) success.
Thanks for asking, Vince. Here are some suggestions listed alphabetically which are not in your sheet, and have not yet been mentioned in other answers to your post:
Rethink Priorities' (RP's) animal welfare department.
Welfare Footprint Institute (WFI).
Thanks for the post, Michael.
However, any specific function or set of coefficients would (to me) require justification, and it's unclear that there can be any good justification.
I also worry about the arbitrariness of the weights (coefficients) of the models. In Bob Fischer's book about comparing welfare across species, there seems to be only 1 line about the weights used to aggregate the tentative estimates for the welfare range, the difference between the maximum and minimum hedonistic welfare per unit time. "We assigned 30 percent credence to the neurophysiological model, 10 percent to the equality model, and 60 percent to the simple additive model". People usually give weights that are at least 0.1/"number of models", which is at least 3.33 % (= 0.1/3) for 3 models, when it is quite hard to estimate the weights. However, giving weights which are not much smaller than the uniform weight of 1/"number of models" could easily lead to huge mistakes. As a silly example, if I asked random people with age 7 about whether the gravitational force between 2 objects is proportional to "distance"^-2 (correct answer), "distance"^-20, or "distance"^-200, I imagine I would get a significant fraction picking the exponents of -20 and -200. Assuming 60 % picked -2, 20 % picked -20, and 20 % picked -200, one may naively conclude the mean exponent of -45.2 (= 0.6*(-2) + 0.2*(-20) + 0.2*(-200)) is reasonable. Yet, there is lots of empirical evidence against this which the respondents are not aware of. The right conclusion would be that the respondents have no idea about the right exponent because they would not be able to adequately justify their picks. I think we are in a similar situation with respect to comparing hedonistic welfare across species.
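The naive credence-weighted mean in the silly example can be computed directly:

```python
# Credences from the silly poll example: exponent -> fraction picking it.
credences = {-2: 0.6, -20: 0.2, -200: 0.2}
mean_exponent = sum(exponent * credence for exponent, credence in credences.items())
print(round(mean_exponent, 1))  # -45.2
```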
Thanks for the post, Michael.
The more or larger such changes are necessary to get from one brain to another, the less tight the bounds on the comparisons could become, the further they may go both negative and positive overall,[2] and the less reasonable it seems to make such comparisons at all.
I agree comparisons become increasingly uncertain as the difference between the states of the organisms increases. However, I do not think there is a point where comparisons go from possible, but extremely difficult, to not possible at all. I would say there is just a progressive widening of the distribution representing the hedonistic welfare per unit time of a given state of an organism as it moves away from typical human states. As an example, I could say my hedonistic welfare right now is 0.5 to 1.5 times that of a random human who is awake, whereas that of a random nematode might be 10^-17 to 1 times that of a random human who is awake. I estimate the ratio between the individual number of neurons of nematodes and humans is 2.79*10^-9, whose square is 7.78*10^-18, roughly 10^-17.
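The lower bound of the nematode range follows from squaring the neuron-count ratio quoted above:

```python
# Estimated ratio between the individual number of neurons of
# nematodes and humans, squared to get the lower bound of the
# welfare range (exponent of 2).
neurons_nematode_over_human = 2.79e-9
lower_bound = neurons_nematode_over_human ** 2
print(f"{lower_bound:.2e}")  # 7.78e-18
```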
Are you confident that CG should be increasing the funding of EGIs faster (for example, by using looser funding caps)? If not, can you be confident that funding the EGIs supported by CG is significantly more cost-effective than funding GiveWell?