I think this post is really pretty good. What I’d like to see, either included here or in future work, is a plain-language description of your assumptions and conclusions. I tried to make one, which went something like this:
It would also be nice to move the acronyms (e.g. t_a) to the figure captions where they’re presented, and to work on formatting the tables better. You’ve done the research and thinking, so take a little more time to polish up the presentation so we can read it more easily :)
Thanks for the kind words! I like your summary. Just one note: since we are arguably so far from knowing whether insects have good or bad lives, I do not think we can reach the conclusion below.
I believe the best attitude is one of cluelessness, where we just know that insects may dominate (or not) the analysis, either making GiveWell’s top charities much more harmful or beneficial. Moreover, we should beware surprising and suspicious convergence. If insects indeed went on to dominate the analysis (quite unclear), I would expect targeted wild animal interventions to be more effective than global health and development ones.
I have now restated the meanings of N_ta and N_h just before the tables, and slightly improved the formatting of the table headers.
Ah, you are right!
Can you say why you feel that longtermism suffers from less cluelessness than what you argue the GiveWell charities do? The main limitation of longtermism is that affecting the future is riddled with cluelessness.
You mention Hilary Greaves’ talk, but it doesn’t seem to address this. She refers to “reducing the chance of premature human extinction” but doesn’t say how.
Hi Henry,
Thanks for engaging!
Assuming most of the expected value of the interventions of GiveWell’s top charities is in the future (due to effects on the population size), we are clueless about their total cost-effectiveness. This limitation also applies to longtermist interventions.
However, if the goal is maximising longterm cost-effectiveness (because that is where most of the value is), explicitly focussing on the longterm effects will tend to be better than explicitly focussing on nearterm effects. This is informed by the heuristic that it is easier to achieve something when we are trying to achieve it. So longtermist interventions will tend to be more effective.
It would also be surprising and suspicious convergence if the best interventions to save lives in the present were also the best from a longtermist perspective. The post from Alex HT I linked in the Summary has more details.
It’s worth noting that most arthropods by population are significantly smaller, have significantly smaller brains and would probably have less sophisticated behaviour (at least compared to adult black soldier flies; I’m not familiar with silkworm and other larval behaviour), so would probably score lower on both probability of sentience and welfare range. So, if you’re including all arthropods and using these figures for all arthropods, you should probably think of these numbers (or at least the BSF ones) as providing an overestimate of the arthropod welfare effects.
Hi Michael,
Thanks for pointing that out. I agree it is something worth having in mind.
However, even if the moral weight of terrestrial arthropods is much lower than those of black soldier flies and silkworms, terrestrial arthropods could still dominate. Assuming the moral weight is directly proportional to the number of neurons, in which case it is 0.0361 % (= 4.70 μ / 0.013) of that of black soldier flies, and 0.235 % (= 4.70 μ / 0.002) of that of silkworms, the mean cost-effectiveness would still increase/decrease 353 % (assuming terrestrial arthropods have negative/positive lives).
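For reference, a quick check of those ratios (the neuron counts are the ones quoted above, and the units cancel in the division, so this is only a sanity check of the arithmetic):

```python
# Reproducing the moral weight ratios above, assuming direct
# proportionality to the number of neurons (values as quoted).
n_arthropod = 4.70e-6  # generic terrestrial arthropod
n_bsf = 0.013          # black soldier fly
n_silkworm = 0.002     # silkworm

print(f"{n_arthropod / n_bsf:.4%}")       # ~0.0362 % of BSF's moral weight
print(f"{n_arthropod / n_silkworm:.3%}")  # 0.235 % of silkworms' moral weight
```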
It is true I may have overestimated the rate of deforestation, but I also expect the moral weight obtained by direct proportionality to the number of neurons to be an underestimate, so I think the analysis can go either way.
I think it would be really nice if Open Philanthropy, Rethink Priorities, Wild Animal Initiative, Faunalytics or other looked into considerations such this.
Hi Vasco, thanks for writing this! I’m glad to see more cross-cause research, and this seems like a useful starting point.
Some quick thoughts on why the deforestation rate assumptions might be too high:
This assumption would not hold if some of the major causes of deforestation are limited by factors not very sensitive to population size. For example, some deforestation may be driven by international demand for products produced in those countries, in which case the effect of more people being available to work on those products (as a result of lives saved) should be tempered by elasticity effects. Deforestation could also be limited by capital, which GiveWell beneficiaries may be unlikely to provide, given their poverty and living situations.
Deforestation for agriculture for domestic consumption or for living area would be sensitive to the population size, but, again, GiveWell beneficiaries may be unrepresentative, a possibility you implicitly acknowledge by assuming it is not the case.
Furthermore, with increasing deforestation, there will be less land left to deforest, and that land may be harder to deforest (because of practical or political challenges). Each of these points towards the marginal effect of population being smaller than the average effect.
I haven’t looked into any of this in detail or tried to verify any of these possibilities, though.
Hi Michael,
Thanks for the encouragement!
I agree I may well have overestimated the deforestation rate. That being said, even if the deforestation rate is only 1 % of what I assumed, the mean relative variation in cost-effectiveness would range from 3.86 k to 0.166 μ. We can narrow this down by focussing on the plausible moral weights, but without looking further it looks like the analysis could go either way.
Wow fascinating, thanks for this post Vasco!
I’d be inclined to take a Bayesian approach to this kind of cost-effectiveness modelling, where the “prior evidence” is the estimated impact on lives saved. This is something we have strong reason to believe is good under many world views. Then the “additional evidence” would be the reduction in insect welfare caused by deforestation. I’m just so very uncertain about whether the second one is really a negative effect that I think it would be swamped by the impact on lives saved. This is because we have several steps of major uncertainty: impact of GiveWell charities on deforestation, impact of deforestation on insect welfare, moral weight of insects, baseline welfare of insects (positive or negative).
One issue here is that the same objection could potentially be applied to longtermist-focused charities, but I actually don’t think this is true. I think (say) working in government to reduce the risk of biological weapons is actually far more robustly positive than trying to improve insect welfare by reducing deforestation. It also seems like the value of the far future could be far greater than the impact on present-day insects.
What are your thoughts on this approach?
Hi Lucas,
Thanks for engaging!
I think the approach you are suggesting is very much in line with that of the section “Applying Bayesian adjustments to cost-effectiveness estimates for donations, actions, etc.” of this post from Holden Karnofsky.
I used to apply the above as follows (CE stands for cost-effectiveness, E for expected value, and V for variance):
- E(“CE”) = “weight of modelled effects”*E(“CE for modelled effects”) + “weight of non-modelled effects”*E(“CE for non-modelled effects”).
- “Weight of modelled effects” = (1/V(“CE for modelled effects”))/(1/V(“CE for modelled effects”) + 1/V(“CE for non-modelled effects”)). This tends to 1 as the uncertainty of the non-modelled effects increases.
- “Weight of non-modelled effects” = (1/V(“CE for non-modelled effects”))/(1/V(“CE for modelled effects”) + 1/V(“CE for non-modelled effects”)). This tends to 0 as the uncertainty of the non-modelled effects increases.
If the modelled effects are lives saved in the near term, and the non-modelled effects are the impact on the welfare of terrestrial arthropods (which is not modelled by GW), V(“CE for modelled effects”) << V(“CE for non-modelled effects”). So, based on the above, you are saying that we should give much more weight to the lives saved in the near term, and therefore that these are the driver of the cost-effectiveness.
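As a minimal sketch of how those two weight formulas behave (all numbers below are made up purely for illustration; none come from GW or the post):

```python
# Minimal sketch of the inverse-variance weighting above.
# All inputs are hypothetical.

def weight_modelled(v_modelled: float, v_non_modelled: float) -> float:
    """Weight of the modelled effects under inverse-variance weighting."""
    return (1 / v_modelled) / (1 / v_modelled + 1 / v_non_modelled)

# Hypothetical inputs: a precise estimate for the modelled effects (lives
# saved in the near term) and a very noisy one for the non-modelled
# effects (impact on terrestrial arthropods).
e_mod, v_mod = 1.0, 0.01
e_non, v_non = -50.0, 1e6

w_mod = weight_modelled(v_mod, v_non)
e_ce = w_mod * e_mod + (1 - w_mod) * e_non

print(w_mod)  # ~0.99999999: essentially all weight on the modelled effects
print(e_ce)   # ~1.0: E("CE") is driven by E("CE for modelled effects")
```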
I believe the formula of the 1st bullet is not correct. I will try to illustrate with a sort of reversed Pascal’s mugging. Imagine there was a button which would destroy the whole universe with probability 50 % when pressed, and someone was considering whether to press it. For the sake of the argument, we can suppose the person would certainly (i.e. with probability 100 %) be happy while pressing the button. Based on the formula of the 1st bullet, it looks like all the weight would go to the pretty negligible effect on the person pressing the button, because it is a certain effect. So the cost-effectiveness of pressing the button would essentially be driven by the effect on one single person, as opposed to the consideration that the whole universe could end with probability 50 %. The argument works for any probability of universal destruction lower than 1 (e.g. 99.99 %), so the example also implies null value of information from learning more about the impact of pressing the button. All of this seems pretty wrong.
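To put hypothetical numbers on the button example (all magnitudes are invented; the point is only how the weights behave):

```python
# The button example with invented numbers. The presser's happiness is
# tiny but (nearly) certain; the universe's fate is enormous but uncertain.
e_presser, v_presser = 1.0, 1e-9  # near-certain, negligible effect
e_universe = -0.5 * 1e15          # 50 % chance of losing everything
v_universe = 1e29                 # huge variance from the 50/50 gamble

w_presser = (1 / v_presser) / (1 / v_presser + 1 / v_universe)
e_ce = w_presser * e_presser + (1 - w_presser) * e_universe

print(w_presser)  # ~1: virtually all weight on the presser's happiness
print(e_ce)       # ~1: the possible end of the universe barely registers
```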
However, I still think priors are valuable. If 2 restaurants have a rating of 4.5/5, but one of the ratings is based on 1 review, and the other on 1 k reviews, the restaurant with more reviews is most likely better (assuming a prior lower than 4.5).
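For instance, one can model this as shrinkage towards a common prior, where the prior acts like some pseudo-reviews at the prior mean (the prior mean and pseudo-count below are assumed, not from any dataset):

```python
# The restaurant intuition as simple Bayesian shrinkage. The prior acts
# like K pseudo-reviews at the prior mean (both values assumed here).
PRIOR_MEAN = 4.0  # hypothetical prior: a typical restaurant rates 4.0/5
K = 10            # hypothetical strength of the prior, in pseudo-reviews

def posterior_mean(rating: float, n_reviews: int) -> float:
    """Posterior mean rating after shrinking towards the prior."""
    return (n_reviews * rating + K * PRIOR_MEAN) / (n_reviews + K)

print(posterior_mean(4.5, 1))     # ~4.05: one review barely moves the prior
print(posterior_mean(4.5, 1000))  # ~4.50: 1 k reviews dominate the prior
```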
So I think the formula is not right as I wrote it above, but is pointing to something valuable. I would say it can be corrected as follows:
E(“CE”) = “weight of method 1“*E(“CE for method 1”) + “weight of method 2”*E(“CE for method 2”).
I do not have a clear approach to estimating the weights, but I think they should account not only for uncertainty, but also for scale. Inverse-variance weighting appears to be a good approach if all methods output estimates for the same variable (such as in a meta-analysis). For cost-effectiveness analyses, I suppose the relevant variable is the total cost-effectiveness. This encompasses near term effects on people, but also near term effects on animals, and long term effects. Since the scope of GW’s estimates for lives saved differs from that of my estimates for the impact on terrestrial arthropods, I believe we cannot directly apply inverse-variance weighting.
It is not reasonable to press a button which may well destroy the whole universe for the sake of being happy for certain. In the same way, but to a much smaller extent, I do not think we can conclude GW’s top charities are robustly cost-effective just because we are pretty certain about their near term effects on people. We arguably have to investigate the other effects (decreasing uncertainty, and increasing resilience), such as those on animals, and the consequences of changing the population size (which have apparently not been figured out; see comments here).
I agree efforts around pandemic preparedness are more robustly positive than those targeting insect welfare. 2 strong arguments come to mind:
- It looks like at least some projects (e.g. developing affordable super PPE) are robustly good for decreasing extinction risks, and I think extinction is robustly bad.
- Extinction risks are pretty large in scale, and so they will tend to be a more important driver of the total cost-effectiveness. This is not necessarily the case for efforts to improve insect welfare. They might e.g. unintentionally cause people to think that nature / wildlife is intrinsically good/bad, and this may plausibly shape how people think about spreading (or not) wildlife beyond Earth, which may be the driver of the total cost-effectiveness.
Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty and epitomises ivory-tower thinking that gets the Effective Altruism community so heavily criticised.
Saying “further research would be good” is easy because it is always true. Doing that research, or waiting for it to be done, is not always practical. I think you are being extremely unreasonable if, before helping someone dying of malaria, you ask for research to be done on:
- the long term impacts of bednets on population growth
- the effects of population growth on deforestation
- the effects of deforestation on insect populations and welfare
- specific quantification of insect suffering
I have a general disdain for criticizing arguments as ivory-tower thinking without engaging with the content itself. I think it is an ineffective way of communicating which leaves room for quite a lot of non-central fallacy. The same kind of ivory-tower thinking you identified has also been quite important in promoting moral progress through careful reflection. I don’t think considering animals as deserving moral attention is inherently an insulting position. Perhaps a better way of approaching this question would be to actually consider whether or not this trade-off is worth it.
P.S. I don’t think the post called for a stop to GiveWell’s act of giving. The research questions you identified are important, decision-relevant, open-ended questions which will aid GiveWell’s research. Perhaps not all of them can be solved, but that doesn’t mean we shouldn’t consider devoting a reasonable amount of resources to researching them. I’m a firm believer in world-view diversification. The counterfactual probably isn’t that GiveWell will stop helping someone dying of malaria, but they may lower their recommendations for said programs, or offer recommendations to make existing interventions more effective with an account for these new moral considerations.
I agree with you that criticising arguments without engaging with the content is bad. I do however probably agree with this statement.
“Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty and epitomises ivory-tower thinking that gets the Effective Altruism community so heavily criticised.”
I think that living a rich lifestyle in a western country, while saying that GiveWell’s projects which help lift people out of poverty could be very harmful because of potential harm to insects, is probably insulting to poor people, whether the argument is right or wrong. This also definitely gets the EA community heavily criticised.
And you say that the post doesn’t call for a stop to GiveWell’s act of giving, yet he suggests:
“I would say focussing on longtermist interventions is better, as their (longterm) effects are more predictable.”, which seems to lean in that direction.
I think a better approach, given the great uncertainty, is to research things like terrestrial arthropod suffering before referring to GiveWell or other types of giving. Why be potentially insulting, or get the community criticised, when you can encourage more research and thought without necessarily bringing global health and development into the question?
Thanks for commenting, Henry. I do feel you are pointing to something valuable. FWIW, I am confused about the implications of my analysis too. Somewhat relatedly, I liked this post from Michelle Hutchinson.