Thanks so much for your awesome work!! :)
What is StrongMinds’ room for more funding, and do you expect the cost-effectiveness of the marginal dollar (i.e. additional funds) to be any worse than the average cost-effectiveness of StrongMinds?
Fantastic work—thank you!
Re Jalil et al. (2022), it’s interesting to see that climate change messaging led to a decrease in poultry/fish consumption as well as red meat consumption. My prior concern would’ve been that people might simply switch from red meat to poultry/fish. For those interested in the general topic, note also this meta-review on interventions that influence animal-product consumption.
Ah yes, my apologies, I meant natural experiments (or in the case of Croke 2019, a natural experiment caused by an actual experiment).
I suppose it is possible deworming would have a much smaller effect when children also receive these other interventions. However, I would’ve thought many children currently being treated for worms are also receiving such interventions, which would make this interaction decision-relevant for GiveWell-funded deworming programs?
Thanks for this post, the critique of GiveDirectly seems particularly compelling and important.
On the issue of effects on males vs females, were you able to look into whether they may have converged towards more homogeneous effects over time? It seems most of the eradication campaigns studied in the papers listed happened in the 1950s—I would suspect labour market opportunities are significantly stronger for women today, though I haven’t looked at the data or whether this is true for the low-income countries where GiveWell’s malaria charities do their work. Lucas 2010 also finds quite large educational effects on females (with no males included in the study), while Barreca 2010 finds no significant differences in economic effects between males and females.
Note that if observational (i.e. non-experimental) studies are being included, one would probably also want to consider Croke 2019, which shows null effects on literacy and numeracy.
There is also Makamu et al. 2018, but I don’t think the natural experiment is very plausible (they use variation in which regions had deworming campaigns, but this is likely to be correlated with other policies/economic factors).
This is awesome, go Aussies!
Hi Holly, we’re not aware of how toxic ammonia is for other aquatic life. We believe it is always toxic, but some species may be more tolerant than others. Fish Welfare Initiative notes here that ammonia from mariculture farms may threaten aquatic life by contributing to harmful algal blooms.
Hi MHR, thank you very much for your questions, these are important considerations!
1. We certainly aim to consider the long-term effects on the total number of shrimps farmed when designing our interventions. Though we have not yet had an opportunity to precisely model the net effect, we expect a full analysis would need to account for (see the illustrative sketch after this list):
The reduction in mortality due to improved shrimp health
The opportunity for farmers to produce larger shrimps (and hence fewer individuals) due to improved health
The long-term impacts of profitability on shrimp production
The supply and demand effects of a change in shrimp production
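As a purely illustrative sketch (this is not a model we have built, and every number below is a hypothetical placeholder), the kind of accounting we have in mind might look something like this:

```python
# Purely illustrative sketch: rough decomposition of the net effect of a welfare
# intervention on the number of individual shrimps farmed.
# All parameter values are hypothetical placeholders, not estimates.

baseline_production_kg = 10_000        # farm output before the intervention (hypothetical)
baseline_mean_weight_kg = 0.02         # average harvest weight per shrimp (hypothetical)
baseline_mortality = 0.30              # share of stocked shrimps dying before harvest (hypothetical)

# Hypothetical intervention effects
mortality_reduction = 0.05             # improved health lowers pre-harvest mortality
weight_increase = 0.10                 # healthier shrimps can be grown larger (fewer individuals per kg)
production_change_from_profit = 0.05   # higher profitability may expand output over time
supply_demand_passthrough = 0.5        # share of that expansion surviving market equilibrium effects

def shrimps_stocked(production_kg, mean_weight_kg, mortality):
    """Individuals that must be stocked to deliver a given harvested tonnage."""
    harvested_individuals = production_kg / mean_weight_kg
    return harvested_individuals / (1 - mortality)

before = shrimps_stocked(baseline_production_kg, baseline_mean_weight_kg, baseline_mortality)

after = shrimps_stocked(
    baseline_production_kg * (1 + production_change_from_profit * supply_demand_passthrough),
    baseline_mean_weight_kg * (1 + weight_increase),
    baseline_mortality - mortality_reduction,
)

print(f"Change in individuals farmed: {after - before:+,.0f} ({after / before - 1:+.1%})")
```

In practice each of these parameters is deeply uncertain and would need to be estimated empirically.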
This uncertainty is one reason why we see our supply-side work (i.e. work with farmers) as the first part of a long-term strategy to improve shrimp welfare, which may include other mechanisms such as corporate outreach and legislative change. In the short term, our supply-side work aims to provide a proof of concept that shrimps can be farmed at higher welfare.
As a short-term policy to partially mitigate the risk of more shrimps being farmed, our Asks require farmers not to increase their stocking densities above their baseline.
2. We have not yet investigated these issues in detail, but we understand there may be a couple of ways to implement EE without affecting harvesting. One option is to remove EE structures prior to harvest. Alternatively, EE could even mean having a slightly shallower area of the pond, which should not affect harvesting. In general, however, the impact of EE on harvesting will likely depend on the type of EE and the type of harvesting method (e.g. nets or mechanical systems).
In terms of disease, one thing to note is that ponds require an ecologically balanced system, so the intention is typically not to completely disinfect the pond. Depending on the disease issue, implementing biosecurity protocols such as treating the pond with lime could also mitigate disease risk. That said, it’s possible substrates could serve as vectors for disease transmission, particularly if biosecurity standards are not met.
Thanks for this thoughtful post Carolina! I would second Karthik’s note here—I think there have also been a few other GE studies which show contradictory results, so it’s not clear that the spillover effects would be positive once inflation and exchange rate effects are taken into account. Others have also raised concerns about possible negative psychological spillovers, though from memory I think GiveDirectly typically provides cash to everyone in a village, which may mitigate this issue.
Thanks for writing this up Joseph, these are really valuable questions to raise. I’d be particularly excited to see someone do a systematic review of spillovers on the control group after developmental interventions.
Interesting, thanks both!
Hi everyone,
In this recent critique of EA, Erik Hoel claims that EA is sympathetic towards letting AGI develop because of the potential for billions of happy AIs (~35 mins). He claims that this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation).
Is this true, or is it a misrepresentation of why EA funding goes towards alignment? For example, perhaps it is because EAs think AGI is inevitable or it is too difficult to delay/prevent?
Thanks very much!
Lucas
I don’t know too much about this topic, but this might provide some useful resources? :) This is a really important topic, so hopefully someone in the community will be able to review the research at some point!
Thanks for this really well-written post, I particularly like how you clarified the different connotations of longtermism and also the summary table of cost-effectiveness.
I think one thing to note is that an X-risk event would not only wipe out humans, but also the billions of factory-farmed animals. Taking animal suffering into account would dramatically worsen the cost-effectiveness of X-risk reduction from a neartermist point of view. I think this implies longtermism is necessary to justify working on X-risk (at least until factory farming is phased out).
While I don’t necessarily agree with Matty’s view that total utilitarianism is wrong, I think this comment highlights a key distinction between a) improving the lives of future people and b) bringing lives into existence.
The examples in this post are really useful for showing that future people matter, but they don’t show that we should bring people into existence. For example, if future people were going to live unhappy lives, it would still be good to do things that prevent their lives from being worse (e.g. improve education, prevent climate change, pick up glass), but this doesn’t necessarily imply we should try to bring those unhappy people into existence (which may have been Josh’s concern, if I understand correctly).
Sorry I’m a bit late to the party on this, but thanks for the well-researched and well thought-out post.
My two cents, as this line caught my eye:
Notably, working on these issues can often improve the lives of people living today (e.g. working towards safe advanced AI includes addressing already present issues, like racial or gender bias in today’s systems).
This line of reasoning concerns me. If working on racial/gender bias in AI were one of the most cost-effective ways to make people happier or save lives, then the argument would hold, but I doubt this is the case.
Rather, if the arguments for working on AI as an X-risk aren’t convincing on their own, that seems like reason enough to reconsider whether we want to work on AI at all.
Alternatively, the racial/gender bias angle could be used more for optics, rather than truly being the rationale behind working on AI. While it’s possible this would bring more people on board, there are risks associated with hiding what you really think (see section “Longtermism vs X-risk” of this podcast for discussion on the issue—Will MacAskill notes “I think it’s really important to convey what you believe and why”).
Such an amazing talk, well done!! :)
This is an awesome and beautifully written post, thanks James!