Animal welfare is far more effective per $ than Global Health.
Edit:
How about “The marginal $100 mn on animal welfare has 10x the impact of the marginal $100 mn on Global Health”?
I think this is a good topic, but including the word “far” kind of ruins the debate from the start: it seems like the person positing it has already made up their mind, and it introduces unnecessary bias.
Ya, we could just use a more neutral framing: Is animal welfare or global health more cost-effective?
What do you think is the 50/50 point, where half of people believe more and half less?
Not sure.
We could replace the agree/disagree slider with a cost-effectiveness ratio slider.
One issue could be that animal welfare has faster-diminishing returns than GHD.
Maybe, but let’s not overcomplicate things.
Late to this conversation, but I like the debate idea. A simple way to get a cost-effectiveness slider might be just to have the statement be “On the current margin $100m should go to:” and the slider go from 100% animal welfare to 100% global health, with a mid-point being 50/50.
Sure, then quantify it, right?
Sure, but 10x seems a weird place to start; surely start with “more cost-effective” before applying arbitrary multipliers...
1x is an arbitrary multiplier too.
I would want to put the number at the 50th percentile belief on the forum.
Why just compare to Global Health here? Surely it should be “Animal Welfare is far more effective per $ than other cause areas”?
I think they are natural to compare because they both have interventions that cash out in short-term measurable outcomes, and can absorb a lot of funding to churn out these outcomes.
Comparing e.g. AI safety and Global Health brings in a lot more points of contention, which I expect would make it harder to make progress in a narrowly scoped debate (in terms of pinning down what the cruxes are, actually changing people’s minds, etc.).
I think I’d rather talk about the important topic even if it’s harder? My concern is, for example, that the debate happens and, say, people agree and start to push for moving $ from GHD to AW. But this ignores a third option: move $ from ‘longtermist’ work to fund both.
Feels like a ‘looking under the streetlight because it’s easier’ kind of phenomenon.
If Longtermist/AI Safety work can’t even begin to cash out measurable outcomes, that should be a strong case against it. This is EA; we want the things we’re funding to be effective.
Thanks for suggesting that, Nathan! For context:
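I arrived at a cost-effectiveness of corporate campaigns for chicken welfare of 15.0 DALY/$ (= 8.20*2.10*0.870), assuming:
Campaigns affect 8.20 chicken-years per $ (= 41*1/5), multiplying:
Saulius Šimčikas’ estimate of 41 chicken-years per $.
An adjustment factor of 1/5, since OP [Open Philanthropy] thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis [which is linked just above]”.
An improvement in chicken welfare per time of 2.10 times the intensity of the mean human experience, as I estimated for moving broilers from a conventional to a reformed scenario based on Rethink Priorities’ median welfare range for chickens of 0.332[6].
A ratio between humans’ healthy and total life expectancy at birth in 2016 of 87.0% (= 63.1/72.5).
In light of the above, corporate campaigns for chicken welfare are 1.51k (= 15.0/0.00994) times as cost-effective as TCF [GiveWell’s Top Charities Fund].
For anyone who wants to check the arithmetic, here is a minimal Python sketch reproducing the figures above (every number is taken from the estimate; only the variable names are mine):

```python
# Reproduce the cost-effectiveness arithmetic above.
chicken_years_per_dollar = 41 * (1 / 5)  # Saulius' 41 chicken-years/$, times OP's ~1/5 marginal adjustment
welfare_improvement = 2.10               # welfare improvement per time, relative to the mean human experience
healthy_life_ratio = 63.1 / 72.5         # humans' healthy / total life expectancy at birth in 2016 (~87.0%)

campaigns = chicken_years_per_dollar * welfare_improvement * healthy_life_ratio
print(f"Corporate campaigns: {campaigns:.1f} DALY/$")  # ~15.0

tcf = 0.00994  # GiveWell's Top Charities Fund, DALY/$
print(f"Campaigns vs. TCF: {campaigns / tcf:.0f}x")    # ~1508, i.e. ~1.51k
```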
Does this basically just reflect how much people value human lives in relation to animal lives? If Alex values a chicken WALY at 0.00002 that of a human WALY, and Bob values a chicken WALY at 0.5 that of a human WALY, then global health either is or isn’t more effective depending on whose weights you use.
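A back-of-the-envelope extension of the sketch above (my own illustration, not a claim from the thread): if the headline cost-effectiveness scales linearly with the chicken-to-human welfare weight, one can back out the breakeven weight at which the two causes tie under the numbers quoted above:

```python
# Hypothetical back-of-the-envelope, assuming the estimate above scales
# linearly with the assumed chicken-to-human welfare weight.
welfare_range = 0.332  # Rethink Priorities' median welfare range for chickens (used above)
advantage = 1510       # campaigns vs. TCF under that weight, from the estimate above

breakeven = welfare_range / advantage
print(f"Breakeven chicken-to-human weight: ~{breakeven:.5f}")  # ~0.00022

# On these numbers, Alex (0.00002 < 0.00022) would find global health more
# effective, while Bob (0.5 > 0.00022) would find animal welfare more effective.
```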
I would like a discussion week once a month-ish.
I think we could give that a go, but it might make sense to have a vote after three months about whether it was too much.
I’d like them to be regular, but a little bit less frequent. Maybe once every two months? Once every six weeks?
How can we best find new EA donors?
I have a lot of respect for OP, but I think it’s clear that we could really use a larger funding base. My guess is that there should be a lot more thinking here.
This is a great one
Should Global Health comprise more than 15% of EA funding?
Hi Nathan,
I wonder whether it may be better to frame the discussion around personal donations. Open Philanthropy accounts for the vast majority of what I guess you are calling EA funding, and my impression is that they are not very amenable to changing the allocation across their 3 major areas (global catastrophic risks, farmed animal welfare, and human global health and wellbeing) based on EA Forum discussions.
Feels like maybe this should be a broader discussion about how much EA should focus on longtermist vs. neartermist interventions.
Where do we want EA to be in ~20 years?
I’d like there to be more envisioning of what sorts of cultures, strengths, and community we want to aim for. I think there’s not much attention here now.
AI Safety Advocates have been responsible for over half of the leading AI companies. We don’t take that seriously enough.
Who, if anyone, should be leaders within Effective Altruism?
I think that OP often actively doesn’t want much responsibility. CEA is the more obvious fit, but they often can only do so much, and they arguably represent OP’s interests much more than those of EA community members (just look at where their funding comes from, or the fact that there’s no way for EA community members to vote on their board).
I think that there’s a clear responsibility gap, and I’d like to see more understanding here, along with, ideally, plans for how things can improve.
Epistemics/forecasting should be an EA cause area
I’d like a debate week once every 2 months-ish.
Worldview diversity isn’t a coherent concept and mainly exists to manage internal OpenPhil conflict.
Seems needlessly provocative as a title, and almost purposefully designed to generate more heat than light in the resulting discussion.
Decision making is a personal favorite cause area of mine and I’d like to see a lot more discussion around it than there is right now, especially because it seems to hold immense potential.
Sensemaking of AI governance: what do people think is most promising, and what are their cruxes?
Besides posts, I would like to see some kind of survey that quantifies and graphs people’s beliefs.
I really liked the discussion week on PauseAI. I’d like to see another one on this topic, taking up the new developments in reasons and evidence.
When?
Probably there are other topics that didn’t have a week, so they should be prioritized. I think PauseAI is one of the most important topics. So, maybe in the next 3–9 months?
While existential risks are widely acknowledged as an important cause area, some EAs, like William MacAskill, have argued that “Trajectory Change” may be highly contingent even if x-risk is solved, and so may be just as important for the long-term future. I would like to see this debated as a cause area.
Wild animal welfare and longtermist animal welfare versus farmed animal welfare?
Non-consequentialist effective altruism/animal welfare/cause prio/longtermism
We still have not had satisfactory answers for why the FTX Future Fund was sending cheques via strange bank accounts.
Definitely not worth spending a whole week debating vs. someone just writing a post if they feel strongly that this hasn’t been sufficiently discussed.
My quick guess is that the answer is pretty simple and boring. Like, “things were just a mess on the future fund level, and they were expecting things to get better over time.” I’d expect that there are like 5 people who really know the answer, and speculation by the rest of us won’t help much.