I feel like 5% of EA-directed funding is a high bar to clear to agree with the statement "AI welfare should be an EA priority". I would have pitched for maybe 1% or 2% as the "priority" bar, which would still be 10 million dollars a year even under quite conservative assumptions as to what would be considered unrestricted EA funding.
This would mean that across all domains (X-risk, animal welfare, GHD) a theoretical maximum of 20 causes, more realistically maybe 5-15 causes (assuming some causes warrant 10-30% of funding), would be considered EA priorities. 80,000 Hours doesn't have AI welfare in their top 8 causes but it is in their top 16, so I doubt it would clear the "5%" bar, even though they list it under their "Similarly pressing but less developed areas", which feels priority-ish to me (perhaps they could share their perspective?).
It could also depend on how broadly we characterise causes. Is "Global health and development" one cause, or are mosquito nets, deworming and cash transfers all their own causes? I would suspect the latter.
Many people could therefore consider AI welfare an important cause area but disagree with the debate statement because they don't think it warrants a full 5%+ of EA funding, despite its importance.
Or I could be wrong and many could consider 5% a reasonable or even low bar. It's clearly a subjective question and not the biggest deal, but hey :D.
In ordinary language, I wouldn’t generally consider something that gets 1% of resources to be a “priority.” Applying your reasoning above, that would create a theoretical maximum of 100 “priorities” and a more realistic range of perhaps 10-40. As we move beyond the low teens, the idea of a “priority” gets pretty watered down in my book.
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB), and perhaps even updated us towards its potential value. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.
(High certainty) HLI have openly published their research and ideas, posted almost everything on the forum and engaged deeply with criticism, which is amazing—more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
(High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (for charity I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI's funding is less than that of many orgs that have not been scrutinised as much.
(Low certainty) The degree of scrutiny and analysis applied to some development orgs like HLI seems to exceed that applied to AI orgs, funding orgs and community-building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic; I just wish it could be applied to other orgs as well. Very few EA orgs (at least of those that have posted on the forum) produce full papers with publishable-level deep statistical analysis, as HLI have at least attempted to do. Does there need to be a "scrutiny rebalancing" of sorts? I would rather other orgs got more scrutiny, rather than development orgs getting less.
Other orgs might see threads like the HLI funding thread hammering and compare them with other threads where orgs are criticised, don't engage, and the thread falls off the frontpage. Orgs might reasonably decide that high degrees of transparency and engagement do them net harm rather than good. This might not be good for anyone.
Do you agree/disagree? And what could we do to make the situation better?
I think it's fairest to compare HLI's charity analysis with other charity evaluators like GiveWell, ACE, and Giving Green.
Giving Green has been criticised regularly and robustly (just look up any of their posts). GiveWell publish their analysis and engage with criticism; HLI themselves have actually criticised them pretty robustly! I don't know about ACE because I don't stay up to date on animals, but I bet it's similar there.
The dynamics are quite different in, for example, charitable foundations, which don't need to convince anyone to donate differently, or charities that deliver a service, which only need to convince their funders to continue donating.
Thanks Kristen for this clear and concise reply. This comparison with the experience of other charity evaluators has shifted my opinion on this somewhat, nice one.
It seems a bit of a pity that they should receive significantly more scrutiny than charities or foundations, though. In an ideal world everyone would be transparent and heavily scrutinised, but it does make sense that the incentives might not be there for other orgs...
HLI fucked up their analysis, but because it was public we found out about it. Most EAs are too fearful to expose their work to scrutiny. Compare them to others who work on mental health within EA...
Most coaches and therapists in EA don’t do any rigorous testing of whether what they are doing actually works. They don’t even allow you to leave public reviews for them. I think we’re the only organisation to even have a TrustPilot!!!
I don’t think the problem is that HLI got too much hate for fucking up, it’s that everyone else gets too little hate for being opaque.
Now HLI have been dragged through the mud, you can bet your ass they won’t be making the same mistakes again. So long as they keep being transparent, they’ll keep learning and growing as an org. Others will keep making the same mistakes indefinitely, only we’ll never know about it and will continue blindly trusting them.
I agree that more orgs should get this kind of scrutiny. I agree that we are likely to blindly trust orgs that don't transparently discuss their inner workings, which is super sad.
Interesting reflection on mental health providers too, but that's not a world I know!
This argument I struggle with...
“I don’t think the problem is that HLI got too much hate for fucking up, it’s that everyone else gets too little hate for being opaque”
I realize you are probably being a bit tongue in cheek, but I think we could criticise and discuss while being more encouraging and positive. We are all human, too, and I'm not sure piling on the "hate" will necessarily lead to improvements in epistemics and rigorous analysis.
"Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair."
One general problem with online discourse is that even if each individual makes a fair critique, the net effect of a lot of people doing this can be disproportionate, since there’s a coordination problem. That said, a few things make me think the level of criticism leveled at HLI was reasonable, namely:
HLI was asking for a lot of money ($200k-$1 million).
The critiques people were making seemed (generally) unique, specific, and fair.
The critiques came after some initial positive responses to the post, including responses to the effect of “I’m persuaded by this; how can I donate?”
"Does there need to be a 'scrutiny rebalancing' of sorts? I would rather other orgs got more scrutiny, rather than development orgs getting less."
I agree with you that GHD organizations tend to be scrutinized more closely, in large part because there is more data to scrutinize. But there is also some logic to balancing scrutiny levels within cause areas. When HLI solicits donations via Forum post, it seems reasonable to assume that donations they receive more likely come out of GiveWell’s coffers than MIRI’s. This seems like an argument for holding HLI to the GiveWell standard of scrutiny, rather than the MIRI standard (at least in this case).
That said, I do think it would be good to apply stricter standards of scrutiny to other EA organizations, without those organizations explicitly opening themselves up to evaluation by posting on the Forum. I wonder if there might be some way to incentivize this kind of review.
"When HLI solicits donations via Forum post, it seems reasonable to assume that donations they receive more likely come out of GiveWell's coffers than MIRI's. This seems like an argument for holding HLI to the GiveWell standard of scrutiny, rather than the MIRI standard (at least in this case)."
I am concerned that rationale would unduly entrench established players and stifle innovation. Young orgs on a shoestring budget aren’t going to be able to withstand 2023 GiveWell-level scrutiny . . . and neither could GiveWell at the young-org stage of development.
Yeah, I should've probably been more precise: the criticism of HLI has mainly been leveled against their evaluation of a single organization's single intervention, whereas GW has evaluated 100+ programs, so my gut instinct is that it's fair to hold HLI's StrongMinds evaluation to the same ballpark level of scrutiny we'd hold a single GW evaluation to (and deworming certainly has been held to that standard). It might be unfair to expect an HLI evaluation to be at the same level as a GW evaluation per dollar invested/hour spent (given that there's a learning curve associated with doing such evaluations and there's value associated with having multiple organizations do them), but this seems like—if anything—an argument for scrutinizing HLI's work more closely, since HLI is trying to climb a learning curve, and feedback facilitates this.
I think another factor is that HLI’s analysis is not just below the level of Givewell, but below a more basic standard. If HLI had performed at this basic standard, but below Givewell, I think strong criticism would have been unreasonable, as they are still a young and small org with plenty of room to grow. But as it stands the deficiencies are substantial, and a major rethink doesn’t appear to be forthcoming, despite being warranted.
Probably a stupid question (I've probably just missed it), but can someone point me to where GiveWell do a meta-analysis or an analysis of similar depth to this HLI one? I can't seem to find it and I would be keen to do a quick comparison myself.
I’m not aware of a GW analysis quite like this one, although I didn’t go back and look at all its prior work.
In a situation like this, where GiveWell was considering StrongMinds as a top charity recommendation, it’s almost certain that it would have first funded a bespoke RCT designed to address key questions for which the available literature was mixed or inconclusive. HLI doesn’t have that luxury, of course. Moreover, what HLI is trying to measure is significantly harder to tease out than “how well do bednets work at saving lives” and similar questions.
I think those are relevant considerations that make comparing HLI’s work to the “GiveWell standard” inappropriate. However, to acknowledge Ben’s point, HLI’s critics are alleging that the stuff that was missed was pretty obvious and that HLI hasn’t responded appropriately when the missed stuff was pointed out. I lack the technical background and expertise to fully evaluate those claims.
Which GiveWell evaluation(s) though? The ones on that spreadsheet range from the evaluations used to justify Top Charity status to decisions to deprioritize a potential program after a shallow review. Two deworming charities were until recently GiveWell Top Charities, and I believe Open Phil still makes significant grants to them (presumably in reliance on GiveWell’s work).
In this post, HLI explicitly compares its evaluation of StrongMinds to GiveWell’s evaluation of AMF, and says:
“At one end, AMF is 1.3x better than StrongMinds. At the other, StrongMinds is 12x better than AMF. Ultimately, AMF is less cost-effective than StrongMinds under almost all assumptions.
Our general recommendation to donors is StrongMinds.”
This seems like an argument for scrutinizing HLI’s evaluation of StrongMinds just as closely as we’d scrutinize GiveWell’s evaluation of AMF (i.e., closely). I apologize for the trite analogy, but: if every year Bob’s blueberry pie wins the prize for best pie at the state fair, and this year Jim, a newcomer, is claiming that his blueberry pie is better than Bob’s, this isn’t an argument for employing a more lax standard of judging for Jim’s pie. Nor do I see how concluding that Jim’s pie isn’t the best pie this year—but here’s a lot of feedback on how Jim can improve his pie for next year—undermines Jim’s ability to win pie competitions going forward.
This isn’t to say that we should expect the claims in HLI’s evaluation to be backed by the same level of evidence as GiveWell’s, but we should be able to take a hard look at HLI’s report and determine that the strong claims made on its basis are (somewhat) justified.
Yes, agree that the language re: AMF justifies a higher level of scrutiny than would be warranted in its absence. Also, the AMF-related claim makes even moderate changes in the CEA's bottom line more material than if the claims had been limited to stuff like: SM is more cost-effective than other predominantly life-enhancing charities like GiveDirectly.
My read is it wasn't the statistics they got hammered on, but misrepresenting other people's views of them as endorsements, e.g. James Snowden's views. I will also say the AI side does get this kind of criticism, though not on cost-effectiveness but on things like the culture war (AI Ethics vs. AI Safety) and dooming about techniques (e.g. working in a big company vs. a more EA-aligned research group, and the RLHF discourse).
Yes, in that post the misrepresentation was part of the criticism they received (which they engaged with and at least partially corrected, which is impressive), but I think the statistical analysis bore the heaviest overall criticism in that post, and in other earlier posts.
“Fair” and “unfair” are tricky words to nail down.
I think there are a wide range of factors that explain why HLI has been treated differently than other orgs -- some “fair” under most definitions of the word, some less so. Some of those reasons are adjacent to questions of funding and influence, but I’m not sure they provide much room to criticize HLI’s critics.
1. HLI is running in a lane—global health/development/wellbeing—where the evidentiary standards are much higher than in longtermist areas. Part of this is the nature of the work; asking a biosecurity program how many pandemics it has prevented is not workable. Part of it is that there is a very well-funded organization doing CEAs that the consensus views as high-quality. Yet another aspect is that GHDW work has been much more limited by funding constraints, which has incentivized GHDW funders to adopt higher standards.
I think people generally need to be kinder to smaller-scale, early-stage efforts . . . but see point 3 below.
2. HLI is a charity recommender, a significant portion of whose focus currently involves making recommendations to ordinary people (not megadonors, foundations, etc.). I do think the level of scrutiny should ordinarily be higher for charity recommenders, especially those making recommendations to the general public. The purpose of a charity recommender is to evaluate the relative merits of various charities, and for ordinary donors their recommendations may be seen as near-authoritative. A sense that the community needs to carefully scrutinize the recommender's work destroys much of a recommender's value proposition in the first place. And while it's not very utilitarian of me, I do feel more protective of small donors who don't have an in-house staff to pick up on a recommender's mistakes.
3. I think an overconfident marketing campaign in 2022 did play a major role in how much grace people are willing to extend on the CEA. I haven't been around that long, but this does seem to significantly distinguish HLI from other orgs. I believe that HLI has expressed regret for certain statements, but a framework that compares statements made at that time (that have not been clearly and explicitly retracted) to what the data actually support strikes me as on the "fair" side of the ledger.
4. This was HLI's first major recommendation; people would be less prone to draw negative inferences about (e.g.) an org whose first four analyses/recommendations were fine but whose fifth had some significant issues.
5. StrongMinds spends (and could potentially fundraise) enough money to make a significant dive into its cost-effectiveness worthwhile for critics, but probably not so much as to justify an airtight multi-million dollar workup (including by commissioning our own studies to fill any major holes in the data that would have a big effect on the CEA). So it's an awkward-size program to evaluate.
6. Pretty much all skeptical analysis is done by volunteers on their own time, and so the volume/quality of that work will heavily depend on who is interested in and available to do it. It's plausible to me that having a controversial and/or novel framework could motivate more critics to volunteer for duty.
7. There could also be a snowball effect; the detection of one significant weakness in a CEA may motivate others to start looking.
8. HLI asked Forum users to contribute money. Although I take a wide stance on "standing" to criticize organizations, one could reasonably characterize asking users for action as opening the door to some extent. Having an active fundraising ask may also provide a more concrete payoff/impact for criticism, by preventing users from taking an action the critic found undesirable.
9. HLI has been unusually transparent with data and responsive to criticism, which has made such criticism easier and sustained it for longer. I think you're right to be concerned about the ferocity of criticism disincentivizing transparency and openness on the margin.
10. The barriers to criticizing HLI are much lower. Because HLI has little power, no one is concerned about blowback. Compare that to the recent Omega criticisms of AI labs, which were posted pseudonymously and which had to rely on undisclosed data. Criticism from established community members who sign their work and can show their work carries more weight, and there's a disincentive to writing anonymous criticism (you'll never get any credit for it).
Several of these points are at least adjacent to questions of funding and power, and they cumulatively make me feel at least somewhat uncomfortable, e.g.:
It’s unlikely an organization with more secure funding would have made a fundraising appeal at this time. Rather, it likely would have laid low until it had produced a new CEA for SM and until more time had passed since the prior harsh posts.
HLI may have felt pressure to be more transparent and responsive than a more established org would. It's unlikely HLI would have been taken seriously if it didn't show its receipts, and it doesn't have the power/prestige needed for a "no real comment" approach to criticism to have a good shot at working.
That being said, I find it challenging to assign much fault for those factors to the Forum user community. For example, in point 10, the unfairness is not that HLI is being criticized by named users who have built up a reputation, but that the criticism of other orgs is disincentivized and pseudonymous.
I think you’re right that the response to HLI may discourage transparency and responsiveness on the margin, and that this is a problem. As a practical matter, I think there are two factors that mitigate this to some extent. One is that I think the criticism of HLI reflects a convergence of a number of factors as listed above, and I’m not sure how much marginal effect comes from their good transparency and responsiveness. Second, I think any startup org trying to pursue HLI-like goals has to be transparent and responsive to get a hearing from the community, so I think it less likely that knowledge of current events will change another org’s stance to a materially less open and responsive one.
I’m undecided on the net effect of all of this. My hope is that it will ultimately result in adoption of better epistemic safeguards and communications management—both at HLI and elsewhere in the ecosystem. (Cf. my recent post on the HLI thread). That would be a good result, although I’d still wish we had gotten there with a lot less rancor.
Quite right. Far too much scrutiny was applied to HLI. Five-thousand-word autistic debunkings, though highly entertaining to read and no doubt equally entertaining to their authors, should not have been necessary. Any reasonable model of how the world works would perhaps not quite rule the idea of group therapy in poor countries out of court, but would require an incredibly high standard of evidence to even begin discussing it somewhat politely.
On the subject of scrutinizing other orgs, I note that some hardworking but anonymous EAs have done their best to scrutinize EA's various AI research orgs, but of course this is a much more specialized endeavour requiring deeper expertise, and is also entirely pointless because OpenPhil will probably fund them anyway.
We’ve banned Sol3:2 for 3 weeks. This comment is uncivil and was reported multiple times. Other comments have been reported in the past for similar reasons.
I want to note that criticism can be extremely valuable, and we have a slightly higher bar for taking mod action against criticism. But referring to analyses of HLI’s work as “autistic” clearly violates core Forum norms and is above that bar. I think it’s possible to outline strong disagreements while still following our norms, and we’d want to see this from Sol3:2 in the future.
If Sol3:2 thinks that this is not right, they can appeal.
I'm a little confused as to why we consider the leaders of AI companies (Altman, Hassabis, Amodei etc.) to be "thought leaders" in the field of AI safety in particular. Their job description is to grow the company and increase shareholder value, so their public personas and statements have to reflect that. Surely they are far too compromised for their opinions to be taken too seriously; they couldn't make strong statements against AI growth and development even if they wanted to, because of their job and position.
The recent post "Sam Altman's chip ambitions undercut OpenAI's safety strategy" seems correct and important, while also almost absurdly obvious—the guy is trying to grow his company and they need more and better chips. We don't seriously listen to big tobacco CEOs about the dangers of smoking, or oil CEOs about the dangers of climate change, or factory farming CEOs about animal suffering, so why do we seem to take the opinions of AI bosses about safety in even moderately good faith? The past is often the best predictor of the future, and the past here says that CEOs will grow their companies, while trying however possible to maintain public goodwill so as to minimise the backlash.
I agree that these CEOs could be considered thought leaders in AI in general and in the future and potential of AI, and their statements about safety and the future are of critical practical importance and should be engaged with seriously. But I don't really see the point of engaging with them as thought leaders in the AI safety discussion; it would make more sense to me to engage with intellectuals and commentators who can fully and transparently share their views without crippling levels of compromise.
I'm interested to hear arguments in favour of taking their thoughts more seriously, though.
Who is considering Altman and Hassabis thought leaders in AI safety? I wouldn’t even consider Altman a thought leader in AI—his extraordinary skill seems mostly social and organizational.
There's maybe an argument for Amodei, as Anthropic is currently the only one of the companies whose commitment to safety over scaling is at least reasonably plausible.
Thanks that’s a helpful perspective and I would be happy if it was true that they weren’t considered AI safety thought leaders. I do feel like they are often seen this way though in the public sphere, and sometimes here on the forum too.
I realize that my question sounded rhetorical, but I'm actually interested in your sources or reasons for your impression. I certainly don't have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven't encountered the position you're concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read into some answers, but overall I don't get the impression that the AI CEOs are seen as big safety proponents.
I think "thought leader" sometimes means "has thoughts at the leading edge" and sometimes means "leads the thoughts of the herd on a subject", and that there is sometimes a deliberate ambiguity between the two.
The value of re-directing non-EA funding to EA orgs might still be under-appreciated. While we (rightly) obsess over where EA funding should be going, shifting money from one EA cause to another "better" one might often only make an incremental difference, while moving money from a non-EA pool to fund cost-effective interventions might make an order-of-magnitude difference.
There's nothing new to see here. High-impact foundations are being cultivated to shift donor funding to effective causes, the "Center for Effective Aid Policy" was set up (then shut down) to shift government money to more effective causes, and many great EAs work in public service jobs partly to redirect money. The Lead Exposure Action Fund spearheaded by OpenPhil is hopefully re-directing millions to a fantastic cause as we speak.
I would love to see an analysis (I might have missed it) which estimates the "cost-effectiveness" of redirecting a dollar into a 10x or 100x more cost-effective intervention. How much money/time would it be worth spending to redirect money this way? Also, I'd like to get my head around how much the working "cost-effectiveness" of an org might improve if its budget shifted from 10% non-EA funding to 90% non-EA funding.
There are obviously costs to roping in non-EA funding. From my own experience it often takes a huge amount of time and energy. One thing I've appreciated about my 2 attempts applying for EA-adjacent funding is just how straightforward it has been – probably an order of magnitude less work than other applications.
Here are a few practical ideas for how we could further redirect funds:
EA orgs could put more effort into helping each other access non-EA money. This is already happening through the AIM cluster, but I feel the scope could be widened to other orgs, and co-ordination could be improved a lot without too much effort. I'm sure pools of money are getting missed all the time. For example, I sure hope we're doing whatever we can through our networks to help EA gender-based violence orgs / family planning orgs get hold of some of this 250 million dollars from Melinda French Gates.
When assessing cost-effectiveness of new interventions and charities (especially global health), I think potential to access non-EA future funding could be taken into account. If a new charity has a relatively smooth path to millions of dollars of external funding, should our cost-effectiveness bar be lower? Again this might well be happening already.
We might have a blind spot, missing cause areas where cost-effectiveness might initially look sub-optimal but where huge available non-EA money-pools might shift the calculus. One example is climate mitigation, where billions of dollars slosh around, wasted on ineffective interventions. Many "mitigation activities" I see here in northern Uganda might as well be burning money (in a carbon-neutral way, of course). GiveDirectly have made a great play here, re-directing millions of climate mitigation funds to cash transfers. Could other "climate mitigation orgs" be set up to utilise this money better, even if the end point of the money wasn't strictly climate-related?
I would imagine far smarter people have thought about this far more deeply, but there might still be room for more exploration and awareness here.
The CE of redirecting money is simply (dollars raised per dollar spent) * (difference in CE between your use of the money vs counterfactual use). So if GD raises $10 from climate mitigation for every $1 it spends, and that money would have otherwise been neutral, then that's a cost-effectiveness of 10x in GiveWell units.
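A minimal sketch of that calculation in Python, using purely illustrative numbers (the 10:1 fundraising ratio and the roughly-neutral counterfactual are hypothetical, echoing the example above rather than real GiveDirectly or GiveWell figures):

```python
def leverage_cost_effectiveness(raised_per_dollar_spent: float,
                                ce_of_our_use: float,
                                ce_of_counterfactual_use: float) -> float:
    """Cost-effectiveness of redirecting money.

    (dollars raised per dollar spent) * (difference in cost-effectiveness
    between our use of the money and its counterfactual use),
    expressed in the same units as the two CE inputs (e.g. GiveWell units).
    """
    return raised_per_dollar_spent * (ce_of_our_use - ce_of_counterfactual_use)

# Illustrative only: $10 redirected per $1 spent, moved from a roughly
# neutral use (0x) to a 1x-GiveWell-units use -> 10x in GiveWell units.
print(leverage_cost_effectiveness(10, 1.0, 0.0))  # 10.0
```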
There’s nothing complicated about estimating the value of leverage. The problem is actually doing leverage. Everyone is trying to leverage everyone else. When there is money to be had, there are a bunch of organizations trying to influence how it is spent. Melinda French Gates is likely deluged with organizations trying to pitch her for money. The CEAP shutdown post you mentioned puts it perfectly:
The core thesis of our charity fell prey to the 1% fallacy. Within any country, much of the development budget is fixed and difficult to move. For example, most countries will have made binding commitments spanning several years to fund various projects and institutions. Another large chunk is going to be spent on political priorities (funding Ukraine, taking in refugees, etc.) which is also difficult for an outsider to influence.
What is left is fought over by hundreds, if not thousands of NGOs all looking for funding. I can’t think of any other government budget with as many entities fighting over as small a budget. The NGOs which survive in this space, are those which were best at getting grants. Like other industries dependent on government subsidies, they fight tooth and nail to ensure those subsidies stay put.
This doesn’t mean that leverage is impossible. It just means that leverage opportunities tend to be specific and limited. We have to take them on opportunistically, rather than making leverage a theory of impact.
I largely agree, although I don't think we're trying to leverage money that hard in some areas. I do think there needs to be some strategy for leverage, as well as a lot of opportunism as you say. Collaboration, as I mentioned, opens up opportunities as well.
Sometimes also it’s not so hard to access pools of money, for example how many orgs are trying hard to access all that climate money?
On the subject of redirecting streams of money from less impactful causes to EA causes, I feel I need to beat my drum regarding the potential of Profit for Good businesses (businesses with charities in all, or almost all, of the shareholder positions). In such cases, to the extent an EA PFG's profits displace those of normal businesses, funds are diverted from the average shareholder to an effective charity.
So when a business like Humanitix (a PFG helping projects in the developing world, with $4 million AUD to The Life You Can Save) displaces the market share of Ticketmaster, funds are diverted not from charities but from the funds of the business's competitors. This method of diversion seems less difficult because the operative actors (consumers, employees, business partners) are not deciding between a strong non-EA charity often optimized for warm fuzzies and marketing, but rather choosing between products with similar value propositions, where engaging with one—in addition to the other value proposition—implies helping fight malaria or something instead of enriching a random investor.
If you’re interested in learning more about Profit for Good, here is a reading list on the subject.
I'm intrigued to know where people stand on the threshold at which farmed animal lives might become net positive. I'm going to share a few scenarios I'm very unsure about, and I'd love to hear thoughts or be pointed towards research on this.
Animals kept in homesteads in rural Uganda where I live. Often they stay inside with the family at night, then are let out during the day to roam free around the farm or community. The animals seem pretty darn happy most of the time for what it's worth, playing and gallivanting around. Downsides here include poor veterinary care, so parasites and sickness can sometimes be pretty bad, and often pretty rough transport and slaughter methods (my intuition: net positive).
Grass-fed sheep in New Zealand, my birth country. They get good medical care, are well fed on grass and usually have large roaming areas (intuition: net positive).
Grass-fed dairy cows in New Zealand. They roam fairly freely and will have very good vet care, but have their calves taken away at birth, have constantly uncomfortably swollen udders and are milked at least twice daily (intuition: very unsure).
Free-range pigs. Similar to the above, except the space is often smaller, but they do get little houses. Pigs are far more intelligent than cows or sheep and might have more intellectual needs not being met (intuition: uncertain).
Obviously these kinds of cases make up a small proportion of farmed animals worldwide, with the predominant situation (factory farmed animals) likely involving net negative lives.
I know that animals having net positive lives comes far from justifying farming animals on its own, but it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering.
It’s really hard to judge whether a life is net positive. I’m not even sure when my own life is net positive—sometimes if I’m going through a difficult moment, as a mental exercise I ask myself, “if the rest of my life felt exactly like this, would I want to keep living?” And it’s genuinely pretty hard to tell. Sometimes it’s obvious, like right at this moment my life is definitely net positive, but when I’m feeling bad, it’s hard to say where the threshold is. If I can’t even identify the threshold for myself, I doubt I can identify it in farm animals.
If I had to guess, I’d say the threshold is something like
if the animals spend most of their time outdoors, their lives are net positive
if they spend most of their time indoors (in crowded factory farm conditions, even if “free range”), their lives are net negative
"...it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering."
To this point, I think the most important things are
whatever the threshold is, factory-farmed animals clearly don’t meet it
99% of animals people eat are factory-farmed (in spite of people’s insistence that they only eat meat from their uncle’s farm where all of the animals are treated like their own children etc)
That's really interesting about your own life. Even in the midst of my worst emotional states (emotional, not physical, pain), I would still feel I'm on the positive side of the ledger.
Yes, I agree with your 2 points; those are the most important in general.
Here in Northern Uganda, though, the majority of animals people eat (not often; many people eat meat only once or twice a month) have lives like my first example. In New Zealand almost all beef and sheep meat comes from something like options 2 and 3, so I think the question has some relevance to a decent number of people.
Just adding: the discussion of dairy cows, here and elsewhere, tends to focus on the experience of the adult cattle & the suffering for them of being milked, deprived of their babies, etc.
But it’s not implausible to me that the majority of the disvalue from dairy is in the lives of the calves born to dairy cows. In typical milk-producing operations, adult cows have 1 calf every 18 months or so; 50% of them are male, and so are killed within a few hours to a few months after birth.
(& these lives are more likely to be net negative because they have less time to experience positive things to outweigh the terror and pain of death. Undoubtedly, some of their deaths will be quite quick, but others are slow and brutal.)
(Also, veal calves are treated very badly—intense confinement to reduce movement to keep the meat tender, dietary restriction to keep the meat pale, individual confinement in a tiny ‘hutch’, etc.)
(& let’s not forget the fetal calves who are still gestating when their mothers go to slaughter. They’re killed slowly, if they ever get purposefully slaughtered at all rather than just left to asphyxiate. Obviously, it’s unclear whether they’re conscious, but I’ve read accounts of them moving, opening eyes, trying to breathe, etc.).
Thanks Bella, this has crossed my mind and definitely updates me towards dairy-farmed cows in New Zealand being more likely to be net negative. I'm not sure whether the veal thing happens in New Zealand though; I'll look into it.
Thanks for this. My view is the same as yours. The first two strike me as “net positive.” I’m also unsure about what pigs and dairy cows need. I wouldn’t be hugely surprised if they have either “net positive” or “net negative” lives, but I think it’s most likely (80%+ chance) they are “net positive.”
(Qualifying discussion of net value of existence with ” ” because I find such valuations always so fraught with uncertainty and I feel I owe other beings tremendous humility in this!)
I’m always surprised to see sheep get lumped in with cows in discussions of farmed animal welfare (ex. the SSC Adversarial Collaboration). Sure, it’s not a terrible proxy, but sheep are often freer, need to be regularly shorn to avoid overheating, and usually die of natural causes. There are definitely some practices which are awful, but sheep are quite hard to optimise in the same way we’ve done with pigs & chickens, or even cows.
However, we eat them when they’re babies so maybe it swings in the absolute other direction.
Nice point Huw, I agree. I wasn't trying to lump them together exactly; I agree it's very different. Does eating them when they are babies necessarily mean a net negative life, though, if the slaughter is humane? It does seem like a weird question...
The idea behind why eating babies is more likely to be net negative is that there’s a shorter lifespan of positive experiences to balance out the terror and pain of death.
From my experience watching lots of slaughterhouse footage and reading accounts from workers, even the best humane conditions still involve, routinely, a (shorter or longer) period in which the animal goes through the process of dying. This is probably pretty bad. If they only lived for a few weeks before that, it’s harder to imagine it’s a good deal overall.
Under some frameworks, you’d be depriving them of many years of happy life; but then again, if you didn’t kill them as children they probably would never have been born for food. Here we’d be getting too deep into the moral philosophy for me to have a confident take 😅. Interesting nonetheless.
“but it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering.”
I’m highly skeptical of this—why do you think it is important for your own moral decision making? It seems to me that whether farmed animals lives are worth living or not is irrelevant—either way we should try to improve their conditions, and the best ways of doing that seem to be: a boycott & political pressure (I would argue that the two work well together).
By analogy, no one raises the question of whether the lives of people living in extreme poverty, or working in sweatshops and so on, are worth living, because it’s simply irrelevant.
This seems relevant to any intervention premised on “it’s good to reduce the amount of net-negative lives lived.”
If factory-farmed chickens have lives that aren’t worth living, then one might support an intervention that reduces the number of factory-farmed chickens, even if it doesn’t improve the lives of any chickens that do come to exist. (It seems to me this would be the primary effect of boycotts, for instance, although I don’t know empirically how true that is.)
I agree that this is irrelevant to interventions that just seek to improve conditions for animals, rather than changing the number of animals that exist. Those seem equally good regardless of where the zero point is.
I suppose I agree with this. And I’ve been mulling over why it still seems like the wrong way to think about it to me, and I think it’s that I find it rather short-termist. In the short term if farms shut down they might be replaced with nature, with even less happy animals, it’s true. But in the long term opposing speciesism is the only way to achieve a world with happy beings. Clearly the kinds of farms @NickLaing is talking about, with lives worth living but still pretty miserable, are not optimal. Figuring out whether they are worth living or not seems only relevant to trying to reduce suffering in the short term, but not so much in the long term, because in the long term this isn’t what we want anyway.
Wanted to give a shoutout to Ajeya Cotra (from OpenPhil), for her great work explaining AI stuff on a recent Freakonomics podcast series. Her explanations about both her work on the development of AI, and her easy to understand predictions of how AI might progress from here were great, she was my favourite expert on the series.
People have been looking for more high quality public communicators to get EA/AI safety stuff out there, perhaps Ajeya could be a candidate if she’s keen?
Applying my global health knowledge to the animal welfare realm, I’m requesting 1,000,000 dollars to launch this deep net positive (Shr)Impactful charity. I’ll admit the funding opportunity is pretty marginal…
Thanks @Toby Tremlett🔹 for bringing this to life. Even though she doesn't look so happy, I can assure you this intervention nets a 30x welfare range improvement for this shrimp, so she's now basically a human.
Although, as a mostly-neartermist, I'm enamoured that the front page (for the first time in my experience) is devoid of AI content, I really would like to hear about the job experience and journey of a few AI safety/policy workers for this jobs week. The first 10-ish wonderful people who shared are almost all neartermist-focused, which probably doesn't represent the full experience of the community.
I’m genuinely interested to understand how your AI safety job works and how you wonderful people motivate yourselves on a day to day basis, when seeing clear progress and wins must be hard a lot of the time. I find it hard enough some days working in Global health!
Or maybe your work is so important, neglected and urgent that you can't spare a couple of hours to write a post ;).
I’ve been on the forum for maybe 9 months now, and I’ve been intrigued by the idea of “hits based” giving, explained well in this 2016 article by Holden Karnofsky. The idea that “we will sometimes bet on ideas that contradict conventional wisdom, contradict some expert opinion, and have little in the way of clear evidential support.”
1) Is there a database with a list of donations considered "hits based" by OpenPhil? If not, that would be a helpful and transparent way of tracking success on these. I had a quick look through their donations but it's not clear which ones are considered "hits based".
2) Are there any donations considered "hits based" since that 2016 post which have clearly turned out really well, i.e. been winners? We might have seen some of these hits-based donations resolve positively or negatively, although 6 years is a short timeline so most would remain unresolved. Is there any official or unofficial info on that?
The net net effect could be positive or negative. Let me untangle it for you.
In favour of net positivity is the net positive human lives saved through net negative, negative net effects on mosquitos and malaria causing a net positive effect on humans. The insecticides positively embed these bed net, net positive effects.
However there’s a catch in favour of net negativity—as nets are positively dragged to net fish, the netted fish suffer net negative net negativity.
While caught between the net positive net positivity to humans and net negative net negativity to fish, we are enmeshed in a net of uncertainty as we struggle to net a clear positive net answer.
Hopefully I didn’t tie you in knots as I strung you along.
Has anyone talked with/lobbied the Gates Foundation on factory farming? I was concerned to read this in Gates Notes.
"On the way back to Addis, we stopped at a poultry farm established by the Oromia government to help young people enter the poultry industry. They work there for two or three years, earn a salary and some start-up money, and then go off to start their own agriculture businesses. It was a noisy place—the farm has 20,000 chickens! But it was exciting to meet some aspiring farmers and businesspeople with big dreams."
It seems a disaster that the Gates Foundation are funding and promoting the rapid scale-up of factory farming in Africa, and reversing this seems potentially tractable to me. Could individuals, Gates insiders or the big animal rights orgs take this up?
Can we call it the Meat EatING problem?
The currently-labelled "meat eater problem" has been referred to a number of times during debate week. The Forum wiki on the "meat eater" problem summarises it like this:
“Saving human lives, and making humans more prosperous, seem to be obviously good in terms of direct effects. However, humans consume animal products, and these animal products may cause considerable animal suffering. Therefore, improving human lives may lead to negative effects that outweigh the direct positive effects.”
I think this is an important issue to discuss, although I think we should be extremely sensitive and cautious while discussing it.
On this note, I think we should re-label this the "meat eating problem", as I think there are big upsides with minimal downside.
1. Accuracy: I don't think the core problem is actually the people whose lives we are saving; it's that they then eat meat and cause suffering. I think it's important to separate the people from the core problem, as this better helps us consider possible solutions.
2. Persuasion: I think we're more able to persuade if we discuss the problem separated from the people. I can talk about the "meat eating problem" with non-EA friends and it will be hard but they might understand, but if through the very name of the issue I make the people themselves the problem, that can easily make me seem callous, and people can switch off.
3. Fairness: Even if you disagree with me on accuracy and double down that the core problem is the people, I think it's pretty unfair to lump the label of a serious philosophical problem on the poorest people on earth—people with little education who are often just trying to survive and have never had the chance to consider this issue.
It seems to me that this problem was mainly thought up and developed by the EA community (which is great), and we could probably just decide to call it something different from here on out. I’m asking the forum team to consider changing the name on the wiki as well.
NB: @JWS 🔸 proposed this name change a couple of months ago, which got me thinking about it again.
It’s true that meat eating is closer to what we actually care about, but it’s worth singling out causal pathways from saving lives and increasing incomes/wealth, as potential backfire effects. “Meat eating problem” seems likely to be understood too generally as the problem of animal consumption, without explanation. I’d prefer a more unique expression to isolate the specific causal pathways.
Some other ideas:
meat eating backfire (problem)
more meat backfire/problem
meat backfire (problem)
(more) animal product backfire (problem)
(Eggs and other animal products besides meat matter, too.)
Yep, I'm happy with any of these; I especially like "meat eating backfire" because it kind of implies we're shooting in the right direction in the first place. Also, you are right that in terms of suffering (especially here in Uganda) it's probably the eggs that might be an even bigger problem than the meat.
Of course, there are other ways meat (and other animal product) consumption could increase from well-intentioned EA interventions than just by saving lives or increasing incomes/wealth. For example, interventions that involve subsidizing animal welfare improvements can carry this backfire risk.
I'm less worried about confusion with other problems, because they don't come up as often, and researchers are more likely to account for them in animal welfare research anyway. All effects on nonhuman animals are usually omitted from analyses of interventions aimed specifically at helping humans, including GHD and GCRs. It's worth reminding people of these backfire risks.
It’s true.
I could also argue that “the meat eater problem” is just as ambiguous because it could easily be misinterpreted as just the problem that everyone all around the world eats meat in general.
I don’t think precision is necessarily the be all and end all of names ;).
I think ‘meat-eating problem’ > ‘meat-eater problem’ came in my comment and associated discussion here, but possibly somewhere else.[1]
(I still stand by the comment, and I don’t think it’s contradictory with my current vote placement on the debate week question)
When we were talking about this in 2012 we called it the “poor meat-eater problem”, which I think is clearer.
I think it is clearer yes, but I don’t really like it for my reasons 2 and 3 above, and I still think the direct problem isn’t about the people existing, but the fact that they are eating meat after their lives are “saved”. Labeling it the “poor meat eater” problem could potentially be even worse, in that it could be perceived as blaming poor people (although I know that’s not the intent).
And if people in high-income countries die from pandemics, nukes or AI, that’s also good for farmed animals. It’s not just poor people.
I think it’s a totally fair name for the problem, as its “unfairness” comes from the problem statement, not its name. In “I think its pretty unfair to lump the label of a serious philosophical problem on the poorest people on earth”, for example, it’s the meat eater problem itself that is morally icky, not its name.
The main takeaway of the ‘meat eater problem’ (sorry!) is to reassess the cost-effectiveness of saving human lives, not necessarily to argue that we should focus on reducing animal consumption in lower-income countries. While reducing animal consumption is important, that’s not typically the central takeaway from this specific ‘problem’.
In this sense, the saving lives aspect is more central to the problem than the meat consumption aspect, though both are pivotal. So, in a purely logical sense, the term ‘meat eater problem’ might actually be more accurate.
You can argue that, but even then can points 2 and 3 not still make it better to use a different name?
Depends if there’s a better option. I agree with MichaelStJules when he says “’Meat eating problem’ seems likely to be understood too generally as the problem of animal consumption.” The other proposed options don’t seem that great to me because they seem to abstract too far away from the issue of saving lives which is at the core of the problem.
It’s worth noting there is a cost to changing the name of something. You’ll then have the exact same thing referred to by different names in different places which can lead to confusion. Also it’s very hard to get a whole community to change the way they refer to something that has been around for a while.
With regards to the “persuasion” point—I think the issue is that the problem we are talking about is inherently uncomfortable. We’re talking about how saving human lives may not be as good as we think it is because humans cause suffering to animals. This is naturally going to be hard for a lot of people to swallow the second you explain it to them, and I don’t think putting a nicer name on it is going to change that.
With regard to fairness…this is my personal view but this doesn’t bother me much. I don’t see evidence of individuals in lower income countries caring about the language we use on the EA Forum which is what would ultimately influence me on this point.
I’m aware I’m in the extreme minority here and I might be wrong. I fully expect to get further downvotes but if people disagree I would welcome pushback in the form of replies.
I feel like 5% of EA-directed funding is a high bar to clear to agree with the statement “AI welfare should be an EA priority”. I would have maybe pitched for 1-2% as the “priority” bar, which would still be 10 million dollars a year even under quite conservative assumptions about what counts as unrestricted EA funding. This would mean that across all domains (X-risk, animal welfare, GHD) a theoretical maximum of 20 causes, more realistically maybe 5-15 causes (assuming some causes warrant 10-30% of funding), would be considered EA priorities. 80,000 hours doesn’t have AI welfare in their top 8 causes but it is in their top 16, so I doubt it would clear the “5%” bar, even though they list it under their “Similarly pressing but less developed areas”, which feels priority-ish to me (perhaps they could share their perspective?)
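To make the arithmetic concrete, here’s a rough sketch (the ~$1 billion per year of unrestricted EA-directed funding is just an illustrative assumption consistent with the figures above, not an official number):

```python
# Rough sketch of the arithmetic above. The ~$1B/year of unrestricted
# EA-directed funding is an illustrative assumption, not an official figure.

TOTAL_EA_FUNDING = 1_000_000_000  # USD per year, conservative assumption

for bar in (0.01, 0.02, 0.05):
    dollars_per_year = TOTAL_EA_FUNDING * bar
    max_priorities = round(1 / bar)  # theoretical max number of causes clearing the bar
    print(f"{bar:.0%} bar -> ${dollars_per_year:,.0f}/yr, at most {max_priorities} priorities")
```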
It could also depend how broadly we characterise causes. Is “Global Health and development” one cause, or are Mosquito nets, deworming and cash transfers all their own causes? I would suspect the latter.
Many people could therefore consider AI welfare an important cause area but disagree with the debate statement, because they don’t think it warrants 5%+ of EA funding despite its importance.
Or I could be wrong and many could consider 5% a reasonable or even low bar. It’s clearly a subjective question and not the biggest deal, but hey :D.
In ordinary language, I wouldn’t generally consider something that gets 1% of resources to be a “priority.” Applying your reasoning above, that would create a theoretical maximum of 100 “priorities” and a more realistic range of perhaps 10-40. As we move beyond the low teens, the idea of a “priority” gets pretty watered down in my book.
Thanks Jason, we clearly have different bars but you make a good point. I would consider 10-20 priorities fine. I will adjust up to 2% based on this.
The Happier Lives Institute have helped many people (including me) open their eyes to Subjective Wellbeing, and perhaps even updated us towards the potential value of SWB. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair. I’m not sure exactly why I felt this way, but here are a few ideas.
(High certainty) HLI have openly published their research and ideas, posted almost everything on the forum and engaged deeply with criticism which is amazing—more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
(High certainty) When other orgs are criticised or asked questions, they often don’t reply at all, or get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (for charity I’m not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI’s funding is less than that of many orgs that have not been scrutinised as much.
(Low certainty) The degree of scrutiny and analysis applied to some development orgs like HLI seems to exceed that applied to AI orgs, funding orgs and community building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic, I just wish it could be applied to other orgs as well. Very few EA orgs (at least among those that post on the forum) produce full papers with publishable-level deep statistical analysis like HLI have at least attempted to do. Does there need to be a “scrutiny rebalancing” of sorts? I would rather other orgs got more scrutiny, rather than development orgs getting less.
Other orgs might see threads like the HLI funding thread hammering, compare it with other threads where orgs are criticised, don’t engage, and the thread falls off the frontpage, and reasonably decide that high degrees of transparency and engagement would do them net harm rather than good. This might not be good for anyone.
Do you agree/disagree? And what could we do to make the situation better?
I think it’s fairest to compare HLI’s charity analysis with other charity evaluators like Givewell, ACE, and Giving Green.
Giving Green has been criticised regularly and robustly (just look up any of their posts). Givewell publish their analysis and engage with criticism; HLI themselves have actually criticised them pretty robustly! I don’t know about ACE because I don’t stay up to date on animals but I bet it’s similar there.
The dynamics are quite different for example in charitable foundations where they don’t need to convince anyone to donate differently, or charities that deliver a service who only need to convince their funders to continue donating.
Thanks Kristen for this clear and concise reply. This comparison with the experience of other charity evaluators has shifted my opinion on this somewhat. Nice one.
It seems a bit of a pity that they should receive significantly more scrutiny than charities or foundations though. In an ideal world everyone should be transparent and heavily scrutinised but it does make sense that the incentives might not be there for other orgs...
I am a bit more familiar with ACE, and my impression is that you are right.
HLI fucked up their analysis, but because it was public we found out about it. Most EAs are too fearful to expose their work to scrutiny. Compare them to others who work on mental health within EA...
Most coaches and therapists in EA don’t do any rigorous testing of whether what they are doing actually works. They don’t even allow you to leave public reviews for them. I think we’re the only organisation to even have a TrustPilot!!!
I don’t think the problem is that HLI got too much hate for fucking up, it’s that everyone else gets too little hate for being opaque.
Now HLI have been dragged through the mud, you can bet your ass they won’t be making the same mistakes again. So long as they keep being transparent, they’ll keep learning and growing as an org. Others will keep making the same mistakes indefinitely, only we’ll never know about it and will continue blindly trusting them.
I agree that more orgs should get this kind of scrutiny. I agree that we are likely to blindly trust orgs that don’t transparently discuss their inner workings, which is super sad.
Interesting reflection on mental health providers too, but that’s not a world I know!
This argument I struggle with...
“I don’t think the problem is that HLI got too much hate for fucking up, it’s that everyone else gets too little hate for being opaque”
I realize you are probably being a bit tongue in cheek, but I think we could criticise and discuss while being more encouraging and positive. We are all human too, and I’m not sure piling on the “hate” will necessarily lead to improvement in epistemics and rigorous analysis.
One general problem with online discourse is that even if each individual makes a fair critique, the net effect of a lot of people doing this can be disproportionate, since there’s a coordination problem. That said, a few things make me think the level of criticism leveled at HLI was reasonable, namely:
HLI was asking for a lot of money ($200k-$1 million).
The critiques people were making seemed (generally) unique, specific, and fair.
The critiques came after some initial positive responses to the post, including responses to the effect of “I’m persuaded by this; how can I donate?”
I agree with you that GHD organizations tend to be scrutinized more closely, in large part because there is more data to scrutinize. But there is also some logic to balancing scrutiny levels within cause areas. When HLI solicits donations via Forum post, it seems reasonable to assume that donations they receive more likely come out of GiveWell’s coffers than MIRI’s. This seems like an argument for holding HLI to the GiveWell standard of scrutiny, rather than the MIRI standard (at least in this case).
That said, I do think it would be good to apply stricter standards of scrutiny to other EA organizations, without those organizations explicitly opening themselves up to evaluation by posting on the Forum. I wonder if there might be some way to incentivize this kind of review.
I am concerned that rationale would unduly entrench established players and stifle innovation. Young orgs on a shoestring budget aren’t going to be able to withstand 2023 GiveWell-level scrutiny . . . and neither could GiveWell at the young-org stage of development.
Yeah, I should’ve probably been more precise: the criticism of HLI has mainly been leveled against their evaluation of a single organization’s single intervention, whereas GW has evaluated 100+ programs, so my gut instinct is that it’s fair to hold HLI’s StrongMinds evaluation to the same ballpark level of scrutiny we’d hold a single GW evaluation to (and deworming certainly has been held to that standard). It might be unfair to expect an HLI evaluation to be at the same level as a GW evaluation per dollar invested/hour spent (given that there’s a learning curve associated with doing such evaluations and there’s value associated with having multiple organizations do them), but this seems like—if anything—an argument for scrutinizing HLI’s work more closely, since HLI is trying to climb a learning curve, and feedback facilitates this.
I think another factor is that HLI’s analysis is not just below the level of Givewell, but below a more basic standard. If HLI had performed at this basic standard, but below Givewell, I think strong criticism would have been unreasonable, as they are still a young and small org with plenty of room to grow. But as it stands the deficiencies are substantial, and a major rethink doesn’t appear to be forthcoming, despite being warranted.
Probably a stupid question (probably just missed), but can someone point me to where GiveWell do a meta-analysis or an analysis of similar depth to this HLI one? I can’t seem to find it, and I would be keen to do a quick comparison myself.
I’m not aware of a GW analysis quite like this one, although I didn’t go back and look at all its prior work.
In a situation like this, where GiveWell was considering StrongMinds as a top charity recommendation, it’s almost certain that it would have first funded a bespoke RCT designed to address key questions for which the available literature was mixed or inconclusive. HLI doesn’t have that luxury, of course. Moreover, what HLI is trying to measure is significantly harder to tease out than “how well do bednets work at saving lives” and similar questions.
I think those are relevant considerations that make comparing HLI’s work to the “GiveWell standard” inappropriate. However, to acknowledge Ben’s point, HLI’s critics are alleging that the stuff that was missed was pretty obvious and that HLI hasn’t responded appropriately when the missed stuff was pointed out. I lack the technical background and expertise to fully evaluate those claims.
Which GiveWell evaluation(s) though? The ones on that spreadsheet range from the evaluations used to justify Top Charity status to decisions to deprioritize a potential program after a shallow review. Two deworming charities were until recently GiveWell Top Charities, and I believe Open Phil still makes significant grants to them (presumably in reliance on GiveWell’s work).
In this post, HLI explicitly compares its evaluation of StrongMinds to GiveWell’s evaluation of AMF, and says:
This seems like an argument for scrutinizing HLI’s evaluation of StrongMinds just as closely as we’d scrutinize GiveWell’s evaluation of AMF (i.e., closely). I apologize for the trite analogy, but: if every year Bob’s blueberry pie wins the prize for best pie at the state fair, and this year Jim, a newcomer, is claiming that his blueberry pie is better than Bob’s, this isn’t an argument for employing a more lax standard of judging for Jim’s pie. Nor do I see how concluding that Jim’s pie isn’t the best pie this year—but here’s a lot of feedback on how Jim can improve his pie for next year—undermines Jim’s ability to win pie competitions going forward.
This isn’t to say that we should expect the claims in HLI’s evaluation to be backed by the same level of evidence as GiveWell’s, but we should be able to take a hard look at HLI’s report and determine that the strong claims made on its basis are (somewhat) justified.
Yes, agree that the language re: AMF justifies a higher level of scrutiny than would be warranted in its absence. Also, the AMF-related claim makes more moderate changes in the CEA bottom line material than would be the case if the claims had been limited to stuff like: SM is more cost-effective than other predominantly life-enhancing charities like GiveDirectly.
My read is it wasn’t the statistics they got hammered on, but misrepresenting other people’s views of them as endorsements, e.g. James Snowden’s views. I will also say the AI side does get this kind of criticism, though not on cost-effectiveness but on things like the culture war (AI Ethics vs. AI Safety) and dooming about techniques (e.g. working in a big company vs. a more EA-aligned research group, and the RLHF discourse).
Thanks for the AI perspective!
Yes, in that post the misrepresentation was part of the criticism they received (which they engaged with and which was at least partially corrected, which is impressive), but I think the statistical analysis bore the heaviest overall criticism in that post, and in other earlier posts.
“Fair” and “unfair” are tricky words to nail down.
I think there are a wide range of factors that explain why HLI has been treated differently than other orgs -- some “fair” under most definitions of the word, some less so. Some of those reasons are adjacent to questions of funding and influence, but I’m not sure they provide much room to criticize HLI’s critics.
HLI is running in a lane—global health/development/wellbeing—where the evidentiary standards are much higher than in longtermist areas. Part of this is the nature of the work; asking a biosecurity program how many pandemics it has prevented is not workable. Part of it is that there is a very well-funded organization that has been doing CEAs that consensus views as high-quality. Yet another aspect is that GHDW work has been much more limited by funding constraints, which has incentivized GHDW funders to adopt higher standards.
I think people generally need to be kinder to smaller-scale, early-stage efforts . . . but see point 3 below.
HLI is a charity recommender, a significant portion of whose focus currently involves making recommendations to ordinary people (not megadonors, foundations, etc.). I do think the level of scrutiny should ordinarily be higher for charity recommenders, especially those making recommendations to the general public. The purpose of a charity recommender is to evaluate the relative merits of various charities, and for ordinary donors their recommendations may be seen as near-authoritative. A sense that the community needs to carefully scrutinize the recommender’s work destroys much of a recommender’s value proposition in the first place. And while it’s not very utilitarian of me, I do feel more protective of small donors who don’t have an in-house staff to pick up on a recommender’s mistakes.
I think an overconfident marketing campaign in 2022 did play a major role in how much grace people are willing to extend on the CEA. I haven’t been around that long, but this does seem to significantly distinguish HLI from other orgs. I believe that HLI has expressed regret for certain statements, but a framework that compares statements made at that time (that have not been clearly and explicitly retracted) to what the data actually support strikes me as on the “fair” side of the ledger.
This was HLI’s first major recommendation; people would be less prone to draw negative inferences about (e.g.) an org whose first four analyses/recommendations were fine but whose fifth had some significant issues.
StrongMinds spends (and could potentially fundraise) enough money to make a significant dive into its cost-effectiveness worthwhile for critics, but probably not so much as to justify an airtight multi-million dollar workup (including by commissioning our own studies to fill any major holes in the data that would have a big effect on the CEA). So it’s an awkward-size program to evaluate.
Pretty much all skeptical analysis is done by volunteers on their own time, and so the volume/quality of that work will heavily depend on who is interested in and available to do it. It’s plausible to me that having a controversial and/or novel framework could motivate more critics to volunteer for duty.
There could also be a snowball effect; the detection of one significant weakness in a CEA may motivate others to start looking.
HLI asked Forum users to contribute money. Although I take a wide stance on “standing” to criticize organizations, one could reasonably characterize asking users for action as opening the door to some extent. Having an active fundraising ask may also provide a more concrete payoff/impact for criticism, by preventing users from taking an action the critic found undesirable.
HLI has been unusually transparent with data and responsive to criticism, which has made such criticism easier and kept it up longer. I think you’re right to be concerned about the ferocity of criticism disincentivizing transparency and openness on the margin.
The barriers to criticizing HLI are much lower. Because HLI has little power, no one is concerned about blowback. Compare that to the recent Omega criticisms of AI labs, which were posted pseudonymously and which had to rely on undisclosed data. Criticism from established community members who sign their work and can show their work carries more weight, and there’s a disincentive to writing anonymous criticism (you’ll never get any credit for it).
Several of these points are at least adjacent to questions of funding and power, and they cumulatively make me feel at least somewhat uncomfortable, e.g.:
It’s unlikely an organization with more secure funding would have made a fundraising appeal at this time. Rather, it likely would have laid low until it had produced a new CEA for SM and until more time had passed since the prior harsh posts.
HLI may have felt pressure to be more transparent and responsive than a more established org would have. It’s unlikely HLI would have been taken seriously if it didn’t show its receipts, and it doesn’t have the power/prestige needed for a “no real comment” approach to criticism to have a good shot at working.
That being said, I find it challenging to assign much fault for those factors to the Forum user community. For example, in point 10, the unfairness is not that HLI is being criticized by named users who have built up a reputation, but that the criticism of other orgs is disincentivized and pseudonymous.
I think you’re right that the response to HLI may discourage transparency and responsiveness on the margin, and that this is a problem. As a practical matter, I think there are two factors that mitigate this to some extent. One is that I think the criticism of HLI reflects a convergence of a number of factors as listed above, and I’m not sure how much marginal effect comes from their good transparency and responsiveness. Second, I think any startup org trying to pursue HLI-like goals has to be transparent and responsive to get a hearing from the community, so I think it less likely that knowledge of current events will change another org’s stance to a materially less open and responsive one.
I’m undecided on the net effect of all of this. My hope is that it will ultimately result in adoption of better epistemic safeguards and communications management—both at HLI and elsewhere in the ecosystem. (Cf. my recent post on the HLI thread). That would be a good result, although I’d still wish we had gotten there with a lot less rancor.
Quite right. Far too much scrutiny was applied to HLI. Five-thousand-word autistic debunkings, though highly entertaining to read and no doubt equally entertaining to their authors, should not have been necessary. Any reasonable model of how the world works would perhaps not quite rule the idea of group therapy in poor countries out of court, but would require an incredibly high standard of evidence to even begin discussing it somewhat politely.
On the subject of scrutinizing other orgs, I note that some hardworking but anonymous EAs have done their best to scrutinize EA’s various AI research orgs, but of course this is a much more specialized endeavour requiring deeper expertise, and is also entirely pointless because OpenPhil will probably fund them anyway.
We’ve banned Sol3:2 for 3 weeks. This comment is uncivil and was reported multiple times. Other comments have been reported in the past for similar reasons.
I want to note that criticism can be extremely valuable, and we have a slightly higher bar for taking mod action against criticism. But referring to analyses of HLI’s work as “autistic” clearly violates core Forum norms and is above that bar. I think it’s possible to outline strong disagreements while still following our norms, and we’d want to see this from Sol3:2 in the future.
If Sol3:2 thinks that this is not right, they can appeal.
I’m a little confused as to why we consider the leaders of AI companies (Altman, Hassabis, Amodei etc.) to be “thought leaders” in the field of AI safety in particular. Their job descriptions are to grow the company and increase shareholder value, so their public personas and statements have to reflect that. Surely they are far too compromised for their opinions to be taken too seriously; they couldn’t make strong statements against AI growth and development even if they wanted to, because of their job and position.
The recent post “Sam Altman’s chip ambitions undercut OpenAI’s safety strategy” seems correct and important, while also almost absurdly obvious—the guy is trying to grow his company and they need more and better chips. We don’t seriously listen to big tobacco CEOs about the dangers of smoking, or oil CEOs about the dangers of climate change, or factory farming CEOs about animal suffering, so why do we seem to take the opinions of AI bosses about safety even in moderate good faith? The past is often the best predictor of the future, and the past here says that CEOs will grow their companies, while trying however possible to maintain public goodwill so as to minimise the backlash.
I agree that these CEOs could be considered thought leaders on AI in general and on the future and potential of AI, and their statements about safety and the future are practically important and should be engaged with seriously. But I don’t really see the point of engaging with them as thought leaders in the AI safety discussion; it would make more sense to me to engage with intellectuals and commentators who can fully and transparently share their views without being compromised to a crippling degree.
I’m interested to hear arguments in favour of taking their thoughts more seriously, though.
Who is considering Altman and Hassabis thought leaders in AI safety? I wouldn’t even consider Altman a thought leader in AI—his extraordinary skill seems mostly social and organizational. There’s maybe an argument for Amodei, as Anthropic is currently the only one of the companies whose commitment to safety over scaling is at least reasonably plausible.
Thanks that’s a helpful perspective and I would be happy if it was true that they weren’t considered AI safety thought leaders. I do feel like they are often seen this way though in the public sphere, and sometimes here on the forum too.
I realize that my question sounded rhetorical, but I’m actually interested in your sources or reasons for your impression. I certainly don’t have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven’t encountered the position you’re concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read into some answers, but overall I don’t get the impression that the AI CEOs are seen as big safety proponents.
I think “thought leader” sometimes means “has thoughts at the leading edge” and sometimes means “leads the thoughts of the herd on a subject”, and that there is sometimes a deliberate ambiguity between the two.
The value of re-directing non-EA funding to EA orgs might still be under-appreciated. While we obsess (rightly so) over where EA funding should be going, shifting money from one EA cause to another “better” one might often only make an incremental difference, while moving money from a non-EA pool to fund cost-effective interventions might make an order of magnitude difference.
There’s nothing new to see here. High impact foundations are being cultivated to shift donor funding to effective causes, the “Center for effective aid policy” was set up (then shut down) to shift government money to more effective causes, and many great EAs work in public service jobs partly to redirect money. The Lead Exposure Action Fund spearheaded by OpenPhil is hopefully re-directing millions to a fantastic cause as we speak.
I would love to see an analysis (I might have missed it) which estimates the “cost-effectiveness” of redirecting a dollar into a 10x or 100x more cost-effective intervention. How much money/time would it be worth spending to redirect money this way? Also, I’d like to get my head around how much the working “cost-effectiveness” of an org might improve if its budget shifted from 10% non-EA funding to 90% non-EA funding.
There are obviously costs to roping in non-EA funding. From my own experience it often takes huge time and energy. One thing I’ve appreciated about my 2 attempts applying for EA-adjacent funding is just how straightforward it has been – probably an order of magnitude less work than other applications.
Here are a few practical ideas for how we could further redirect funds:
EA orgs could put more effort into helping each other access non-EA money. This is already happening through the AIM cluster, but I feel the scope could be widened to other orgs, and co-ordination could be improved a lot without too much effort. I’m sure pools of money are getting missed all the time. For example I sure hope we’re doing whatever we can through our networks to help EA gender based violence orgs / family planning orgs to get hold of some of this 250 million dollars from Melinda.
When assessing cost-effectiveness of new interventions and charities (especially global health), I think potential to access non-EA future funding could be taken into account. If a new charity has a relatively smooth path to millions of dollars of external funding, should our cost-effectiveness bar be lower? Again this might well be happening already.
We might have a blind spot, missing cause areas where cost-effectiveness might initially look sub-optimal, but where huge available non-EA money-pools might shift the calculus. One example is climate mitigation, where billions of dollars slosh around, wasted on ineffective interventions. Many “mitigation activities” I see here in northern Uganda might as well be burning money (in a carbon neutral way of course). GiveDirectly have made a great play here, re-directing millions in climate mitigation funds to cash transfers. Could other “climate mitigation orgs” be set up to utilise this money better, even if the end point of the money wasn’t strictly climate related?
I would imagine far smarter people have thought about this far more deeply, but there might still be room for more exploration and awareness here.
The CE of redirecting money is simply (dollars raised per dollar spent) * (difference in CE between your use of the money vs counterfactual use). So if GD raises $10 from climate mitigation for every $1 it spent, and that money would have otherwise been neutral, then that’s a cost-effectiveness of 10x in GiveWell units.
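To make that concrete, here’s a minimal sketch of the leverage formula above (the numbers are purely illustrative assumptions, not actual GiveDirectly fundraising figures):

```python
# Minimal sketch of the leverage formula above. The numbers are illustrative
# assumptions, not actual fundraising figures.

def leverage_ce(dollars_raised_per_dollar_spent: float,
                ce_of_new_use: float,
                ce_of_counterfactual_use: float) -> float:
    """Cost-effectiveness of redirecting money, in the same units as the inputs
    (e.g. multiples of GiveWell's bar)."""
    return dollars_raised_per_dollar_spent * (ce_of_new_use - ce_of_counterfactual_use)

# $10 raised per $1 spent, redirected to a 1x use; counterfactual use assumed neutral (0x):
print(leverage_ce(10, 1.0, 0.0))   # -> 10.0
# Same leverage, but the counterfactual use was already half as good:
print(leverage_ce(10, 1.0, 0.5))   # -> 5.0
```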
There’s nothing complicated about estimating the value of leverage. The problem is actually doing leverage. Everyone is trying to leverage everyone else. When there is money to be had, there are a bunch of organizations trying to influence how it is spent. Melinda French Gates is likely deluged with organizations trying to pitch her for money. The CEAP shutdown post you mentioned puts it perfectly:
This doesn’t mean that leverage is impossible. It just means that leverage opportunities tend to be specific and limited. We have to take them on opportunistically, rather than making leverage a theory of impact.
I largely agree, although I don’t think we’re trying to leverage money that hard in some areas. I do think there needs to be some strategy for leverage as well as a lot of opportunism, as you say. Collaboration, as I mentioned, opens up opportunities as well.
Sometimes also it’s not so hard to access pools of money, for example how many orgs are trying hard to access all that climate money?
On the subject of redirecting streams of money from less impactful causes to EA causes, I feel I need to beat my drum regarding the potential of Profit for Good businesses (businesses with charities in all or almost all of the shareholder positions). In such cases, to the extent an EA PFG’s profits displace those of normal businesses, funds are diverted from the average shareholder to an effective charity.
So when a business like Humanitix (a PFG helping projects in the developing world, $4mil AUD to The Life You Can Save) displaces the market share of Ticketmaster, funds are diverted not from charities, but from the business’s competitors. This method of diversion seems less difficult because the operative actors (consumers, employees, business partners) are not deciding between a strong non-EA charity often optimized for warm fuzzies and marketing, but rather choosing between products with similar value propositions, where engaging with one—in addition to the other value proposition—implies helping fight malaria or something instead of enriching a random investor.
If you’re interested in learning more about Profit for Good, here is a reading list on the subject.
I’m intrigued where people stand on the threshold at which farmed animal lives might become net positive. I’m going to share a few scenarios I’m very unsure about, and I’d love to hear thoughts or be pointed towards research on this.
Animals kept in homesteads in rural Uganda where I live. Often they stay inside with the family at night, then are let out during the day to roam free around the farm or community. The animals seem pretty darn happy most of the time for what it’s worth, playing and gallivanting around. Downsides here include poor veterinary care (so parasites and sickness are sometimes pretty bad) and often pretty rough transport and slaughter methods (my intuition: net positive).
Grass fed sheep in New Zealand, my birth country. They get good medical care, are well fed on grass and usually have large roaming areas (intuition: net positive).
Grass fed dairy cows in New Zealand. They roam fairly freely and will have very good vet care, but have their calves taken away at birth, have constantly uncomfortably swollen udders, and are milked at least twice daily (intuition: very unsure).
Free range pigs. Similar to the above, except the space is often smaller, although they do get little houses. Pigs are far more intelligent than cows or sheep and might have intellectual needs that are not being met (intuition: uncertain).
Obviously these kinds of cases make up a small proportion of farmed animals worldwide, with the predominant situation (factory farmed animals) likely involving net negative lives.
I know that animals having net positive lives far from justifies farming animals on its own, but it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering.
Thanks for your input.
It’s really hard to judge whether a life is net positive. I’m not even sure when my own life is net positive—sometimes if I’m going through a difficult moment, as a mental exercise I ask myself, “if the rest of my life felt exactly like this, would I want to keep living?” And it’s genuinely pretty hard to tell. Sometimes it’s obvious, like right at this moment my life is definitely net positive, but when I’m feeling bad, it’s hard to say where the threshold is. If I can’t even identify the threshold for myself, I doubt I can identify it in farm animals.
If I had to guess, I’d say the threshold is something like
if the animals spend most of their time outdoors, their lives are net positive
if they spend most of their time indoors (in crowded factory farm conditions, even if “free range”), their lives are net negative
To this point, I think the most important things are
whatever the threshold is, factory-farmed animals clearly don’t meet it
99% of animals people eat are factory-farmed (in spite of people’s insistence that they only eat meat from their uncle’s farm where all of the animals are treated like their own children etc)
That’s really interesting on your own life. Even in the midst of my worst emotional states (emotional, not physical, pain) I would still feel I’m on the positive side of the ledger.
Yes, I agree with your two points; those are the most important in general.
Here in Northern Uganda though, the majority of animals people eat (not that often; many people eat meat once or twice a month) have lives like my first example. In New Zealand almost all beef and sheep meat comes from something like options 2 and 3, so I think the question has some relevance to a decent number of people.
Just adding: the discussion of dairy cows, here and elsewhere, tends to focus on the experience of the adult cattle & the suffering for them of being milked, deprived of their babies, etc.
But it’s not implausible to me that the majority of the disvalue from dairy is in the lives of the calves born to dairy cows. In typical milk-producing operations, adult cows have 1 calf every 18 months or so; 50% of them are male, and so are killed within a few hours to a few months after birth.
(& these lives are more likely to be net negative because they have less time to experience positive things to outweigh the terror and pain of death. Undoubtedly, some of their deaths will be quite quick, but others are slow and brutal.)
(Also, veal calves are treated very badly—intense confinement to reduce movement to keep the meat tender, dietary restriction to keep the meat pale, individual confinement in a tiny ‘hutch’, etc.)
(& let’s not forget the fetal calves who are still gestating when their mothers go to slaughter. They’re killed slowly, if they ever get purposefully slaughtered at all rather than just left to asphyxiate. Obviously, it’s unclear whether they’re conscious, but I’ve read accounts of them moving, opening eyes, trying to breathe, etc.).
Thanks Bella, this has crossed my mind and definitely updates me towards dairy farmed cows in New Zealand being more likely to be net negative. I’m not sure whether the veal thing happens in New Zealand though I’ll look into it.
Thanks for this. My view is the same as yours. The first two strike me as “net positive.” I’m also unsure about what pigs and dairy cows need. I wouldn’t be hugely surprised if they have either “net positive” or “net negative” lives, but I think it’s most likely (80%+ chance) they are “net positive.”
(Qualifying discussion of net value of existence with ” ” because I find such valuations always so fraught with uncertainty and I feel I owe other beings tremendous humility in this!)
I’m always surprised to see sheep get lumped in with cows in discussions of farmed animal welfare (ex. the SSC Adversarial Collaboration). Sure, it’s not a terrible proxy, but sheep are often freer, need to be regularly shorn to avoid overheating, and usually die of natural causes. There are definitely some practices which are awful, but sheep are quite hard to optimise in the same way we’ve done with pigs & chickens, or even cows.
However, we eat them when they’re babies so maybe it swings in the absolute other direction.
Nice point Huw, I agree. I wasn’t trying to lump them together exactly; I agree it’s very different. Does eating them when they are babies necessarily mean a net negative life, though, if the slaughter is humane? It does seem like a weird question...
The idea behind why eating babies is more likely to be net negative is that there’s a shorter lifespan of positive experiences to balance out the terror and pain of death.
From my experience watching lots of slaughterhouse footage and reading accounts from workers, even the best humane conditions still involve, routinely, a (shorter or longer) period in which the animal goes through the process of dying. This is probably pretty bad. If they only lived for a few weeks before that, it’s harder to imagine it’s a good deal overall.
Under some frameworks, you’d be depriving them of many years of happy life; but then again, if you didn’t kill them as children they probably would never have been born for food. Here we’d be getting too deep into the moral philosophy for me to have a confident take 😅. Interesting nonetheless.
“but it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering.”
I’m highly skeptical of this—why do you think it is important for your own moral decision making? It seems to me that whether farmed animals lives are worth living or not is irrelevant—either way we should try to improve their conditions, and the best ways of doing that seem to be: a boycott & political pressure (I would argue that the two work well together).
By analogy, no one raises the question of whether the lives of people living in extreme poverty, or working in sweatshops and so on, are worth living, because it’s simply irrelevant.
This seems relevant to any intervention premised on “it’s good to reduce the number of net-negative lives lived.”
If factory-farmed chickens have lives that aren’t worth living, then one might support an intervention that reduces the number of factory-farmed chickens, even if it doesn’t improve the lives of any chickens that do come to exist. (It seems to me this would be the primary effect of boycotts, for instance, although I don’t know empirically how true that is.)
I agree that this is irrelevant to interventions that just seek to improve conditions for animals, rather than changing the number of animals that exist. Those seem equally good regardless of where the zero point is.
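To make this concrete, here’s a toy sketch of why the zero point matters for interventions that change the number of animals but not for ones that only improve conditions (the animal counts and welfare values are arbitrary assumptions):

```python
# Toy welfare accounting, purely illustrative: the counts and per-animal welfare
# values are arbitrary, and the key input is where the "zero point"
# (a life just barely worth living) sits.

def total_welfare(n_animals: int, welfare_per_animal: float) -> float:
    """Total welfare relative to non-existence, in arbitrary units."""
    return n_animals * welfare_per_animal

# Intervention A: reduce the number of animals (e.g. via reduced demand).
# Its sign flips depending on whether lives are above or below the zero point.
print(total_welfare(900, -1.0) - total_welfare(1000, -1.0))  # +100: good if lives are net negative
print(total_welfare(900, +1.0) - total_welfare(1000, +1.0))  # -100: bad if lives are net positive

# Intervention B: improve conditions for a fixed number of animals.
# Its value doesn't depend on where the zero point sits.
print(total_welfare(1000, -0.5) - total_welfare(1000, -1.0))  # +500 either way
```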
I suppose I agree with this. And I’ve been mulling over why it still seems like the wrong way to think about it to me, and I think it’s that I find it rather short-termist. In the short term if farms shut down they might be replaced with nature, with even less happy animals, it’s true. But in the long term opposing speciesism is the only way to achieve a world with happy beings. Clearly the kinds of farms @NickLaing is talking about, with lives worth living but still pretty miserable, are not optimal. Figuring out whether they are worth living or not seems only relevant to trying to reduce suffering in the short term, but not so much in the long term, because in the long term this isn’t what we want anyway.
Wanted to give a shoutout to Ajeya Cotra (from OpenPhil) for her great work explaining AI stuff on a recent Freakonomics podcast series. Her explanations of both her work on the development of AI and her easy-to-understand predictions of how AI might progress from here were great; she was my favourite expert on the series.
People have been looking for more high quality public communicators to get EA/AI safety stuff out there, perhaps Ajeya could be a candidate if she’s keen?
Ajeya is already doing that with Kelsey Piper over at their blog Planned Obsolescence :)
Applying my global health knowledge to the animal welfare realm, I’m requesting 1,000,000 dollars to launch this deep net positive (Shr)Impactful charity. I’ll admit the funding opportunity is pretty marginal…
Thanks @Toby Tremlett🔹 for bringing this to life. Even though she doesn’t look so happy I can assure you this intervention nets a 30x welfare range improvement for this shrimp, so she’s now basically a human.
Although as a mostly-neartermist I’m enamoured that the front page (for the first time in my experience) is devoid of AI content, I really would like to hear about the job experience and journey of a few AI safety/policy workers for this jobs week. The first 10ish wonderful people who shared are almost all neartermist-focused, which probably doesn’t represent the full experience of the community.
I’m genuinely interested to understand how your AI safety job works and how you wonderful people motivate yourselves on a day to day basis, when seeing clear progress and wins must be hard a lot of the time. I find it hard enough some days working in Global health!
Or maybe your work is so important, neglected and urgent that you can’t spare a couple of hours to write a post ;).
Pro tip: you can adjust your settings so you never have to see AI content on the front page
I’ve been on the forum for maybe 9 months now, and I’ve been intrigued by the idea of “hits based” giving, explained well in this 2016 article by Holden Karnofsky. The idea that “we will sometimes bet on ideas that contradict conventional wisdom, contradict some expert opinion, and have little in the way of clear evidential support.”
1) Is there a database with a list of donations considered “hits based” by OpenPhil? If not, that would be a helpful and transparent way of tracking success on these. I had a quick look through their donations but it’s not clear which ones are considered “hits based”.
2) Are there any donations considered “hits based” since that 2016 post which have clearly turned out really well, i.e. been winners? We might have seen some of these hits-based donations resolve positively or negatively, although 6 years is a short timeline so most would remain unresolved. Is there any official or unofficial info on that?
Thanks!
The net net effect could be positive or negative. Let me untangle it for you.
In favour of net positivity is the net positive human lives saved through net negative, negative net effects on mosquitos and malaria causing a net positive effect on humans. The insecticides positively embed these bed net, net positive effects.
However there’s a catch in favour of net negativity—as nets are positively dragged to net fish, the netted fish suffer net negative net negativity.
While caught between the net positive net positivity to humans and net negative net negativity to fish, we are enmeshed in a net of uncertainty as we struggle to net a clear positive net answer.
Hopefully I didn’t tie you in knots as I strung you along.
In response to @David Mathers ;).