This is not an unreasonable take, but just in the interest of having an accurate public record, I'm actually the strategy director for WAI (although I was the executive director previously). Also, none of us at Arthropoda are technically animal welfare scientists. Our training is all in different things (for example, my PhD is in engineering mechanics and Bob's a philosopher who published a lot of skeptical pieces on insects).
Basically, I think we came to Arthropoda because the work we did before that changed our minds. More importantly, I don't think the majority of Arthropoda's work will be about checking for sentience? Rather, we're taking a precautionary framework about insects being sentient and asking how to improve their welfare if they are. In this context our views on sentience seem less likely to cause a COI -- although I also expect all our research to be publicly available for people to red-team as needed :)
Finally, fully agree on the extreme personnel overlap. I would love to not be co-running a bug granting charity as a volunteer in addition to my two other jobs! But the resource constraints and unusualness of this space are unfortunately not particularly conducive to finding a ton of people willing to take on leadership roles.
Announcing the Field Work Forum
All very interesting, and yes let's talk more later!
One quick thing: Sorry my comment was unclear -- when I said "precise probabilities" I meant the overall approach, which amounts to trying to quantify everything about an intervention when deciding its cost effectiveness (perhaps the post was also unclear). I think most people in EA/AW spaces use the general term "precise probabilities" the same way you're describing, but perhaps there is on average a tendency toward the more scientific style of needing more specific evidence for those numbers. That wasn't necessarily true of early actors in the WAW space and I think it had some mildly unfortunate consequences.
But this makes me realize I should not have named the approach that way in the original post, and should have called it something like the "quantify as much as possible" approach. I think that approach requires using precise probabilities -- since if you allow imprecise ones you end up with a lot of things being indeterminate -- but there's more to it than just endorsing precise probabilities over imprecise ones (at least as I've seen it appear in WAW).
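To illustrate the indeterminacy point with a toy sketch (all payoffs and probability intervals below are made up for illustration, not anyone's actual estimates): with imprecise probabilities, each option's expected value becomes a range, and when the ranges overlap neither option comes out ahead under every admissible assignment.

```python
# Toy sketch only: hypothetical payoffs and probability intervals, not real estimates.

def ev_endpoints(p_low, p_high, value_if_true, value_if_false):
    """Expected value at the endpoints of an imprecise probability interval."""
    ev = lambda p: p * value_if_true + (1 - p) * value_if_false
    return ev(p_low), ev(p_high)

# Option A: big benefit *if* the beings in question are sentient, small cost otherwise.
a_low, a_high = ev_endpoints(0.05, 0.40, value_if_true=100, value_if_false=-5)
# Option B: smaller but near-certain benefit.
b_low, b_high = ev_endpoints(0.95, 1.00, value_if_true=10, value_if_false=0)

print(f"A: expected value between {a_low:.2f} and {a_high:.2f}")  # 0.25 to 37.00
print(f"B: expected value between {b_low:.2f} and {b_high:.2f}")  # 9.50 to 10.00
# A beats B under some admissible probability assignments and loses under others,
# so the comparison is indeterminate -- which is exactly what committing to a single
# precise assignment avoids.
```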
Thanks Eli!
I sort of wonder if some people in the AI community -- and maybe you, from what you've said here? -- are using precise probabilities to get to the conclusion that you want to work primarily on AI stuff, and then spotlighting to that cause area when you're analyzing at the level of interventions.
I think someone using precise probabilities all the way down is building a lot more explicit models every time they consider a specific intervention. Like if you're contemplating running a fellowship program for AI-interested people, and you have animals in your moral circle, you're going to have to build this botec that includes the probability that X% of the people you bring into the fellowship are not going to care about animals and are likely, if they get a policy role, to pass policies that are really bad for them. And all sorts of things like that. So your output would be a bunch of hypotheses about exactly how these fellows are going to benefit AI policy, and some precise probabilities about how those policy benefits are going to help people, and possibly animals to what degree, etc.

I sort of suspect that only a handful of people are trying to do this, and I get why! I made a reasonably straightforward botec for calculating the benefits to birds of bird-safe glass, that accounted for backfire to birds, and it took a lot of research effort. If you asked me how bird-safe glass policy is going to affect AI risk after all that, I might throw my computer at you. But I think the precise probabilities approach would imply that I should.
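Just to make concrete what that kind of botec looks like, here's a toy sketch of the hypothetical fellowship example. Every parameter is a made-up placeholder, and a real model would have many more of them:

```python
# Toy sketch of a "quantify everything" botec for the hypothetical fellowship example.
# All values are placeholders for illustration, not estimates anyone has actually made.

n_fellows = 20
p_policy_role = 0.3            # chance a fellow ends up in a relevant policy role
value_per_aligned_fellow = 50  # benefit (arbitrary units) of one animal-friendly fellow in policy
p_ignores_animals = 0.4        # chance a fellow doesn't include animals in their moral circle
harm_if_ignores = 30           # expected harm (same units) from policies such a fellow passes

fellows_in_policy = n_fellows * p_policy_role
benefit = fellows_in_policy * (1 - p_ignores_animals) * value_per_aligned_fellow
backfire = fellows_in_policy * p_ignores_animals * harm_if_ignores

print(f"Net expected value: {benefit - backfire:.1f}")  # 180 - 72 = 108 in these made-up units
# The precise-probabilities approach asks for a committed point value for every
# parameter like these -- and for every cross-cause effect you think might matter.
```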
Re:

> It might be interesting to move out of high-level reason zone entirely and just look at the interventions, e.g. directly compare the robustness of installing bird-safe glass in a building vs. something like developing new technical techniques to help us avoid losing control of AIs.
I'm definitely interested in robustness comparisons but not always sure how they would work, especially given uncertainty about what robustness means. I suspect some of these things will hinge on how optimistic you are about the value of life. I think the animal community attracts a lot more folks who are skeptical about humans being good stewards of the world, and so are less convinced that a rogue AI would be worse in expectation (and even folks who are skeptical that extinction would be bad). So I worry AI folks would view "preserving the value of the future" as extremely obviously positive by default, and that (at least some) animal folks wouldn't, and that would end up being the crux about whether these interventions are in fact robust. But perhaps you could still have interesting discussions among folks who are aligned on certain premises.

Re:
> What would the justification standards in wild animal welfare say about uncertainty-laden decisions that involve neither AI nor animals: e.g. as a government, deciding which policies to enact, or as a US citizen, deciding who to vote for President?
Yeah, I think this is a feeling that the folks working on bracketing are trying to capture: that in quotidian decision-making contexts, we generally use the factors we aren't clueless about (@Anthony DiGiovanni -- I think I recall a bracketing piece explicitly making a comparison to day-to-day decision making, but now can't find it… so correct me if I'm wrong!). So I'm interested to see how that progresses.
I suspect, though, that people generally just don't think about justification that much. In the case of WAW-tractability-skeptics, I'd guess some large percentage are likely more driven by the (not unreasonable at first glance) intuition that messing around in nature is risky. The problem, of course, is that all of life is just messing around in nature, so there's no avoiding it.
Yeah, I could have made that more clear -- I am more focused on the sociology of justification. I suppose if you're talking pure epistemics, it depends whether you're constructivist about epistemological truth. If you are, then you'd probably have a similar position -- that different communities can reasonably end up with different justification standards, and no one community has more claim to truth than the other.
I suspect, though, that most EAs are not constructivists about epistemology, and so vaguely think that some communities have better justification standards than others. If that's right, then the point is more sociological: that some communities are more rigorous about this stuff than others, or even that they might use the same justification standards but differ in some other way (like not caring about animals) that means the process looks a little different. So the critic I'm modeling in the post is saying something like: "sure, some people do justification better than others, but these are different communities so it makes sense that some communities care more about getting this right than others do."
I guess another angle could be from meta-epistemic uncertainty. Like if we think there is a truth about what kinds of justification practices are better than others, but we're deeply uncertain about what it is, it may then still seem quite reasonable that different groups are trying different things, especially if they aren't trying to participate in the same justificatory community.
Not entirely sure I've gotten all the philosophical terms technically right here, but hopefully the point I'm trying to make is clear enough!
Hi Vasco! As we've discussed in other threads/emails/etc, we have different meta-ethical views and different views about consciousness. So I'm not surprised we've landed in somewhat different places on this issue :)
Bob and I make most of the strategic and granting decisions for Arthropoda, and we have slightly different views, so I don't know exactly where we will land (he'll reply in a second with his thoughts). But broadly, we both agree that we don't think soil nematodes and some other soil invertebrates have enough likelihood of being sentient to be a high priority, nor do we think that (for those that are sentient) we have a good enough understanding of what would help them to make action-oriented grants (which is Arthropoda's focus) -- in part because we don't endorse precise-probabilities approaches to handling uncertainty, and so want to make grants that are aimed towards actions that appear robustly positive under a range of possible probability assignments/ways of handling uncertainty.
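For anyone unfamiliar with what that kind of robustness check looks like in practice, here's a toy sketch (made-up numbers, and much simpler than anything we'd actually use):

```python
# Toy illustration of what "robustly positive under a range of probability
# assignments" can mean. All numbers are hypothetical placeholders.

def expected_value(p_sentient, benefit_if_sentient=40.0, cost=10.0):
    """Net expected value (arbitrary units) of a hypothetical welfare grant."""
    return p_sentient * benefit_if_sentient - cost

# The range of probability-of-sentience assignments we take seriously (illustrative).
p_assignments = [0.05, 0.1, 0.2, 0.3, 0.45, 0.6]
evs = [expected_value(p) for p in p_assignments]

if all(ev > 0 for ev in evs):
    print("Robustly positive across the whole range considered.")
elif all(ev < 0 for ev in evs):
    print("Robustly negative across the whole range considered.")
else:
    print("Sign depends on the assignment -- not robust, so we'd deprioritize it.")
```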
That said, our confidence in our own position is not high. So, we'd be willing to fund things to challenge our own views: if we had sufficient funding from folks interested in the question, Arthropoda would fund a grant round specifically on soil invertebrate sentience and relevant natural history studies (especially in ways that attempt to capture the likely enormous range of differences between species in this group). Currently, much of our grant-making funding is restricted (at least informally) to farmed insects and shrimp, so it's not an option.
As a result, I expect that Arthropoda is probably still one of the better bets for donors interested in soil invertebrates. As a correction to your comment, Arthropoda is not restricted in focus as a matter of principle; it just has happened, for contingent reasons, to focus on farmed animals in its first rounds. We collaborate with Wild Animal Initiative (I'm the strategy director at WAI) to reduce duplication of effort, and Arthropoda has a slightly better public profile for running soil invertebrate studies, so we expect it will generally be Arthropoda rather than WAI that would be more likely to run this kind of program. I don't want to speak for CWAW, so I'll let them reply if they have interests in this area; but from my own conversations I doubt they would be in a good position to make soil invertebrates a priority in the next couple of years. Finally, you haven't mentioned them, but Rethink Priorities may also be open to some work in this area (I'm not sure though).
Arthropoda treasurer here -- pretty much option 2. We are hoping to increase our expenditure next year to run an extra grants round, add a contractor to help manage some things (currently we're almost entirely volunteer), add a bit to our strategic reserve (to carry us through donation fluctuations without needing to pause grant-making), and cover a few other small bits and pieces. A good chunk of this expansion can be covered by our reserves + some existing donor commitments, and 55k is about what's left.
We actually have much higher room for more funding in theory: up to several million to run a couple of targeted programs we have in mind. These activities would require hiring someone to run them as a program manager, as well as a lot more in grants. But we're not really expecting EA Forum readers to fill that gap unless they happen to run a large foundation :)
haha I can confirm I did not karma knock you and I was kind of surprised you had gotten so downvoted! I actually upvoted when I saw that to counteract.
One random thought I'll add is that since you are most experienced (afaict?) in GHD, I'd expect your arguments to be at their best in that context, so you getting upvoted on GHD and downvoted on AW is at least consistent with having more expertise in one than the other, so not necessarily evidence that AW folks are more sensitive. Although I'm not ruling that out!

The other thing I'm not sure I understand is how much weight a single individual's downvote can have -- is there any chance that a few AW people have a ton of karma here, so that just a few people downvoting can take you negative in a way that wouldn't happen as much in GHD?
Thanks! I think I might end up writing a separate post on palatability issues, to be honest :)
On the intervention front, the WAW movement is now turning to interventions in at least some cases (in WAI's case, rodenticide fertility control is something they're trying to fundraise for, and at NYU/Arthropoda I'm working on or fundraising for work on humane insecticides and bird-window collisions). I just meant that perhaps one reason we don't have more of them is that there's been a big focus on field-building for the last five years. For field-building purposes, there's still been some focus on interventions for the reasons you mention, but with additional constraints: not just cost-effective to pursue, but also attractive to scientists to work on, helpful for clarifying what WAW is, etc., to maximize the field-building outcomes if we can.
Hi Nick! Thanks for engaging. I'm not reading you as being anti-WAW interventions, and I think you're bringing up something that many people will wonder about, so I appreciate you giving me the opportunity to comment on it.
Basically, let's say the type of intractability worry I was mainly addressing in the post is "intractability due to indirect ecological effects." And the type you're talking about is "intractability due to palatability" or something like that.
I think readers who broadly buy the arguments in my post, but don't think WAW interventions are palatable, are not correct, but for understandable reasons. I think the reason is either (1) underexposure to the most palatable WAW ideas, because WAW EAs tend not to focus on/enjoy talking about those, or (2) using the "ecologically inert" framework when talking about WAW and one of the other frameworks when talking about other types of interventions.
Let's first assume you're okay with spotlighting, at least to a certain degree. Then "preventing bird-window collisions with bird-safe glass legislation" and "banning second-generation anticoagulant rodenticides" are actually very obviously good things to do, and also seem quite cost-effective based on the limited evidence available. I think people don't really realize how many animals are affected by these issues -- my current best-guess CEA for bird-safe glass suggests it's competitive with corporate chicken campaigns, although I want to do a little more research to pin down some high-uncertainty parameters before sharing it more widely.
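To give a sense of the shape that kind of CEA takes (this is not my actual model, just an illustration of the structure, with placeholder numbers pulled out of thin air):

```python
# Rough shape of a bird-safe glass CEA. Every input is a hypothetical placeholder,
# not the estimates from my actual model.

birds_killed_per_year = 1_000_000   # hypothetical: collisions in the jurisdiction covered
fraction_stock_affected = 0.02      # hypothetical: share of building stock treated per year
effectiveness = 0.9                 # hypothetical: reduction in collisions for treated glass
p_policy_passes = 0.3               # hypothetical: chance the campaign succeeds
backfire_discount = 0.95            # hypothetical: small haircut for possible harms to birds
campaign_cost = 500_000             # hypothetical: dollars spent on the campaign
years_of_impact = 20

birds_saved = (birds_killed_per_year * fraction_stock_affected * effectiveness
               * p_policy_passes * backfire_discount * years_of_impact)
print(f"~{birds_saved:,.0f} expected bird deaths averted, "
      f"${campaign_cost / birds_saved:.2f} per death averted")
```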
Anticoagulant bans and bird-safe glass are also palatable, and the proof is in the pudding: California, for example, has already passed a state-wide ban on these specific rodenticides, and 22 cities (including NYC and Washington DC) have already passed bird-safe glass regulations. I could probably provide at least five other examples of things that fit into this bucket (low backfire under spotlighting, cost-effective, palatable), even though I don't really spend most of my time trying to think of them (because WAI is focused on field-building, not immediate intervention development, and because I'm uncertain if spotlighting is okay or if I should only be seeking ecologically inert interventions).
The important thing to note is that WAW is actually more tractable, in some cases, than FAW interventions, because it doesn't require anyone to change their diet, and people in many cultures have been conditioned to care about wild animals in a way they've been conditioned to reject caring about farmed animals. There's also a lot of "I love wild animals" sentiment being channelled into conservation, but my experience is that when you talk to folks with that sentiment, they also get excited about bird-window collision legislation and things like that.
But perhaps you're actually hoping for ecologically inert interventions. Then I'm not sure which interventions you'd think would be acceptable instead? Sure, humane insecticides could end up being hard (although I think much less hard than you think, for reasons I won't go into here). But literally nothing else -- in FAW, in GHD, in AI -- seems reasonably likely to be ecologically inert while still plausibly causing a reduction in suffering (maybe keel bone fracture issues in FAW?). But the folks who say "WAW interventions aren't palatable" have not generally, in my experience, said "and I also don't do GHD because it's not ecologically inert" -- so I suspect in at least some instances they are asking for ecologically inert interventions from WAW, and something else from their cause area of preference.
Hi Vasco! Thanks for the comment. I agree with you that switching is not necessarily worse (depending on your goals and principles) than just pursuing one uncertain intervention. I also agree with you that research is important when you find yourself in such a position -- it's why I've dedicated my career to research :) And critically, I appreciate the clarification that "decreasing uncertainty" is your priority -- I didn't realize that from past posts, but I think your most recent one is clear on that.
One thing I'll just mention as a matter of personal inclination: I feel unenthusiastic about precise probabilities for more reasons than just the switching issue (I pointed that out just to add to the discourse about things someone with that view should reflect on). Personally, it just doesn't feel accurate to my own epistemic state. When I look at my own uncertainties of this kind, it feels almost like lying to put a precise number on them (I'm not saying others should feel this way, just that it is how I feel). So that's the most basic reason (among the other sort of theoretical reasons out there) that I feel attached to imprecise probabilities.
Thanks so much!
I actually have a lot of sympathy with farmed animal advocates who feel the way you describe, despite disagreeing that WAW should be seen as intractable by their lights. I think in the scheme of things, if I had to choose, I'd prefer that global health and AI folks updated to care more about animals, rather than that farmed animal advocates updated to care more about indirect effects. But I'm not sure that's a well-calibrated view as opposed to frustration with how little people care about animals in general.
Yes, totally agree that some longtermist or AI-safety-oriented types have actually thought about these things, and endorse precise probabilities, and have precise probability assignments to things I find quite strange, like thinking it's 80% likely that the universe will be dominated by sentient machines instead of wild animals. Although I expect I'd find any precise probability assignment about outcomes like this quite surprising -- perhaps I'm just a very skeptical person.
But I think a lot of EAs I talk to have not reflected on this much and don't realize how much the view hinges on these sorts of beliefs.
If wild animal welfare is intractable, everything is intractable.
Thanks, that's helpful. I agree that the former feels more natural but am not sure where that comes from.
Not relevant to the main text here, but based on this I suspect at least part of the reason white folks in the UK have lower life expectancy is rates of alcohol consumption. See figure 1, for example. I haven't dug into the report methodology so my confidence is low, but it at least tracks with my experience living there. These data on cause of death are interesting as well.
This might not be the place for a discussion of this, but I personally don't feel that the "robustness" of the Tomasikian chain of reasoning you note here is similar to the "robustness" of the idea that factory farms contain a crazy amount of suffering.
In the first instance, the specific chain of arrows above seems quite speculative, since we really have no idea how land use would change in a world with no factory farming. Are we that confident net primary productivity will increase? I'm aware there are good arguments for it, but I'd be surprised if someone couldn't come up with good arguments against if they tried.
More importantly, I don't think that's a sufficient reasoning chain to demonstrate that wild animal effects dominate? You'd need to show that wild+farmed animal welfare on post-factory-farming land uses is lower than wild+farmed animal welfare on current land uses, and that seems very sensitive to specific claims about moral weights, weights between types of suffering, empirical information about wild animal quality of life, what it means for a life to be net-negative, etc.
Or am I misunderstanding what you mean by robustness? I've just finished reading your unawareness sequence and mostly feel clueless about everything, including what it could mean for a reasoning chain to be robust.
I'm also very interested in this question, because it isn't obvious to me where to draw the line in fields like wild animal welfare. I think I know as little about nematode sentience + welfare, for example, as I do about possible far future beings.
Maybe one difference is that it at least feels possible in theory to get more information about nematodes, but not really possible to get more information about far future beings? Although I'm skeptical of my intuitions here, since maybe it's easier than I think to get information about far future beings and harder than I think to get information about nematode sentience.
@Eli Rose🔸 I think Anthony is referring to a call he and I had :)
@Anthony DiGiovanni
I think I meant more that there was a justification of the basic intuition bracketing is trying to capture as being similar to how someone might make decisions in their life, where we may also be clueless about many of the effects of moving home or taking a new job, but still move forward. But I could be misremembering!

Just read your comment more carefully and I think you're right that this conversation is what I was thinking of.