Two points:
1. Why stop at insects? Why not write this same article about demodex mites, earthworms, or krill?
2. I think there's a big reason why the concerns of insects and smaller animals are dismissed that you haven't touched on, which is that any consideration of these animals leads to absurd conclusions, like that every moral pursuit of humanity up to now is actually meaningless compared to improving the lives of insects. Most people can see that this is not a fruitful avenue of thinking.
I think you're underestimating the average person by suggesting that the only reason they're not interested in insect welfare is entrenched social norms. Whereas there were reasonable alternatives to slavery, and there are reasonable alternatives to factory farming, I think the average person can intuit that there's no reasonable alternative to just politely ignoring the suffering of the quintillions of insects, worms and mites on the planet.
Your first point seems like a legitimate question to me. I've not read much about those animals, but I would assume there are many of them, perhaps far more than there are insects. I would be curious to read about indicators of their sentience. The author, however, described evidence of several indicators of insect sentience ("responding to anesthetic, nursing their wounds, making tradeoffs between pain and reward, cognitively modeling both risks and reward in decision-making, responding in novel ways to novel experiences, self-medicating"), but doesn't seem to think the animals you listed are conscious. I would guess they are missing some of these indicators.
Your second point is less interesting. A couple of your claims seem false, or at least incompatible. For example, the conclusion that every other moral pursuit of humanity is relatively meaningless if insects are given consideration also requires that helping insects be tractable, which you don't seem to think. If we cannot and could never help insects, the greatest moral pursuits must be other (likely more normal) things, which I suppose would make them relatively meaningful. If we do say that helping insects is tractable and conclude that other pursuits are relatively meaningless, we can still acknowledge that on an absolute scale those other pursuits are incredibly meaningful, and that many of those pursuits are instrumentally useful for our goal of helping insects.
You also make the claim that "the average person can intuit that there's no reasonable alternative to just politely ignoring the suffering of the quintillions of insects, worms and mites on the planet." Again, I think one ought to be skeptical of their intuitions, especially surrounding issues that they have very little knowledge of. A nascent field of research has sprung up around these issues, and I suspect that more insights and paths forward will emerge as we learn more. There are, however, things we can do already. Brian Tomasik has written "How to Kill Bugs Humanely," which almost everyone can apply in day-to-day life. A quick search of Wild Animal Initiative's research library revealed "Improving pest management for wild insect welfare," which says that "Agricultural pest insect management practices may be a particularly tractable avenue for improving the expected welfare of a large number of insects."
If I wanted to write something to disagree with the post, I'd have explored other avenues such as these:
The alleged indicators of sentience cited in the research aren't good at indicating sentience; here are some better ones, and here's why I think they're better
Some insects do show evidence of indicators cited in the research, but many don't
Insects generally fail to show evidence of the indicators that I think are best
The insects that are most likely to be sentient (based on some set of indicators) are also the hardest to help (or something else arguing intractability)
The methodology in the research coming up with moral weights/welfare capacities is weak (This would be a critique that I'd be particularly interested in from someone trained in research methods, and I think it's an easier target)
Extreme suffering matters so much more than moderate suffering that the likely aggregation of far more instances of moderate suffering is insufficiently significant to make intervening worthwhile
Altruists should be risk-averse, and insect work is risky in relevant ways (https://rethinkpriorities.org/research-area/how-can-risk-aversion-affect-your-cause-prioritization/)
Sentience isn't the trait we should be focused on; the metaethical foundations are weak
I think the early questions are particularly interesting and underexplored, but there are many other options too! I downvoted your comment because I think it doesn't effectively engage with the substance of the disagreement, not because I disagree. I would be excited to see more comments from people whose views don't overlap with mine, which currently lean towards supporting work on issues affecting small non-human animals, provided that they engage with the core disagreements in a meaningful way.
I didn't say anything about the tractability of insect welfare interventions but I'm sure there are many things you could do to help insects. Almost all of those things will be at the direct or indirect cost of people. There are very few worlds in which you can consider insects sentient and not go completely off the rails sacrificing human welfare to insect welfare.
If we do say that helping insects is tractable and conclude that other pursuits are relatively meaningless, we can still acknowledge that on an absolute scale those other pursuits are incredibly meaningful

In a world with limited resources, meaningfulness is necessarily measured on a relative scale to triage resources. A toddler dropping their ice cream is "absolutely important," but I don't spend much time daily preventing that when there are families struggling to put food on the table, or 600,000 people dying of malaria annually, or chickens in cages. When one moral issue is magnitudes greater than any existing moral issue, it requires a similarly large reorientation of attention and resources. I think you're too flippant in dismissing how disruptive this would be.
I may have made an incorrect assumption! I thought that when you said "the average person can intuit that there's no reasonable alternative to just politely ignoring the suffering of the quintillions of insects, worms and mites on the planet," you were arguing that solving the problem wasn't tractable.
Generally people on the EA Forum prioritize work on problems that do well under the ITN framework. If you suggested that we ignore the suffering, then perhaps you partly accept that there is suffering, and it's important, though now I'm curious whether you actually think that. Do you believe that insects suffer? If they do suffer, is it important?
I believe that there are hardly any actors in the insect welfare space, and that the resources allocated are very minimal. I guessed that you were aware of this situation as well and considered insect welfare neglected, at least in the sense that there is little being done to improve it (as opposed to in the sense that more resources should go towards it). Maybe you can correct me here too!
That left tractability, which I know is commonly questioned when the topic of insect welfare, especially for wild insects, comes up. I have this question too, despite there being some preliminary reason to think that there are some opportunities for useful work at scale.
I very much agree with you on the opportunity cost issue. The most likely source of donations and talented people for insect welfare work is the effective altruism community. Some of those resources (especially financial, I suspect) would presumably be diverted from global health and development work, which would mean sacrificing some human welfare.
You seem to be thinking more in terms of binaries and major changes than I would. If everyone were convinced that insect welfare was the best thing to work on, there would indeed be fundamental disruptions to the systems that are currently improving human welfare most cost-effectively. I do not think we are remotely close to being at risk for that sort of thing. While any reallocation would come with some loss of human welfare and life, amounts that could realistically be reallocated within the next few years could hardly be considered disruptive on a systemic level.
I also think some of the resources put towards insect welfare would support research that would be useful for future cause prioritization, and could result in meaningful increases or decreases in allocations to insect welfare in the future. I would be excited to learn that insects are not sentient, and we can reallocate resources back to other non-human animals or humans. I would also be happy to learn that we really had been missing something important for a long time, and we should be allocating far more to the insects. Though ultimately I would be aware of the large (on an absolute scale) human cost of the reallocation.
I was using meaningfulness differently than you are. Sometimes people feel negatively about discovering that their past efforts likely led to results that are far less meaningful than the results they could have gotten from doing different work. I think reframing the thought as, "My past work was very meaningful, but my future work can be far more meaningful than even that," is more productive than, "My past work was relatively meaningless, and my future work will be relatively meaningful." You seem to be using the word the way I'd choose to use importance. I think it's more appropriate to focus on importance only on a relative scale when doing cause prioritization, because as you say, we're doing triage. Reallocating scarce resources to the places they can have the greatest impact is the goal.
I'm not sure if I agree with you or not, but I don't know why you were getting so downvoted for this comment (before I strong-upvoted, just to balance things out).
I thought the karma system was supposed to be independent of agreement/disagreement? I want to see your side of the discussion explored in the comments. I don't think people should be downvoting this kind of objection!
Your point 1 seems like a very good question to me, and I would be interested to read the author's reply.
Your second point also seems like a reasonable response to the piece, and I'm sure represents what a lot of people would feel, especially if not familiar with EA. The author did a good job of anticipating and responding to lots of potential objections, but I don't think directly addressed this "doesn't this lead to absurd conclusions?" objection.
The whole argument does feel like it resembles a Pascal's mugging, in the same vein as strong longtermism. When you try to do expected value maximization using Bayesian subjective probabilities (e.g. around extinction risk or likelihood of insect sentience or intensity of insect experience), and then start considering situations with huge amounts of potential value, it does seem like a recipe for decision paralysis: "but look how big these numbers are, you can't be that certain they don't matter, surely?"
FWIW, while I didn't downvote the comment, I can see how folks would consider "Why stop at X?" a lazy "gotcha" argument or appeal to absurdity heuristic, which seems worth discouraging.
Would you give your wallet to a Pascal's mugger?
If yes: Guess what? I am a sorcerer from a parallel universe who has the ability to conjure arbitrary numbers of sentient beings into existence at will, and subject them to extreme torture. You tell me how unlikely you think this claim is. I will then threaten to torture 10x the reciprocal of that number of beings, unless you give me £100. I can send you my details and we can arrange the transfer.
If no: How do you explain this other than by an appeal to absurdity? I would love to know the solution to this problem.
Unless or until we have a better solution to this problem than "that's absurd", I think we have to allow appeals to absurdity, especially when used against an argument that bears some resemblance to this Pascal's mugger example, at least superficially.
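The sorcerer's trick above is pure arithmetic: whatever credence you assign, the threat is rescaled so that naive expected-value maximization always says refusing is worse than paying the £100. Here is a minimal sketch of that exploit (hypothetical illustration; the function name and numbers are mine, not from any of the linked posts):

```python
def expected_torture_if_refusing(credence: float) -> float:
    """Expected number of beings tortured if you refuse to pay,
    given your credence in the mugger's claim and a threat scaled
    to 10 / credence beings (the "10x the reciprocal" move)."""
    beings_threatened = 10.0 / credence
    # The credence and the threat size cancel: the expected harm
    # is always ~10 beings, no matter how skeptical you are.
    return credence * beings_threatened

# However low your credence, the expected harm never shrinks:
for p in (1e-3, 1e-9, 1e-30):
    assert abs(expected_torture_if_refusing(p) - 10.0) < 1e-6
```

The point of the sketch is that lowering your credence does no work: the mugger adjusts the threat to keep the expected value constant, which is why a straightforward EV calculation cannot refuse.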
I happen to have a response here that doesn't appeal to absurdity. :) (Cf. Karnofsky here.)
Haha, ok, fair enough, I was not expecting that response!
Your solution (and Karnofsky's) sounds very interesting to me. But I'll need to read both links in more depth to properly wrap my head around it.
A few questions though:
Karnofsky's worked example for applying his multi-model technique leads with: "does this action deviate greatly from 'normality'?" Why is this not just a more formalized version of the appeal to absurdity heuristic?
Not everyone is a galaxy-brain philosopher who can come up with complex blogposts like those to explain why giving their wallet to a Pascal's mugger is wrong, yet everyone gets the correct (presumably) answer to this thought experiment anyway. And I think most are getting there by using some kind of absurdity heuristic? I think that should count in favour of the usefulness of the appeal to absurdity heuristic! Really feels like there's a good galaxy-brain meme in this. (I get I'm rolling back here on my earlier suggestion that we could abandon the absurdity heuristic as soon as just one person could come up with a solution to the problem of Pascal's mugger.)
Back to the actual subject of this post: Do you think the approach outlined in your 2 links could be used as an argument against the overwhelming importance of insect suffering, at least for someone who was extremely uncertain about the likelihood of insect sentience or its intensity?
Thanks! I unfortunately don't have time to engage fully with this thread going forward, but briefly:
To be clear, I don't share Karnofsky's overall framework. I'm skeptical of the "regression to normality" criterion myself. (And I don't find his model of the problem behind Pascal's mugging probabilities compelling, since he still uses precise estimates.)
In the Pascal's mugging case, I think people have some fuzzy sense that the mugger's claim is made-up, which can be more carefully operationalized with imprecise credences. But if we can't even point to what our "this is absurd" reaction is about, and are instead merely asserting that our pretheoretic sense should dictate our decisions, I'm more skeptical. Especially when we're embracing an ethical principle most people would consider absurd (impartial altruism).
Appeal to absurdity is a reasonable objection and shouldn't be discouraged. We need to be able to say clearly why idea X doesn't also imply some similar absurd idea Y.
@tobycrisford 🔸 unfortunately on many animal welfare threads, more extreme dissenting views get downvoted to oblivion without strong upvotes (like mine and yours) to compensate. This pattern seems mostly to apply to animal welfare threads unfortunately, and I think more discourse would be encouraged if animal welfare supporters didn't obliterate dissenting views.
Only a handful of us, including myself and @Henry Howard 🔸, engage with different perspectives on these animal welfare threads, and I think it would be more useful if these kinds of comments were encouraged, even if only to better understand what many (probably most) non-EA people might be intuiting and thinking when they see these arguments.
I'm mostly not engaging with these threads because I often don't find the engagement particularly rewarding, unfortunately. I'll keep trying from time to time :D
I think @Henry Howard 🔸's 2 points are very important, even if you don't necessarily agree with them.
Agreed Nick. One of my recent comments has 7 agrees, 11 disagrees, but −10 karma. If 7 people agree with a comment, it's unlikely to be disruptive trolling that needs to be buried.
Clear misuse of voting and evidence of heavy forum bias that I sense but can't prove.
I'm not sure it's "misuse" of voting exactly; I think people should vote how they want. I just think this downvoting pattern is unfortunate for encouraging discourse and a diversity of views.
I'm doubtful that any of those are conscious, but I agree that given that it's possible they are, their interests matter a decent amount in expectation, though probably less than insects.
If the world is very weird then the right ethical view should get weird results. For more on this see https://wonderandaporia.substack.com/p/surely-were-not-moral-monsters and https://benthams.substack.com/p/lyman-stone-continues-being-dumb?utm_source=publication-search starting at "Lyman's a pro-natalist". A view shouldn't be judged by matching intuitions about the actual world if those intuitions were formed unreliably.
Why? The average person says the same thing about insects.