FWIW, I think there are some complicating factors, which makes me think some WAW interventions could be among my top 1-3 priorities.
Some factors:
Maybe, if we are in short-lived simulations, the gap in value between near-term and long-term interventions is smaller than expected.
Maybe there are nested minds, which might complicate things?
There may be some overlap between the digital minds issue you pointed out and WAW, e.g., simulated ecosystems which could contain a lot of suffering. It seems that a lot of people might more easily see “more complex” digital minds as sentient and deserving of moral consideration, but digital minds “only” as complex as insects wouldn’t be perceived as such (which might be unfortunate if they can indeed suffer).
Also, a while ago I wrote a post on the possibility of microorganism suffering. It was probably a bit too weird and didn’t get much attention—but given sufficient uncertainty about philosophy of mind, the scope of the problem is potentially huge.[1] I suspect this could really be one of the biggest near-term issues. To quote the piece, there are roughly “10^27 to 10^29 [microbe] deaths per hour on Earth” (~10 orders of magnitude greater than the number of insects alive at any time, I believe).
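The ~10-OOM comparison can be sanity-checked with quick arithmetic. This is just an illustrative sketch: the figure of ~10^18 insects alive at any time is my assumption of a commonly cited rough estimate, not a number from the quoted post.

```python
import math

# Midpoint of the quoted 10^27 to 10^29 microbe deaths per hour.
microbe_deaths_per_hour = 10**28

# Assumed rough estimate of insects alive at any one time (~10^18-10^19);
# using the low end here.
insects_alive = 10**18

# Gap in orders of magnitude between the two quantities.
gap_ooms = math.log10(microbe_deaths_per_hour / insects_alive)
print(gap_ooms)  # prints 10.0, i.e. ~10 orders of magnitude
```

Using the high ends of both ranges instead would still leave a gap of roughly 10 orders of magnitude, so the comparison is not very sensitive to which point estimates are chosen.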
The problem with possibilities like these is that it complicates the entire picture.
For instance, if microorganism suffering were the dominant source of suffering in the near term, then the near-term value of a farm animal intervention would be dominated by how it changes microbe suffering, which makes that a factor to consider when choosing between farm animal interventions.
I think it’s less controversial to do some WAW interventions through indirect effects and/or by omission (e.g., changing the distribution of funding across interventions that affect the amount of microbe suffering in the near term). If there’s a risk of people creating artificial ecosystems extraterrestrially or in simulations in the medium term, then WAW advocacy might help discourage creating that wild animal suffering. In addition, as Tomasik said, “actions that reduce possible plant/bacteria suffering [in a Tomasik sense of limiting NPP] are the same as those that reduce [wild] animal suffering”[2], which could suggest maintaining the prioritization of WAW to potentially do good in this other area as well.
FYI, I don’t consider this a “Pascal’s mugging”. It seems wrong for me to be very confident that microbes don’t suffer while at the same time thinking that other humans and non-human animals do, given the huge uncertainty that comes from being unable to take the perspective of any other possible mind (the problem of other minds).
To be clear, Tomasik gives less weight than I do to microbes: “In practice, it probably doesn’t compete with the moral weight I give to animals”. I think he and I would both agree that ultimately it’s not a question that can be resolved, and that it’s, in some sense, up to us to decide.
Hi Elias. Thank you for raising many interesting points. Here is my answer to the first part of your comment:
I agree about short-lived simulations. But as I said, for short-term work, farmed animal welfare currently seems more promising to me. Also, current WAW work (research and promoting WAW in academia) will have its actual impact on animals much later than most farmed animal advocacy work. Hence, the impacts of farmed animal advocacy are better protected against our simulation shutting down.
If there are nested minds, then it’s likely that there are more of them in big animals than in small animals. And my guess would be that nested minds are usually happy when the animal they are in is healthy and happy. So this would be an argument for caring more about big animals. This would make me deprioritize WAW further, because the case for WAW depends on caring about zillions of small animals. I’m not sure how ecosystems being conscious would change things though.
I find it somewhat unlikely that a large fraction of computational resources of the future will be used for simulating nature. Hence, I don’t think that it’s amongst the most important concerns for digital minds. I discuss this in more detail here.
I also worry about people not caring about small digital minds, but I’m not convinced that work on WAW is best suited for addressing it. For example, promoting welfare in insect, fish, and chicken farms might be better suited for that, because then we don’t have to argue against objections like “but it’s natural!” Direct advocacy for small digital minds might be even better. I don’t think the time is ripe for that though, so we could simply invest money now to use for that purpose later, or tackle other longtermist issues.
You could find many more complications by reading Brian Tomasik’s articles. But Brian himself seems to prioritize digital minds to a large degree these days. This suggests that those complications don’t change the conclusion that digital minds are more important.
I plan to read and think about microbes soon :)
I’ve updated somewhat from your response and will definitely dwell on those points :)
And glad you plan to read and think about microbes. :) Barring the (very complicated) nested minds issue, the microbe suffering problem is the most convincing reason for me to put some weight on near-term issues (though, as a disclaimer, I’m currently putting most of my effort into longtermist interventions that improve the quality of the future).
Sorry, I still plan to look into microbes someday, but I no longer know when I’ll get to it. I suddenly got quite busy, and I am extremely slow at reading. For now I’ll just say this: I criticized the WAW movement as I currently see it—that is, a WAW movement that focuses neither on microbes nor on decreasing wild animal populations. I simply don’t yet have an opinion about a WAW movement that would focus on such things. There were some restrictions on the kind of short-term interventions I could recommend in my intervention search, and interventions that would help microbes (or help wild populations only by reducing their populations) simply didn’t qualify.