Hi Elias. Thank you for raising many interesting points. Here is my answer to the first part of your comment:
I agree about short-lived simulations. But as I said, for short-term work, farmed animal welfare currently seems more promising to me. Also, current WAW work (research and promoting WAW in academia) will only have an actual impact on animals much later than most farmed animal advocacy work. Hence, farmed animal advocacy impacts are better protected against our simulation being shut down.
If there are nested minds, then there are likely more of them in big animals than in small animals. And my guess would be that nested minds are usually happy when the animal they are in is healthy and happy. So this would be an argument for caring more about big animals. It would make me deprioritize WAW further because the case for WAW depends on caring about zillions of small animals. I'm not sure how ecosystems being conscious would change things, though.
I find it somewhat unlikely that a large fraction of the future's computational resources will be used for simulating nature. Hence, I don't think it's among the most important concerns for digital minds. I discuss this in more detail here.
I also worry about people not caring about small digital minds, but I'm not convinced that work on WAW is best suited for addressing that. For example, promoting welfare in insect, fish, and chicken farms might be better suited for it, because then we don't have to counter objections like "but it's natural!" Direct advocacy for small digital minds might be even better. I don't think the time is ripe for that, though, so we could simply invest money now to use for that purpose later, or tackle other longtermist issues.
You could find many more complications by reading Brian Tomasik's articles. But Brian himself seems to largely prioritize digital minds these days, which suggests that those complications don't change the conclusion that digital minds are more important.
I plan to read and think about microbes soon :)
I’ve updated somewhat from your response and will definitely dwell on those points :)
And I'm glad you plan to read and think about microbes. :) Barring the (very complicated) nested minds issue, the microbe suffering problem is the most convincing reason for me to put some weight on near-term issues (though, as a disclaimer, I'm currently putting most of my effort into longtermist interventions that improve the quality of the future).