Thank you for writing this! I have major disagreements with you on this.
However, to me, WAW doesn’t seem to be the most important thing for the far future—not even close. Digital minds could be much more efficient, thrive in environments where biological beings can’t, utilize more resources, and seem more likely to exist in huge numbers.
(in a separate paragraph) The tractability of trying to reduce digital mind suffering might be even lower than for longtermist animal welfare work, but the scale is much much higher.
The first passage I quoted is plausible, or even likely to be true (I don’t have informed views on this yet). But even assuming it is true, there is something wrong with using this argument to claim that “Hence, some other longtermist work seems much more promising to me than longtermist animal welfare work.” That something is the difference in the standards of rigor you applied to the two cause areas. You applied a high level of rigor in evaluating the tractability of WAW as a non-longtermist cause area (so much so that you even wrote a shortform on it) and concluded that “There seem to be no cost-effective interventions to pursue now”. But you didn’t apply the same level of rigor in evaluating the tractability of helping future digital minds; in fact, I believe you didn’t attempt to evaluate it at all. If you used the same standard for WAW and digital minds as cause areas, either you would evaluate neither of them, which leads to conclusions like “WAW is far more important than factory farming” (a view I believe you moved away from partly because you evaluated tractability), or you would evaluate both of them, in which case you might not necessarily conclude that WAW is far less important than digital minds from the longtermist perspective.
In fact, I think it’s likely that your prioritization between digital minds and WAW might switch. First, there are still huge uncertainties about whether there will actually be digital minds who are actually sentient. The uncertainties are much higher than for wild animals. We know both that sentient wild animals can exist, and that a lot of them will exist (for a certain amount of time), but we have uncertainties on whether sentient digital minds are possible, and also, if they are possible, whether they will actually be produced in huge numbers.

Also, in terms of tractability, there is little evidence for most people to think that there is anything we can do now to help future digital minds. As far as I know, Holden Karnofsky, Sentience Institute (SI), and the Center on Long-Term Risk (CLR) are the only three EA-affiliated entities that work on digital minds. They might provide some evidence that something can be done, but I suspect the update is not large, as CLR doesn’t disclose most of their research, SI is still at a very early stage of their digital mind research, and Holden Karnofsky doesn’t seem to have said much about what we can do to help digital minds in particular. Of course, research to figure out whether there could be interventions could itself be an impactful intervention. But that’s true for WAW too. If this is a reason that digital minds are more important than longtermist animal welfare (note: this would imply digital minds’ welfare is also more important than “longtermist human welfare”), then I wonder why the same form of argument wouldn’t make WAW way more important than factory farming, and lead you to conclude: “The tractability of trying to reduce wild animal suffering might be lower than work in tackling factory farming, but the scale is much much higher.”
Also, if you do use CLR and SI as your main evidence for believing that helping digital minds is tractable, I am afraid you might have to change another conclusion in your post. SI is not entirely optimistic that the future with digital minds is going to be positive (and from chatting with their people, my impression is that they lean pessimistic), and CLR seems to think that astronomical suffering from digital minds is pretty much the default future scenario. If you put high credence in their views about digital minds, I can’t see how you would conclude that “reducing x-risks is much much more promising”. To be fair to SI and CLR, my understanding is that they are strongly opposed to extremely unpopular and disturbing ideas such as deliberately increasing X-risk, on the grounds that doing so would actually increase suffering-risks. I believe this is the correct position to hold for people who think the future is in expectation negative. But at a minimum, if you put high credence in SI and CLR’s views, you should probably be at least skeptical of the view that decreasing X-risk is a top priority.
NOTE 1 (on the last paragraph): I struggled a lot in writing the last sentence, because I am clearly being self-defeating by saying it right after expressing what I called “the correct position”.
NOTE 2: Some longtermists define X-risk as the extinction of intelligent life OR the “permanent and drastic destruction of its potential for desirable future development”. Under this definition, S-risk seems quite clearly to be a form of X-risk, so it is possible for someone who cares solely about S-risk to claim that their priority is reducing X-risk. But operationally speaking, it seems that the terms S-risk and X-risk are used entirely separately.
NOTE 3: Personally, I have an argument against increasing extinction risk that is different from cooperative reasons. Even if one holds that the future is in expectation negative, it doesn’t necessarily follow that it is better for earth-originated intelligent beings to go extinct now, because it is possible that most suffering in the future will be caused by intelligent beings not originating from earth. In fact, if there are many non-earth-originated intelligent beings, it seems extremely likely that most future suffering (or well-being) will be created by them, not “us”. Given that we are a group of intelligent beings who are already thinking about S-risk (after all, we have SI and CLR), we have shown ourselves to be the kind of intelligent beings who could at least possibly develop into beings who care about S-risk; maybe this justifies humanity continuing even under the negative-future view.
Hi Fai, I appreciate your disagreements. Regarding the tractability of helping digital minds: I participated in CLR’s six-week S-risk Intro Fellowship and I thought that their work is quite promising. For example, many digital-mind s-risks come from an advanced AI that could be developed soon. CLR has connections with some of the organizations that might develop advanced AI. So it seems plausible to me that they could reduce s-risks by impacting how AI is developed. You can see CLR’s ideas on how to influence the development of AI to reduce s-risks on their publications page [edit 2023-02-21: actually, I’m unsure if it is easy to learn their ideas from that page; this is not how I learnt them, so I regret mentioning it]. Some of their other work seems promising to me too. I don’t see such powerful levers for animals in longtermism, but perhaps you can convince me otherwise. I am not familiar with the work of SI. (I’ll address your other points separately.)
First, there are still huge uncertainties about whether there will actually be digital minds who are actually sentient. The uncertainties are much higher than for wild animals. We know both that sentient wild animals can exist, and that a lot of them will exist (for a certain amount of time), but we have uncertainties on whether sentient digital minds are possible, and also, if they are possible, whether they will actually be produced in huge numbers.
I agree but I don’t think this changes much. There can be so many digital minds for so long, that in terms of expected value, I think that digital minds dominate even if you think that there is only a 10% chance that they can be sentient, and a 1% chance that they will exist in high numbers (which I think is unreasonable). I explain why I think that here. Although I’ve just skimmed it and I don’t think I did a great job of it, I remember reading a much better explanation somewhere; I’ll try to find it later.
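To make the shape of that expected-value claim concrete, here is a minimal sketch. The two probabilities are the ones above; the scale factor is a purely illustrative placeholder, not an estimate from either of us:

```python
# Illustrative only: the probabilities are from the comment above,
# the scale factor is a made-up placeholder, not anyone's actual estimate.
p_sentient = 0.10       # assumed chance that digital minds can be sentient
p_high_numbers = 0.01   # assumed chance that they are created in huge numbers

# Hypothetical scale: how many times more digital minds than wild animals
# there would be in the "huge numbers" scenario.
scale_if_high = 1e6

# Expected scale of digital minds, measured relative to wild animals (= 1).
expected_digital_scale = p_sentient * p_high_numbers * scale_if_high
print(expected_digital_scale)  # 1000.0 -> digital minds still dominate in expectation
```

The point is only that if the “huge numbers” scenario is large enough, even heavily discounted probabilities leave digital minds dominant in expectation.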
For now, I’ll just add one more argument: Stuart Armstrong makes it seem like it’s not that difficult to build a Dyson Sphere by disassembling a planet like Mercury. I imagine that the materials and energy from disassembling planets could probably also be used to build A LOT of digital minds. Animals are only able to use resources from the surface layer of a small fraction of planets, and they are not doing it that efficiently. Anyway, I want to look into this topic more deeply myself; I may write more here when I do.
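As a rough back-of-envelope for the scale gap (this comparison is mine, not Armstrong’s, and it only uses standard physical constants):

```python
import math

# Energy a full Dyson sphere could capture vs. sunlight intercepted by Earth,
# which is roughly the energy budget that surface-dwelling biological life runs on.
solar_luminosity_w = 3.8e26    # total power output of the Sun, in watts
solar_constant_w_m2 = 1.36e3   # solar irradiance at Earth's distance, W/m^2
earth_radius_m = 6.37e6        # Earth's radius, in metres

# Sunlight intercepted by Earth's cross-sectional disc.
earth_intercepted_w = solar_constant_w_m2 * math.pi * earth_radius_m**2

ratio = solar_luminosity_w / earth_intercepted_w
print(f"Earth intercepts ~{earth_intercepted_w:.1e} W; a Dyson sphere captures ~{ratio:.0e}x more")
```

So even before counting the materials of disassembled planets, the raw energy budget differs by roughly nine orders of magnitude.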
Thank you for your replies Saulius.

Participating in CLR’s fellowship does make you more informed about their internal views. Thank you for sharing that. I am personally not convinced by CLR’s open publications that the things they describe would in expectation reduce s-risk substantially. But maybe that’s due to my lack of mathematical and computer science capabilities.
I agree but I don’t think this changes much. There can be so many digital minds for so long, that in terms of expected value, I think that digital minds dominate even if you think that there is only a 10% chance that they can be sentient, and a 1% chance that they will exist in high numbers (which I think is unreasonable).
I would have the same conclusion if I had the same probabilities you assigned, and the same meaning of “high numbers”. I believe my credence on this should depend on whether we are the only planet with civilization now. If yes, and if by high numbers we mean >10,000x the expected number of wild animals there will be in the universe, my current credence that there actually will be a high number of digital beings created is <1/10000 (in fact, contrary to what you believe, I think a significant portion of this would come from the urge to simulate the whole universe’s history of wild animals). BTW, I change my credence on these topics rapidly and by orders of magnitude, and there are many considerations related to this. So I might have changed my mind by the next time we discuss this.
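One way to spell out the arithmetic behind that credence (the two numbers are the ones in this paragraph, treated as point values purely for illustration):

```python
# If "high numbers" means >10,000x the expected number of wild animals,
# and the credence that this scenario obtains is <1/10,000, the two factors
# roughly cancel, so digital minds no longer obviously dominate in expectation.
scale_if_high = 10_000        # digital minds relative to wild animals in that scenario
credence_high = 1 / 10_000    # credence that the scenario obtains

expected_digital_scale = credence_high * scale_if_high  # wild animals = 1
print(expected_digital_scale)  # 1.0 -> comparable to wild animals, not dominant
```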
But I do have other considerations that would likely make me conclude that, if there are ways to reduce digital being suffering, this is a priority, or even the priority. These considerations can be summarized in one question: if sentient digital beings can exist and will exist, how deeply will they suffer? It seems to me that on digital (or even non-biological analog) hardware, suffering could be much more intense and run much faster than on biological hardware.
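For a very rough sense of the “run much faster” part (ballpark figures only, and it is an open question whether anything morally relevant scales with clock speed):

```python
# Ballpark comparison of signalling rates; purely illustrative.
neuron_max_firing_hz = 1e3   # neurons fire at most on the order of ~1,000 Hz
digital_clock_hz = 1e9       # modern processors run on the order of GHz

speedup = digital_clock_hz / neuron_max_firing_hz
print(f"Digital hardware switches roughly {speedup:.0e}x faster than neurons fire")  # ~1e6x
```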