Thank you for your comment. I especially appreciate it coming from someone with a specialized focus on AI safety.
I sometimes have the impression that most of those working in AI safety believe only a small minority of their peers seriously consider the risk of adverse impacts on non-human life from advancing AI. My own suspicion is that a majority in the field give at least some consideration to how advancing AI may affect other lifeforms. If that's true, it may not be common knowledge only because some in AI safety are reluctant to say so out of a concern they won't be taken seriously by their peers.
I'm not aware of any survey data that establishes either way how much of a priority the welfare of non-human lifeforms is for those working in AI safety. Still, your comment makes the endorsement of cross-cause collaboration in EA more visible, and it can inspire others who feel the same to speak up in ways that catalyze a positive feedback loop of such collaboration.
Given how much you care about this, and since it seems you may still be getting acquainted with the intersection of animal welfare and AI safety, I'll mention how an independent cause at that intersection has been gradually growing over the last several years.
David Pearce, Brian Tomasik and Andrés Gómez Emilsson are three utilitarians who, perhaps more than anyone else over almost 20 years, have inspired the launch of longtermist animal welfare as its own field. That effort now has at least a few dozen researchers dedicated full time to it across a handful of organizations, perhaps most prominently the research institutes Rethink Priorities and the Center on Long-Term Risk.
Among those in AI safety in the Bay Area whose work you may be more familiar with, Buck Shlegeris is one of the biggest proponents of this approach. Rob Bensinger, Andrew Critch and Nate Soares are three other prominent individuals who've publicly expressed their appreciation of others doing this work.
I don't mention all this with an expectation that you take significant time away from your specialized work in AI safety to dive deeply into this other research. I only figure you'd appreciate knowing the names of some of the researchers and organizations involved, so you or your curious peers can learn more whenever you have spare time for it.