Among effective altruists who believe the cause most worthy of concern is that of existential risk reduction, ensuring A.I. technology doesn’t one day destroy humanity is a priority. However, the oft-cited second greatest existential risk is that of a (genetically engineered) global pandemic.
An argument in favor of focusing on developing safer A.I. technology, made in particular by the Machine Intelligence Research Institute, is that a fully general A.I. which shared human values and safeguarded us would have the intelligence to reduce existential risk better than the whole of humanity could anyway. Humanity would be building its own savior from itself, especially if a ‘Friendly’ A.I. could be built within the next century, when other existential risks might come to a head. For example, the threat of other potentially dangerous technologies would be neutralized once controlled by an A.G.I. (Artificial General Intelligence), and the coordination problem of mitigating climate change damage would be solved by the A.G.I. peacefully. The A.G.I. could also predict unforeseen events threatening humanity better than we could ourselves, and mitigate the threat. For example, a rogue solar storm that human scientists would be unable to detect, given their current understanding and state of technology, might be predicted by an A.G.I., which would recommend to humanity how to minimize the loss.
However, the earliest predictions for when an A.G.I. could be completed are around 2045. Given the state of biotechnology, and the rate of progress within the field, it seems plausible that a (genetically engineered) pathogen could cause a global pandemic before humanity has an A.G.I. to act as the world’s greatest epidemiology computer. Considering that reducing existential risk is such a vocal cause area (currently) within effective altruism, I’m wondering why neither they, nor the rest of us, are paying more attention to the risk of a global pandemic.
I mean, obviously the existential risk reduction community is so concerned about A.G.I. because of the work of the Machine Intelligence Research Institute, Eliezer Yudkowsky, Nick Bostrom, and the Future of Humanity Institute. That’s all fine, and my friends and I can now cite predictions and arguments, and explain with decent examples what this risk is all about.
However, I don’t know nearly as much about the risk of a global pandemic, or of genetically engineered pathogens. I don’t know why we don’t have this information, or where to get it, or even how much we should raise awareness of this issue, because I don’t have that information, either. If there is at least one more existential risk worth having a cursory knowledge of, this seems to be the one.
I’m thinking about contacting Seth Baum of the Global Catastrophic Risks Institute about this on behalf of effective altruism to ask his opinion on this issue. Hopefully, he or his colleagues can help me find more information, give me an assessment of how experts rate this existential risk compared to others, and tell me which organizations are doing research and/or raising awareness about it. Maybe the GCRI will have a document we can share here, or they’d be willing to present one to effective altruism. If not, I’ll write something on it for this forum myself. If anyone has feedback, comments, or an interest in getting involved in this investigation process, reply publicly here, or in a private message.
There’s quite a bit of interest in pandemics at FHI. Most of the pandemic scenarios look like they would be ‘merely’ global catastrophes rather than existential catastrophes, but I don’t think we can rule the latter out entirely. The policy proposal I wrote up here was aimed primarily at reducing pandemic risk.
There’s more attention from governments already on questions of how synthetic biology should be regulated. It’s unclear what that means for the relative value of pursuing the question further, though.
We certainly talk about this a lot at FHI and do a fair amount of research and policy work on it. CSER is also interested in synthetic biology risk. I agree that it is talked about a lot less in wider EA circles though.