The thing to watch is whether the media attention translates into action: more than a few hundred people working on the problem as such rather than getting distracted, and governments prioritizing it even when it conflicts with competing goals (like racing to the precipice). One might have thought Covid-19 meant that GCBR (global catastrophic biological risk) pandemics would stop being neglected, but that doesn't seem right. The Biden administration asked for Congressional approval of a pretty good pandemic prevention bill (very similar to what EAs have suggested), but it was rejected because it's still seen as a low priority. And engineered pandemics remain off the radar, with little improvement even in the wake of a recent massive pandemic.
AI safety (AIS) has always had outsized media coverage relative to the number of people actually doing something about it, and that may continue.
I feel like this does not really address the question?
A possible answer to Rockwell's question might be: "If we have 15,000 scientists working full-time on AIS, then I consider AIS to no longer be neglected" (this is hypothetical; I do not endorse it, and it's also not as contextualized as Rockwell would want).
But maybe I am interpreting the question too literally, and you are making a reasonable guess about what Rockwell wants to hear.
Hi Carl,
Do you have any thoughts on how the expected impact of the few hundred people working most directly on AGI safety compares with that of the rest of the world (on mitigating the risks from advanced AI)? I suppose a random person from those few hundred will have a much greater (positive or negative) impact than a random person from the rest of the world, but it is not obvious to me that we can round the rest of the world's (positive or negative) impact on mitigating the risks from advanced AI to zero, and increased awareness in the rest of the world will tend to increase its expected impact (for better or worse).
In terms of biorisk, even if Covid-19 did not result in more people working on fighting existential pandemics, it may still have increased the number of people working on pandemics generally, which arguably has positive flow-through effects for mitigating the existential ones. In addition, Covid-19 may have increased the amount of resources that would be directed towards fighting a pandemic if one happens. It is unclear to me how large these effects are relative to a given increase in the number of people working most directly on reducing the risk from existential pandemics.
I think there are whole categories of activity that are not being tried by the broader world, but that people focused on the problem attend to, with big impacts in both bio and AI. That kind of work has its own diminishing returns curve.