I think at the very least, I’d expect non-neglected AI safety to look like the global campaigns against climate change, or the US military-industrial complex:
- something that governments spend hundreds of billions of dollars on
- a vast ecosystem of nonprofits, think tanks, etc., with potentially tens of thousands of people just thinking about strategy, crafting laws, and so on
- legions more people working on specific technologies that help with various niche aspects of the problem (the analogue of more efficient solar panels in the climate case)
- lots of people who don’t think of themselves as doing anything altruistic at all, but who are helping because they are employed (directly or indirectly) by the vast system dedicated to solving the problem
- a very wide variety of approaches, including backup options (like geoengineering in the case of climate)
- a sense that, while there is more work to do, we are at least plausibly on track to deal with the problem in an adequate way
Just think about the incredible amount of effort put forward by, say, the United States to try to prevent Chinese military dominance and deter things like an invasion of Taiwan (and worse things like nuclear war), and how those efforts filter down into thousands and thousands of individual R&D projects, institutions, agreements, purchases, and so on. Certainly some of that effort is misdirected, and some projects and strategies are more impactful than others. But overall, I feel like this (or, similarly, the global effort to transition away from fossil fuels) is the benchmark for a pressing global problem being truly non-neglected in an objective sense.
People you hear in conversation might be using “non-neglected” to refer to a much higher bar, like “AI safety is no longer SO INCREDIBLY neglected that working on it is AUTOMATICALLY the most effective thing you can do, overruling other causes even if you have a big comparative advantage in some other promising area.” That might be true, depending on your personal situation and aptitudes! I certainly hope that AI safety becomes less neglected over time, and I think that has slowly been happening. But in a societal/objective sense, I think we still need a ton more work on AI safety.