Yea, that seems bad. It felt like there was a big push a few years ago to build a huge AI safety researcher pipeline, and now I’m nervous that we don’t actually have the funding to support everyone coming through that pipeline, for example.
For sure. It’s not only the lack of funding to support the pipeline; there also seems to be increasing concern around the benefit-to-harm tradeoff of technical AI research.
Perhaps, like the heavy correction against earning to give a few years ago (which now seems likely to have been a mistake), there’s a lesson here about not overcorrecting against the status quo in any direction too quickly...
One obvious way EA researchers could help improve the situation is to use comments like these to highlight what’s lacking and discuss where to improve things. :)