Are there examples of EA causes that had EA credence and financial support but then lost both, and how did discussion of them change before and after? Also vice-versa, are there examples of causes that had neither EA credence nor support but then gained both?
The EA Survey has info on cause prioritization changes over time. Summary is:

Holden Karnofsky wrote "Three Key Issues I've Changed My Mind About" on the Open Philanthropy blog in 2016. On AI safety, for example:

> I initially guessed that relevant experts had strong reasons for being unconcerned, and were simply not bothering to engage with people who argued for the importance of the risks in question. I believed that the tool-agent distinction was a strong candidate for such a reason. But as I got to know the AI and machine learning communities better, saw how Superintelligence was received, heard reports from the Future of Life Institute’s safety conference in Puerto Rico, and updated on a variety of other fronts, I changed my view.