I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer’s.
You can give me positive and negative feedback here.
I haven’t read the papers, but I am surprised that you don’t think they are useful from an x-risk perspective. The second paper, “A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning”, seems highly relevant to forecasting AI progress, which, in my opinion, is one of the most useful AI safety interventions.
The OP’s claim seems overstated, and I’d guess that many people working on AI safety would disagree with it.