I’m the coworker in question, and to clarify a little, my position was more like “It’s probably quite useful to build expertise in some area or cluster of areas by building lots of content knowledge in that area/those areas. And this seems worth doing for a typical full-time EA researcher even at the cost of having less time available to work on building general reasoning skills.” And that in turn is partly because I’d guess that it’d be much harder for a typical full-time EA researcher to make substantial further progress on their general reasoning skills than on their content knowledge.
I’d agree there’s a major “undersupply” of general reasoning skills in the sense that all humans are way worse at general reasoning than would be ideal and than seems theoretically possible (if we stripped away all biases, added loads of processing power, etc.). I think Linch and I disagree more on how easy it is to make progress towards that ideal (for a typical full-time EA researcher), rather than on how valuable such progress would be.
(I think we also disagree on how important more content knowledge tends to be.)
And I don’t think I’d say this for most non-EAs. E.g., I think I might actually guess that most non-EAs would benefit more from either reading Rationality: AI to Zombies or absorbing its ideas in some other way that’s a better fit for them (e.g., workshops, podcasts, discussions), rather than spending the same amount of time learning facts and concepts from important domains. (Though I feel unsure precisely what I’m claiming here. E.g., I’d feel tempted to put “learning some core concepts from economics and some examples of how they’re applied” in the “improving general reasoning” bucket in addition to the “improving content knowledge” bucket.)
In any case, all of my views here are vaguely stated and weakly held, and I’d be very interested to hear Ajeya’s thoughts on this!
In my reply to Linch, I said that most of my errors were probably in some sense “general reasoning” errors, and a lot of what I’m improving over the course of doing my job is general reasoning. But at the same time, I don’t think most EAs should spend a large fraction of their time on things that look like explicitly practicing general reasoning in an isolated or artificial way (e.g., re-reading the Sequences, studying probability theory, or doing calibration training). I think it’s good to spend most of your time trying to accomplish something straightforwardly valuable, which will often incidentally require building up some content expertise. It’s just that a lot of the benefit of that work will probably come through improving your general skills.
Apologies if I misrepresented your stance! I was just trying to give my own very rough overview of what you said. :)
Yeah, that makes sense, and no need to apologise. I think your question was already useful without me adding a clarification of what my stance happens to be. I just figured I may as well add that clarification.