I'm the coworker in question, and to clarify a little, my position was more like "It's probably quite useful to build expertise in some area or cluster of areas by building lots of content knowledge in that area/those areas. And this seems worth doing for a typical full-time EA researcher even at the cost of having less time available to work on building general reasoning skills." And that in turn is partly because I'd guess that it'd be much harder for a typical full-time EA researcher to make substantial further progress on their general reasoning skills than on their content knowledge.
I'd agree there's a major "undersupply" of general reasoning skills in the sense that all humans are way worse at general reasoning than would be ideal and than seems theoretically possible (if we stripped away all biases, added loads of processing power, etc.). I think Linch and I disagree more on how easy it is to make progress towards that ideal (for a typical full-time EA researcher), rather than on how valuable such progress would be.
(I think we also disagree on how important more content knowledge tends to be.)
And I don't think I'd say this for most non-EAs. E.g., I think I might actually guess that most non-EAs would benefit more from either reading Rationality: From AI to Zombies or absorbing the ideas from it in some other way that better suits the person (e.g., workshops, podcasts, discussions) than from spending the same amount of time learning facts and concepts from important domains. (Though I guess I feel unsure precisely what I'm saying or what it means. E.g., I'd feel tempted to put "learning some core concepts from economics and some examples of how they're applied" in the "improving general reasoning" bucket in addition to the "improving content knowledge" bucket.)
In any case, all of my views here are vaguely stated and weakly held, and I'd be very interested to hear Ajeya's thoughts on this!
In my reply to Linch, I said that most of my errors were probably in some sense "general reasoning" errors, and a lot of what I'm improving over the course of doing my job is general reasoning. But at the same time, I don't think that most EAs should spend a large fraction of their time doing things that look like explicitly practicing general reasoning in an isolated or artificial way (for example, re-reading the Sequences, studying probability theory, doing calibration training, etc.). I think it's good to be spending most of your time trying to accomplish something straightforwardly valuable, which will often incidentally require building up some content expertise. It's just that a lot of the benefit of those things will probably come through improving your general skills.
Apologies if I misrepresented your stance! I was just trying to give my own very rough overview of what you said. :)
Yeah, that makes sense, and no need to apologise. I think your question was already useful without me adding a clarification of what my stance happens to be. I just figured I may as well add that clarification.