However, investing in general reasoning doesn't often look like "explicitly practicing general reasoning" (e.g. doing calibration training, studying probability theory or analytic philosophy, etc.). It's usually incidental improvement that happens over the course of a particular project (which will often involve developing plenty of content knowledge too).
Given that, could you say a bit more about how "investing in general reasoning" differs from "just working on projects based on what I expect to be directly impactful / what my employer said I should do", and from "trying to learn content knowledge about some domain(s) while forming intuitions, theories, and predictions about those domain(s)"?
I.e., concretely, what does your belief that "investing in general reasoning" is particularly valuable lead you to spend more or less time doing (compared to if you believed content knowledge was particularly valuable)?
Your other reply in this thread makes me think that maybe you actually think people should basically just spend almost all of their time directly working on projects they expect to be directly impactful, and trust that they'll pick up both improvements in their general reasoning skills and content knowledge along the way?
For a concrete example: About a month ago, I started making something like 3-15 Anki cards a day as I do my research (as well as learning random things on the side, e.g. from podcasts), and I'm spending something like 10-30 minutes a day reviewing them. This will help with the specific, directly impactful things I'm working on, but it's not me directly working on those projects; it's an activity that's more directly focused on building content knowledge. What would be your views on the value of that sort of thing?
(Maybe the general reasoning equivalent would be spending 10-30 minutes a day making forecasts relevant to the domains one is also concurrently doing research projects on.)
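(For concreteness, calibration on such daily forecasts could be checked with something like the Brier score, i.e. the mean squared error between one's stated probabilities and the eventual binary outcomes. Here's a minimal sketch in Python; the helper function and the sample data are entirely made up for illustration, not anything anyone in this thread actually does:)

```python
# Hypothetical sketch: scoring a batch of resolved forecasts with the Brier score.
# Lower is better: 0.0 is perfect, and always guessing 0.5 scores 0.25.

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and binary outcomes."""
    return sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)

# (probability assigned, whether the event actually happened) -- invented data
resolved = [(0.8, True), (0.3, False), (0.6, False), (0.9, True)]
print(f"Mean Brier score: {brier_score(resolved):.3f}")  # prints 0.125
```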
Personally, I don't do much explicit, dedicated practice or learning of either general reasoning skills (like forecasts) or content knowledge (like Anki decks); virtually all of my development on these axes comes from "just doing my job." However, I don't feel strongly that this is how everyone should be; I've just found that this sort of explicit practice holds my attention less and subjectively feels like a less rewarding and efficient way to learn, so I don't invest in it much. I know lots of folks who feel differently, and do things like Anki decks, forecasting practice, or both.
Oh, actually, that all mainly relates to just one underlying reason why the sort of question Linch and I have in mind matters, which is that it could inform how much time EA researchers spend on various types of specific tasks in their day-to-day work, and what goals they set for themselves on the scale of weeks/months.
Another reason this sort of question matters is that it could inform whether researchers/orgs:
1. Invest time in developing areas of expertise based essentially around certain domains of knowledge (e.g., nuclear war, AI risk, politics & policy, consciousness), and try to mostly work within those domains (even when they notice a specific high-priority question outside of that domain which no one else is tackling, or when someone asks them to tackle a question outside of that domain, or similar)
2. Try to become skilled generalists, tackling whatever questions seem highest priority on the margin in a general sense (without paying too much attention to personal fit), or whatever questions people ask them to tackle, or similar, even if those questions are in domains they currently have very little expertise in
(This is of course really a continuum. And there could be other options that aren't highlighted by the continuum, e.g. developing expertise in some broadly applicable skillsets like forecasting or statistics or maybe policy analysis, and then applying those skills wherever seems highest priority on the margin.)
So I'd be interested in your thoughts on that tradeoff as well. Your suggestion that improving on general reasoning often (in some sense) matters more than improving on content knowledge seems to imply that you lean a bit more towards option 2 in many cases?
My answer to this one is going to be a pretty boring "it depends," unfortunately. I was speaking to my own experience in responding to the top-level question, and since I do a pretty "generalist"-y job, improving at general reasoning is likely to be more important for me. At least when restricting to areas that seem highly promising from a long-termist perspective, I think questions of personal fit and comparative advantage will end up determining the degree to which someone should be specialized in a particular topic like machine learning or biology.
I also think that often someone who is a generalist in terms of topic areas still specializes in a certain kind of methodology; e.g., researchers at Open Phil will often do "back of the envelope calculations" (BOTECs) in several different domains, effectively "specializing" in the BOTEC skillset.
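(To illustrate the kind of arithmetic a BOTEC involves, here's a minimal sketch with entirely invented quantities and ranges, not drawn from any actual Open Phil analysis: propagate rough low/high estimates through a simple cost-effectiveness calculation.)

```python
# Hypothetical BOTEC: rough cost-effectiveness of a grant, carrying simple
# low/high ranges through the arithmetic. All numbers are invented.

low_reach, high_reach = 10_000, 100_000    # people affected
low_effect, high_effect = 0.001, 0.01      # benefit per person (arbitrary units)
cost = 50_000                              # grant size in dollars

for label, reach, effect in [("low", low_reach, low_effect),
                             ("high", high_reach, high_effect)]:
    benefit = reach * effect
    print(f"{label}: {benefit:.0f} units of benefit, {cost / benefit:.0f} $/unit")
# low:  10 units of benefit,  5000 $/unit
# high: 1000 units of benefit,  50 $/unit
```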
Interesting answer.