Looking at the mistakes you’ve made in the past, what fraction of your (importance-weighted) mistakes would you attribute to:
Not being aware of the relevant empirical details/facts (that are, both in principle and in practice, within your ability to find), versus
Being wrong about stuff due to reasoning errors (that are, both in principle and in practice, within your ability to correct for)?
And what ratios would you assign to this for EAs/career EAs in general?
For context, a coworker and I recently had a discussion about, loosely speaking, whether it was more important for junior researchers within EA to build domain knowledge or general skills. Very, very roughly, my coworker leaned more towards the former because he thought that EAs had an undersupply of domain knowledge relative to so-called “generalist skills.” However, I leaned more towards the latter side of this debate because I weakly believe that more of my mistakes (and more of my most critical mistakes) were due to errors of cognition rather than insufficient knowledge of facts. (Obviously credit assignment is hard in both cases.)
I think the inclusion of “in principle” makes the answer kind of boring—when we’re not thinking about practicality at all, I think I’d definitely prefer to know more facts (about e.g. the future of AI or what would happen in the world if we pursued strategy A vs strategy B) than to have better reasoning skills, but that’s not a very interesting answer.
In practice, I’m usually investing a lot more in general reasoning, because I’m operating in a domain (AI forecasting and futurism more generally) where it’s pretty expensive to collect new knowledge/facts, it’s pretty difficult to figure out how to connect facts about the present to beliefs about the distant future, and facts you could gather in 2021 are fairly likely to be obsoleted by new developments in 2022. So I would say most of my importance-weighted errors are going to be in the general reasoning domain. I think it’s fairly similar for most people at Open Phil, and most EAs trying to do global priorities research or cause prioritization, especially within long-termism. I think the more object-level your work is, the more likely it is that your big mistakes will involve being unaware of empirical details.
However, investing in general reasoning doesn’t often look like “explicitly practicing general reasoning” (e.g. doing calibration training, studying probability theory or analytic philosophy, etc). It’s usually incidental improvement that’s happening over the course of a particular project (which will often involve developing plenty of content knowledge too).
Interesting answer.
Given that, could you say a bit more about how “investing in general reasoning” differs from “just working on projects based on what I expect to be directly impactful / what my employer said I should do”, and from “trying to learn content knowledge about some domain(s) while forming intuitions, theories, and predictions about those domain(s)”?
I.e., concretely, what does your belief that “investing in general reasoning” is particularly valuable lead you to spend more or less time doing (compared to if you believed content knowledge was particularly valuable)?
Your other reply in this thread makes me think that maybe you actually think people should basically just spend almost all of their time directly working on projects they expect to be directly impactful, and trust that they’ll pick up both improvements in their general reasoning skills and content knowledge along the way?
For a concrete example: About a month ago, I started making something like 3-15 Anki cards a day as I do my research (as well as learning random things on the side, e.g. from podcasts), and I’m spending something like 10-30 minutes a day reviewing them. This will help with the specific, directly impactful things I’m working on, but it’s not me directly working on those projects—it’s an activity that’s more directly focused on building content knowledge. What would be your views on the value of that sort of thing?
(Maybe the general reasoning equivalent would be spending 10-30 minutes a day making forecasts relevant to the domains one is also concurrently doing research projects on.)
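(Purely to make that concrete, here’s a minimal sketch of what logging such forecasts and checking calibration could look like. The questions, probabilities, and Brier-score helper below are hypothetical illustrations I made up for this comment, not anything anyone in this thread actually uses.)

```python
# Hypothetical sketch of a daily forecasting log plus a simple calibration check.
# Each entry: (question, probability assigned to "yes", actual outcome once resolved).
# All entries are invented examples.
forecasts = [
    ("Will paper X replicate?",             0.7, True),
    ("Will policy Y pass this year?",       0.2, False),
    ("Will benchmark Z be beaten by June?", 0.6, True),
]

def brier_score(prob: float, outcome: bool) -> float:
    """Squared error between the stated probability and what happened (lower is better)."""
    return (prob - (1.0 if outcome else 0.0)) ** 2

scores = [brier_score(p, o) for _, p, o in forecasts]
print(f"Mean Brier score over {len(forecasts)} resolved forecasts: {sum(scores) / len(scores):.3f}")
```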
Personally, I don’t do much explicit, dedicated practice or learning of either general reasoning skills (like forecasts) or content knowledge (like Anki decks); virtually all of my development on these axes comes from “just doing my job.” However, I don’t feel strongly that this is how everyone should be—I’ve just found that this sort of explicit practice holds my attention less and subjectively feels like a less rewarding and efficient way to learn, so I don’t invest in it much. I know lots of folks who feel differently, and do things like Anki decks, forecasting practice, or both.
Oh, actually, all of that mainly relates to just one underlying reason why the sort of question Linch and I have in mind matters: it could inform how much time EA researchers spend on various specific tasks in their day-to-day work, and what goals they set for themselves on the scale of weeks/months.
Another reason this sort of question matters is that it could inform whether researchers/orgs:
1. Invest time in developing areas of expertise based essentially around certain domains of knowledge (e.g., nuclear war, AI risk, politics & policy, consciousness), and try to mostly work within those domains (even when they notice a specific high-priority question outside of that domain which no one else is tackling, or when someone asks them to tackle a question outside of that domain, or similar)
2. Try to become skilled generalists, tackling whatever questions seem highest priority on the margin in a general sense (without paying too much attention to personal fit), or whatever questions people ask them to tackle, or similar, even if those questions are in domains they currently have very little expertise in
(This is of course really a continuum. And there could be other options that aren’t highlighted by the continuum—e.g. developing expertise in some broadly applicable skillsets like forecasting or statistics or maybe policy analysis, and then applying those skills wherever seems highest priority on the margin.)
So I’d be interested in your thoughts on that tradeoff as well. Your suggestion that improving general reasoning often (in some sense) matters more than improving content knowledge would seem to imply that you lean a bit more towards option 2 in many cases?
My answer to this one is going to be a pretty boring “it depends” unfortunately. I was speaking to my own experience in responding to the top level question, and since I do a pretty “generalist”-y job, improving at general reasoning is likely to be more important for me. At least when restricting to areas that seem highly promising from a long-termist perspective, I think questions of personal fit and comparative advantage will end up determining the degree to which someone should be specialized in a particular topic like machine learning or biology.
I also think that often someone who is a generalist in terms of topic areas still specializes in a certain kind of methodology. For example, researchers at Open Phil will often do “back of the envelope calculations” (BOTECs) in several different domains, effectively “specializing” in the BOTEC skillset.
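(For readers unfamiliar with the term, here’s a minimal sketch of what a BOTEC might look like if written out in code. All of the numbers and the scenario are hypothetical placeholders chosen for illustration, not an actual Open Phil calculation.)

```python
# Hypothetical back-of-the-envelope calculation (BOTEC): rough expected
# cost-effectiveness of a made-up grant. Every number is a placeholder.

grant_cost_usd = 1_000_000        # hypothetical grant size
people_reached = 50_000           # hypothetical number of people affected
benefit_per_person = 0.02         # hypothetical benefit per person, in arbitrary "impact units"
probability_of_success = 0.3      # hypothetical chance the intervention works at all

expected_impact = people_reached * benefit_per_person * probability_of_success
cost_per_impact_unit = grant_cost_usd / expected_impact

print(f"Expected impact: {expected_impact:.0f} units")
print(f"Cost per unit of impact: ${cost_per_impact_unit:,.0f}")
```

The value of the exercise is less the arithmetic itself than the habit of making each assumption explicit so it can be questioned and varied.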
I’m the coworker in question, and to clarify a little, my position was more like “It’s probably quite useful to build expertise in some area or cluster of areas by building lots of content knowledge in that area/those areas. And this seems worth doing for a typical full-time EA researcher even at the cost of having less time available to work on building general reasoning skills.” And that in turn is partly because I’d guess that a typical full-time EA researcher would find it much harder to make substantial further progress on their general reasoning skills than on their content knowledge.
I’d agree there’s a major “undersupply” of general reasoning skills in the sense that all humans are way worse at general reasoning than would be ideal and than seems theoretically possible (if we stripped away all biases, added loads of processing power, etc.). I think Linch and I disagree more on how easy it is to make progress towards that ideal (for a typical full-time EA researcher), rather than on how valuable such progress would be.
(I think we also disagree on how important more content knowledge tends to be.)
And I don’t think I’d say this for most non-EAs. E.g., I’d actually guess that most non-EAs would benefit more from either reading Rationality: From AI to Zombies or absorbing its ideas in some other way more fitting for the person (e.g., workshops, podcasts, discussions), than from spending the same amount of time learning facts and concepts from important domains. (Though I guess I feel unsure precisely what I’m saying or what it means. E.g., I’d feel tempted to put “learning some core concepts from economics and some examples of how they’re applied” in the “improving general reasoning” bucket in addition to the “improving content knowledge” bucket.)
In any case, all of my views here are vaguely stated and weakly held, and I’d be very interested to hear Ajeya’s thoughts on this!
In my reply to Linch, I said that most of my errors were probably in some sense “general reasoning” errors, and a lot of what I’m improving over the course of doing my job is general reasoning. But at the same time, I don’t think that most EAs should spend a large fraction of their time doing things that look like explicitly practicing general reasoning in an isolated or artificial way (for example, re-reading the Sequences, studying probability theory, doing calibration training, etc). I think it’s good to be spending most of your time trying to accomplish something straightforwardly valuable, which will often incidentally require building up some content expertise. It’s just that a lot of the benefit of those things will probably come through improving your general skills.
Apologies if I misrepresented your stance! Was just trying to give my own very rough overview of what you said. :)
Yeah, that makes sense, and no need to apologise. I think your question was already useful without me adding a clarification of what my stance happens to be. I just figured I may as well add that clarification.