“To give examples of our target audience:
[...]
3. Aspiring generalist researchers at any stage in their career.”
I agree that writing up forecasting reasoning is one way for aspiring generalist researchers to build generalist-type research skill, but also want to highlight some other options:
Summarize/collect previous posts/articles/papers (I think this is probably the best skill-building activity for an aspiring generalist researcher)
Read, then write book reviews (see posts tagged under ‘books,’ and also suggestions from Michael Aird and from Buck Shlegeris; also related is Holden Karnofsky’s ‘Reading books vs. engaging with them’)
Build inside views (see Holden Karnofsky’s ‘Learning by writing’ and Neel Nanda’s ‘How I formed my own views about AI safety’)
From Linch Zhang’s shortform: “Deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important.”
Apply for jobs/internships/research training programs (and view the process of writing written responses in your applications as skill-building)
Possibly other things suggested in Aird’s ‘Notes on EA-related research, writing, testing fit, learning, and the Forum’
Hey, thanks for sharing these other options. I agree that one of these choices makes more sense than forecasting in many cases, and likely (90%) the majority. But I still think forecasting is a solid contender and plausibly (25%) the best in the plurality of cases. Some reasons:
Which activity is best likely depends a lot on which is easiest to actually start doing, because I think the primary barrier to doing most of these usefully is “just” actually getting started and completing something. Forecasting may (40%)[1] be the most fun and least intimidating of these for many (33%+) prospective researchers because of the framing of competing on a leaderboard and the intrigue of trying to predict the future.
I think the EA community has relatively good epistemics, but there is still room for improvement, and more researchers getting a forecasting background is one way to help with this (due to both epistemic training and identifying prospective researchers with good epistemics).
Depending on the question, forecasting can look a lot like a bite-sized chunk of research, so I don’t think it’s mutually exclusive with some of the activities you listed; it’s especially similar to summarizing/collecting. For example, Ryan summarized relevant parts of papers and then formed some semblance of an inside view in his winning entry.
Also, I was speaking from personal experience here: e.g. Misha and I have both forecasted for a few years and enjoyed it while building skills and a track record, and are now doing ~generalist research or had the opportunity to do so and seriously considered it, respectively.
I think this will become especially true as the UX of forecasting platforms improves; let’s say 55% that this is true 3 years from now, as I expect the UX here to improve more than the “UX” of other options like summarizing papers.