Commenting to try to figure out where you disagree with the original poster, and what your cruxes are.
It sounds like you’re saying:
1) Conditional on being able to do legibly impressive and good strategy research, it’s more valuable for junior people to do strategy research than to become an expert in a specific boring topic
2) Nonetheless, many people should become experts in specific boring topics
3) Over the long run, ideas influence the government more than the OP suggests they do (and probably more than the proposal/implementation of specific policies do)
Does that sound right to you?
I think there are (at least) two ways to read the original post: either as a claim about the total comparative utility of boring/specific expertise vs. strategy work, or as a claim about the marginal utility of boring/specific expertise vs. strategy work.
For example:
A) As a general rule, if you’re a junior person who wants to get into policy, becoming an expert on a specific boring topic is more useful than attempting strategy work (“Instead, people that want to make the world safer with policy/governance should become experts on very specific and boring topics”)
B) On the margin, a junior person having expertise on a specific boring topic is more useful than a junior person doing strategic work (“I’d be more excited about somebody pursuing one of these concrete, specific dead ends and getting real feedback from the world (and then pivoting[2]), rather than trying to do broad strategy work and risk ending up in a never-ending strategy spiral”)
It wasn’t clear to me whether you agree with A, B, both, or neither. Agreeing with A is compatible with accepting 1-3 (e.g. maybe most junior people can’t do legibly impressive and good strategy work), as is disagreeing with A and agreeing with B (e.g. maybe junior people can do legibly impressive and good strategy work, but the neglectedness of boring/specific expertise means that its marginal utility is higher than that of strategy work). Where do you stand on A and B, and why? And do you think it’s the case that many junior people could do legibly impressive and good strategy research?
(I’m sorry if this comment sounds spiky—it wasn’t meant to be! I’m interested in the topic and trying to get better models of where people disagree and why :) )
I think I just don’t have sufficiently precise models to know whether it’s more valuable for people to do implementation or strategy work on the current margin.
I think that, compared to a year ago, implementation work has gone up in value, because there appears to be an open policy window, and so we want to have shovel-ready policies that we think are, all things considered, good. I think we’ve also got a bit more strategic clarity than we had a year or so ago, thanks to the strategy writing that Holden, Ajeya, and Davidson have done.
On the other hand, I think there’s still a lot of strategic ambiguity, and for many of the most important strategy questions there’s only one report, with massive uncertainty, that’s been done. For instance, both bioanchors and Davidson’s takeoff speeds report assume we could get TAI just by scaling up compute. This seems like a pretty big assumption. We have no idea what the scaling laws for robotics are, and while there are constant references to race dynamics, there’s only one non-empirical paper from 2013 that’s modelled them, at the firm level (although another is coming out). I think the two recent Thorstad papers are a pretty strong challenge to any longtermism not grounded in digital minds being a big deal.
I think people, especially junior people, should be biased towards work with good feedback loops, but I think this is a different axis from strategy vs. implementation. Lots of Epoch’s work is strategy work but also has good feedback loops. The Legal Priorities Project and GPI both do pretty high-level work, but I think both are great because they’re grounded in academic disciplines. Patient philanthropy is probably the best example of really high-level, purely conceptual work that is great.
In AI in particular, some high-level work that I think would be great: a book on what good post-TAI futures look like, forecasting the growth of the Chinese economy under different political setups, scaling laws for robotics, modelling the elasticity of the semiconductor supply chain, proposals for transferring ownership of capital to the population more broadly, and investigating different funding models for AI safety.
Thanks, I thought these were useful comments, particularly about the longer-term influence of big ideas (neoliberalism etc.). I would be interested in reading/skimming the Thorstad papers you refer to; where are they? I found https://onlinelibrary.wiley.com/doi/full/10.1111/papa.12248, which is presumably one of them. Do they have EA Forum versions, and if not, do you know if David is planning to put them up as such? Seems potentially valuable.
I took Nathan to be referring to https://globalprioritiesinstitute.org/david-thorstad-three-mistakes-in-the-moral-mathematics-of-existential-risk/ and https://globalprioritiesinstitute.org/david-thorstad-high-risk-low-reward-a-challenge-to-the-astronomical-value-of-existential-risk-mitigation/. Thorstad’s blog (https://ineffectivealtruismblog.com/) has some summaries/further discussion, so far as I remember.