I think I just don’t have sufficiently precise models to know whether it’s more valuable for people to do implementation or strategy work on the current margin.
I think that, compared to a year ago, implementation work has gone up in value because there appears to be an open policy window, so we want to have shovel-ready policies we think are, all things considered, good. I think we’ve also got a bit more strategic clarity than we had a year or so ago thanks to the strategy writing that Holden, Ajeya and Davidson have done.
On the other hand, I think there’s still a lot of strategic ambiguity, and for many of the most important strategy questions there’s something like one report, carrying massive uncertainty, that’s been done. For instance, both bioanchors and Davidson’s takeoff speeds report assume we could get TAI just by scaling up compute, which seems like a pretty big assumption. We have no idea what the scaling laws for robotics are, and there are constant references to race dynamics but, as far as I know, only one non-empirical paper from 2013 that’s modelled them at the firm level (although there’s another coming out). The two recent Thorstad papers seem to me a pretty strong challenge to any longtermism not grounded in digital minds being a big deal.
I think people, especially junior people, should be biased towards work with good feedback loops, but I think this is a different axis from strategy vs implementation. Lots of Epoch’s work is strategy work but also has good feedback loops. The Legal Priorities Project and GPI both do pretty high-level work, but I think both are great because they’re grounded in academic disciplines. Patient philanthropy is probably the best example of really high-level, purely conceptual work that is great.
In AI in particular, some high-level work that I think would be great: a book on what good post-TAI futures look like, forecasting the growth of the Chinese economy under different political setups, scaling laws for robotics, modelling the elasticity of the semiconductor supply chain, proposals for transferring ownership of capital to the population more broadly, and investigating different funding models for AI safety.
Thanks, I thought these were useful comments, particularly about the longer-term influence of big ideas (neoliberalism etc).
I would be interested in reading/skimming the Thorstad papers you refer to; where are they? I found https://onlinelibrary.wiley.com/doi/full/10.1111/papa.12248 which is presumably one of them. Do they have EA Forum versions, and if not, do you know if David is planning to put them up as such? Seems potentially valuable.
I took Nathan to be referring to https://globalprioritiesinstitute.org/david-thorstad-three-mistakes-in-the-moral-mathematics-of-existential-risk/ and https://globalprioritiesinstitute.org/david-thorstad-high-risk-low-reward-a-challenge-to-the-astronomical-value-of-existential-risk-mitigation/ -- Thorstad’s blog (https://ineffectivealtruismblog.com/) has some summaries/further discussion, so far as I remember.