I mostly share your position, except that I think you would perhaps maximize the probability of solving the Riemann hypothesis by pursuing paths at the frontline of current research rather than starting something new (though I imagine there are many promising paths at the moment, which may be the difference).
This Planners vs. Hayekians genre of dilemmas seems very important to me, and it might be a crux in my career trajectory, or at least affect which projects I take on. I intuitively think that this question can be dissolved fairly easily, making it obvious when each strategy is better, how parts of the EA worldview influence the answer, and perhaps how this affects how we think about academic research. There is also a lot of existing literature on the matter, so there might already be a satisfying argument.
If someone here is up to a (possibly adversarial) collaboration on the topic, let’s do it!
The Planners vs. Hayekians dilemma seems related to some of the discussion in Realism about rationality, and especially this crux for Abram Demski and Rohin Shah.
Broadly, two types of strategies in technical AI alignment work are:

1. Build a solid mathematical foundation on which to build further knowledge, which would eventually serve to reason more clearly about AI alignment.
2. Focus on targeted problems we can see today that are directly related to risks from advanced AI, and do our best to solve them (by heuristics or by tracing them back to related mathematical questions).
Borrowing Vanessa’s analogy of understanding the world as a castle, with each floor built on the one beneath it representing hierarchically built knowledge: when one wants to build a castle out of unknown materials, under an unknown set of construction rules, with a specific tower top in mind, one can either start by laying the groundwork well or start with some ideas of what could sit directly below the tower top.
Planners start from the tower’s top, while Hayekians want to build solid ground and add on as many well-placed floors as they can.