Where in your taxonomy does the design of AI systems go – what high-level architecture to use (non-modular? modular with a perception model, world-model, evaluation model, planning model etc.?), what type of function approximators to use for the modules (ANNs? Bayesian networks? something else?), what decision theory to base it on, what algorithms to use to learn the different models occurring in these modules (RL? something else?), how to curate training data, etc.?
It’s not a separate approach; the non-theory agendas, and even some of the theory agendas, have their own answers to these questions. I can tell you that almost everyone besides CoEms and OAA is targeting NNs, though.
“targeting NNs” sounds like work that takes a certain architecture (NNs) as a given rather than work that aims at actively designing a system.
To be more specific: under the proposed taxonomy, where would a project be sorted that designs agents composed of a Bayesian network as a world model and an aspiration-based probabilistic programming algorithm for planning?
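For concreteness, here is a minimal toy sketch of the kind of agent described, with an illustrative two-outcome world model; all names, structure, and probabilities are hypothetical, not from any actual project:

```python
# Toy world model: conditional distributions P(reward | action),
# standing in for a full Bayesian network. Numbers are made up.
WORLD_MODEL = {
    "act_low":  {0: 0.7, 1: 0.3},
    "act_high": {0: 0.2, 1: 0.8},
}

def expected_reward(action):
    """Expected reward under the world model's conditional distribution."""
    dist = WORLD_MODEL[action]
    return sum(value * prob for value, prob in dist.items())

def aspiration_based_policy(aspiration):
    """Pick the action whose expected reward is CLOSEST to the aspiration
    level, rather than maximal -- the core idea of aspiration-based planning."""
    return min(WORLD_MODEL, key=lambda a: abs(expected_reward(a) - aspiration))

print(aspiration_based_policy(0.5))  # prefers the action with E[reward] nearest 0.5
```

The point of the sketch is just the modular split: a probabilistic world model queried by a separate planning component that satisfices against an aspiration instead of maximizing.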
Well, there are a lot of different ways to design an NN.
That sounds related to OAA (minus the vast verifier they also want to build), so depending on the ambition it could fall under “end-to-end solution”, “getting it to learn what we want”, or “task decomp”. See also this cool paper from authors including Stuart Russell.
What is OAA? And, more importantly: where now would you put it in your taxonomy?
https://www.lesswrong.com/posts/pHJtLHcWvfGbsW7LR/roadmap-for-a-collaborative-prototype-of-an-open-agency
I put it in “galaxy-brained end-to-end solutions” for its ambition, but there are various places it could go.