I think the picture is somewhat correct, and, perhaps surprisingly, we should not be too concerned about the dynamic.
My model for this is:
1) there are some hard and somewhat nebulous problems “in the world”
2) people try to formalize them using various intuitions/framings/kinds of math; also using some “very deep priors”
3) the resulting agendas look extremely different on the surface level, and create the impression you describe
but actually
4) if you understand multiple agendas deeply enough, you get a sense of:
- how they are sometimes “reflecting” the same underlying problem
- if they are based on some “deep prior”, how deep it is and how hard it can be to argue about
- how much they are based on “tastes” and “intuitions”; one way to think about this is that people have something comparable to the policy network in AlphaZero: a mental black box which spits out useful predictions but is not interpretable in language
Overall, given our current state of knowledge, I think running these multiple efforts in parallel is a better approach, with a higher chance of success, than the idea that we should invest a lot in resolving disagreements and prioritizing, and that everyone should work on the “best agenda”.
This seems to go against some core EA heuristic (“compare the options, take the best”), but it is actually more in line with rational allocation of resources in the face of uncertainty.
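As a toy illustration of that last point (my own made-up numbers, not part of the original argument): if we are genuinely unsure which framing addresses the real problem, a portfolio only needs one bet to pan out, which can beat backing the single most promising agenda.

```python
# Toy portfolio calculation. The per-agenda success probabilities below are
# invented for illustration and assumed independent; in reality effort cannot
# be split for free and agendas are correlated, so treat this only as a sketch
# of why "pick the single best option" is not automatically optimal under uncertainty.

def p_at_least_one(probs):
    """Probability that at least one independent bet succeeds."""
    p_all_fail = 1.0
    for p in probs:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

# Hypothetical research agendas with uncertain chances of being "the right framing".
agendas = {"A": 0.30, "B": 0.25, "C": 0.20, "D": 0.15}

best_single = max(agendas.values())
portfolio = p_at_least_one(agendas.values())

print(f"Back only the single best agenda: P(success) = {best_single:.2f}")  # 0.30
print(f"Run all four in parallel:         P(success) = {portfolio:.2f}")    # ~0.64
```

The real allocation question is of course more subtle (effort concentrated on one agenda presumably raises its chances), but the sketch shows why the “compare the options, take the best” heuristic can misfire when we do not know which option is best.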
Thanks for the reply! Could you give examples of:
a) two agendas that seem to be “reflecting” the same underlying problem despite appearing very different superficially?
b) a “deep prior” that you think some agenda is (partially) based on, and how you would go about working out how deep it is?
Sure
a)
For example, CAIS and something like the classical “superintelligence in a box” picture disagree a lot on the surface level. However, if you look deeper, you will find many similar problems. A simple-to-explain example: the problem of manipulating the operator, which has (in my view) a “hard core” involving both math and philosophy, where you want the AI to communicate with humans in a way which at the same time a) allows the human to learn from the AI if the AI knows something about the world, b) ensures the operator’s values are not “overwritten” by the AI, and c) does not prohibit moral progress. In CAIS language this is connected to so-called manipulative services.
Or: one of the biggest hits of the past year is the mesa-optimisation paper. However, if you are familiar with prior work, you will notice that many of the solutions proposed for mesa-optimisers are similar or identical to solutions previously proposed for so-called ‘daemons’ or ‘misaligned subagents’. This is because the problems partially overlap (the mesa-optimisation framing is clearer and makes a stronger case for “this is what to expect by default”). Also, while on the surface level there is a lot of disagreement between e.g. MIRI researchers, Paul Christiano and Eric Drexler, you will find a “distillation” proposal targeted at the above-described problem in Eric’s work from 2015, many connected ideas in Paul’s work on distillation, and, while I find it harder to understand Eliezer, I think his work also reflects an understanding of the problem.
b)
For example: you can ask whether the space of intelligent systems is fundamentally continuous or not (I call this “the continuity assumption”). This is connected to many agendas: if the space is fundamentally discontinuous, it would cause serious problems for some forms of IDA, debate, interpretability, and more.
(An example of discontinuity would be the existence of problems which are impossible to meaningfully factorize; there are many more ways the space could be discontinuous.)
There are powerful intuitions going both ways on this.
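To make “factorize” a bit more concrete (this is my own toy illustration, in the spirit of IDA-style task decomposition; the example task is deliberately trivial): a factorizable problem is one you can split into subproblems that are individually easier and cheap to recombine.

```python
# Toy example of a cleanly factorizable task: summing a long list by recursive
# splitting. Each subproblem is strictly easier than the original, and the
# recombination step (adding two numbers) is trivial to verify.

def solve_by_factorization(numbers):
    """Sum a list by recursively splitting it into halves."""
    if len(numbers) <= 2:          # base case: small enough for a "weak" solver
        return sum(numbers)
    mid = len(numbers) // 2
    left = solve_by_factorization(numbers[:mid])
    right = solve_by_factorization(numbers[mid:])
    return left + right            # recombining sub-answers is easy to check

print(solve_by_factorization(list(range(100))))  # 4950
```

The continuity worry is that some problems may admit no split of this kind: every decomposition either loses what made the problem hard or makes the recombination step as hard as the original problem. If such problems exist and matter, schemes that lean on factorization (some forms of IDA, debate) inherit that gap.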