Thanks for the reply! Could you give examples of:
a) two agendas that seem to be “reflecting” the same underlying problem despite appearing very different superficially?
b) a “deep prior” that you think some agenda is (partially) based on, and how you would go about working out how deep it is?
Sure
a)
For example, CAIS and something like the “classical superintelligence in a box” picture disagree a lot on the surface level. However, if you look deeper, you will find many similar problems. A simple example: the problem of the AI manipulating the operator—which has (in my view) a “hard core” involving both math and philosophy, where you want the AI to communicate with humans in a way that simultaneously a) allows the human to learn from the AI if the AI knows something about the world, b) ensures the operator’s values are not “overwritten” by the AI, and c) does not prohibit moral progress. In CAIS language this is connected to so-called manipulative services.
Or: one of the biggest hits of the past year is the mesa-optimisation paper. However, if you are familiar with prior work, you will notice that many of the proposed solutions for mesa-optimisers are similar or identical to solutions previously proposed for so-called ‘daemons’ or ‘misaligned subagents’. This is because the problems partially overlap (the mesa-optimisation framing is clearer and makes a stronger case for “this is what to expect by default”). Also, while on the surface level there is a lot of disagreement between e.g. MIRI researchers, Paul Christiano and Eric Drexler, you will find a “distillation” proposal targeted at the above-described problem in Eric’s work from 2015, many connected ideas in Paul’s work on distillation, and while I find it harder to understand Eliezer, I think his work also reflects an understanding of the problem.
b)
For example: you can ask whether the space of intelligent systems is fundamentally continuous, or not. (I call this “the continuity assumption”.) This is connected to many agendas—if the space is fundamentally discontinuous, this would cause serious problems for some forms of IDA, debate, interpretability, and more.
(An example of discontinuity would be the existence of problems which are impossible to meaningfully factorize; there are many more ways the space could be discontinuous.)
There are powerful intuitions going both ways on this.