Would you like to state any crisp predictions for how your Logical Uncertainty paper will be received, and/or the impact it will have?
I’ll start by stating that, while I have some intuitions about how the paper will be received, I don’t have much experience making crisp forecasts, and so I might be miscalibrated. That said:
In my experience it’s pretty common for ML researchers who are more interested in theory and general intelligence to find Solomonoff induction and AIXI to be useful theories, and I think “Logical Induction” will be generally well-received among such people. Let’s say 70% chance that at least 40% of ML researchers who think AIXI is a useful theory, and who spend a couple of hours thinking about “Logical Induction” (reading the paper / talking to people about it), will think that “Logical Induction” is at least 1⁄3 as interesting/useful as AIXI. ML researchers who don’t find Solomonoff induction relevant to their interests probably won’t find “Logical Induction” compelling either. This forecast is based on my personal experience of really liking Solomonoff induction and AIXI (long before knowing about MIRI) but finding theoretical gaps in them, many of which “Logical Induction” resolves nicely, and on various conversations with ML researchers who like Solomonoff induction and AIXI.
I have weaker intuitions about mathematicians, but more empirical data. “Logical Induction” has been quite well-received by Scott Aaronson, and I think the discussion at the n-Category Café indicates that mathematicians find the paper and the overall topic interesting. I am quite uncertain about the numbers, but I expect something like 50% of mathematicians who are interested in Bayesianism and Gödel’s incompleteness theorems to find it quite an interesting result after thinking about it for a couple of hours.
(These predictions might seem timid; I am adjusting for the low base rates of people finding things really interesting.)