Indeed, in some sense, Solomonoff Inductors are in a boat similar to the one that less computer-science-y Bayesians were in all along: you’ll plausibly converge on the truth, and resolve disagreements, eventually; but a priori, for arbitrary agents in arbitrary situations, it’s hard to say when. My main point here is that the Solomonoff Induction boat doesn’t seem obviously better.
Not necessarily true! See Scott Aaronson on this (though iirc, he makes some assumptions I disagree with).