I think I disagree with your claim that I’m implicitly assuming independence of the ball colourings.
I start by looking for the maximum entropy distribution within all possible probability distributions over the 2^100 possible colourings. Most of these probability distributions do not have the property that balls are coloured independently. For example, if the distribution was a 50% probability of all balls being red, and 50% probability of all balls being blue, then learning the colour of a single ball would immediately tell you the colour of all of the others.
But it just so happens that for the probability distribution which maximises the entropy, the ball colourings do turn out to be independent. If you adopt the maximum entropy distribution as your prior, then learning the colour of one ball tells you nothing about the others. This is an output of the calculation, rather than an assumption.
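To spell this out with a toy calculation, here's a sketch for a smaller number of balls, so that all 2^n colourings can be enumerated (the variable and function names are my own illustrative choices). It shows that the uniform distribution achieves the maximum entropy, and that under it the colour of one ball carries no information about another, whereas under the all-red/all-blue distribution one ball determines everything:

```python
import itertools
import math

n = 10  # a smaller number of balls, so all 2**n colourings can be enumerated

# Every possible colouring of n balls, encoded as tuples (1 = red, 0 = blue).
colourings = list(itertools.product([0, 1], repeat=n))

def entropy_bits(dist):
    """Shannon entropy, in bits, of a dict mapping colouring -> probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# The maximum entropy distribution over colourings is the uniform one.
uniform = {c: 1 / 2**n for c in colourings}
print(entropy_bits(uniform))  # 10.0 bits -- the maximum possible

# A very non-independent distribution: 50% all-red, 50% all-blue.
correlated = {(1,) * n: 0.5, (0,) * n: 0.5}
print(entropy_bits(correlated))  # 1.0 bit -- far below the maximum

def p_second_red_given_first_red(dist):
    """P(ball 2 is red | ball 1 is red) under the given distribution."""
    p_first = sum(p for c, p in dist.items() if c[0] == 1)
    p_both = sum(p for c, p in dist.items() if c[0] == 1 and c[1] == 1)
    return p_both / p_first

print(p_second_red_given_first_red(uniform))     # 0.5 -- no information gained
print(p_second_red_given_first_red(correlated))  # 1.0 -- one ball reveals all
```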
I think I agree with your last paragraph, although there are some real problems here that I don’t know how to solve. Why should we expect any of our existing knowledge to be a good guide to what we will observe in future? It has been a good guide in the past, but so what? Observing 99 red balls apparently doesn’t tell us that the 100th will likely be red, for certain seemingly reasonable choices of prior.
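For concreteness, here's that calculation under the maximum entropy prior (a sketch using exact fractions):

```python
from fractions import Fraction

# Under the uniform prior over the 2**100 colourings, exactly two colourings
# have the first 99 balls red: one where ball 100 is red, one where it is blue.
# Each has prior probability 2**-100, so conditioning on "99 red so far":
p_red  = Fraction(1, 2**100)
p_blue = Fraction(1, 2**100)
print(p_red / (p_red + p_blue))  # 1/2 -- the 99 observations moved us nowhere
```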
I guess what I was trying to say in my first comment is that the maximum entropy principle is not a solution to the problem of induction, or even an approximate solution. Ultimately, I don’t think anyone knows how to choose priors in a properly principled way. But I’d very much like to be corrected on this.
As a side-note, the maximum entropy principle would tell you to choose the maximum entropy prior given the information you have, and so if you intuit that the balls are likely to be produced by the same process, you’ll get a different prior than if you don’t have that information.
I.e., your disagreement might stem from the fact that the maximum entropy principle gives different answers conditional on different information: you actually have information that differentiates drawing n balls from flipping a fair coin n times.
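To illustrate: if you encode “the balls come from a common process” as each ball being red with the same unknown probability theta, and put a uniform prior on theta (my illustrative choice of prior over theta, not something the maximum entropy principle itself dictates), you recover Laplace’s rule of succession, and 99 red balls now make the 100th very likely red:

```python
from fractions import Fraction

# One way to encode "the balls come from a common process": each ball is red
# with the same unknown probability theta, and theta itself gets a uniform
# prior on [0, 1]. After seeing k red balls out of k draws,
#   P(next red | k red) = (integral of theta**(k+1)) / (integral of theta**k)
#                       = (k + 1) / (k + 2),
# which is Laplace's rule of succession.
k = 99
print(Fraction(k + 1, k + 2))  # 100/101 -- now 99 red balls are strong evidence
```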