In the “voice of God” example, we’re guaranteed to minimize error by applying this reasoning: if God asks this question of every human who is ever created, and they all answer this way, most of them will be right.
Now, I’m really unsure about the following, but imagine each new human predicts Doomsday through DA reasoning; in that case, I’m not sure that reasoning minimizes error in the same way. We often assume the human population will increase exponentially and then suddenly go extinct; but in that scenario it seems like most people will end up mistaken in their predictions. Maybe we’re using the wrong priors?
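To make my confusion a bit more concrete, here’s a rough simulation sketch in Python. Everything in it is an assumption I’m adding for illustration: the population doubles each generation, extinction happens after a randomly chosen number of generations, and every person applies the usual DA rule “with 95% confidence, at most 20× my birth rank humans will ever exist.” It just counts what fraction of people in each simulated history turn out to be right:

```python
import random

def run_world(rng, max_generations=15):
    """One hypothetical history: the population doubles every generation,
    then suddenly goes extinct after a randomly chosen final generation."""
    final_generation = rng.randint(1, max_generations)
    generation_sizes = [2 ** g for g in range(final_generation)]
    total_ever_born = sum(generation_sizes)

    # Each person (identified by birth rank) applies the DA-style rule:
    # "with 95% confidence, at most 20x my rank humans will ever exist."
    # Their prediction holds iff total_ever_born <= 20 * rank.
    correct = sum(
        1
        for rank in range(1, total_ever_born + 1)
        if total_ever_born <= 20 * rank
    )
    return correct / total_ever_born

if __name__ == "__main__":
    rng = random.Random(0)
    fractions = [run_world(rng) for _ in range(500)]
    print("average fraction of people whose DA bound held:",
          sum(fractions) / len(fractions))
```

I’m not claiming these modeling choices are the right ones; they’re just the simplest setup I could think of for poking at whether the calibration survives exponential growth followed by sudden extinction.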