I wrote up my understanding of Popper’s argument on the impossibility of predicting one’s own knowledge (Chapter 22 of The Open Universe) that came up in one of the comment threads. I am still a bit confused about it and would appreciate people pointing out my misunderstandings.
Consider a predictor:
A1: Given a sufficiently explicit prediction task, the predictor predicts correctly.
A2: Given any such prediction task, the predictor takes time to predict and issue its reply (the task is only completed once the reply is issued).
T1 (from A1, A2): Given a self-prediction task, the predictor can only produce a reply after (or at the same time as) the predicted event.
T2 (from A1, A2): The predictor cannot predict future growth in its own knowledge.
A3: The longer the reply, the longer the predictor takes to produce it.
A4: All replies consist of a description of a physical system and use the same (standard) language.
A1 establishes the predictor's implicit knowledge about the task; A2, A3, and A4 account for the fact that the machine needs to make its prediction explicit.
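To make A2-A4 a bit more concrete, here is a minimal toy encoding in Python (my own illustration, not Popper's formalism; every name and constant in it is made up):

```python
# Toy reading of A2-A4: completing a task means computing *and* physically issuing
# a reply in a fixed standard language, and issuing takes time that grows with the
# reply's length. The constants below are arbitrary and purely illustrative.

def completion_time(compute_time: float, reply: str, seconds_per_symbol: float = 0.1) -> float:
    """Time until the task counts as completed (A2: only once the reply is fully issued)."""
    issue_time = seconds_per_symbol * len(reply)  # A3: a longer reply takes longer to issue
    return compute_time + issue_time

# A4: the reply is a description of a physical system written in the one standard
# language, so reply lengths are comparable across machines (which A6 later relies on).
print(completion_time(2.0, "description of system S at t=1: ..."))
```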
A5: Now consider two identical predictors, Tell and Told. At t=0, give Tell the task of predicting Told's state (including its physically issued reply) at t=1 from Told's state at t=0. Give Told the task of predicting a third predictor's state (this seems to be interpreted later as Tell's state) at t=1 from that predictor's state at t=0 (the tasks are set up such that Tell and Told are in exactly the same state at t=0).
If I understand correctly, this implies that Tell and Told will be in the same state at all times, as future states are just a function of the task and the initial state.
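Here is a toy sketch of that lockstep claim (my own illustration; the transition function below is an arbitrary deterministic stand-in for whatever the predictor actually does):

```python
# Two identical deterministic machines, given the same task from the same initial
# state, occupy identical states at every later time, so neither can get "ahead"
# of the other. The step function is an arbitrary deterministic placeholder.

def step(state: int, task: str) -> int:
    # Any pure function of (state, task) would do for this illustration.
    return hash((state, task)) % 10**9

task = "predict the other predictor's state (incl. its issued reply) at t=1 from its state at t=0"
tell_state = told_state = 0  # A5: Tell and Told are in exactly the same state at t=0

for t in range(10):
    assert tell_state == told_state  # they never diverge ...
    tell_state = step(tell_state, task)
    told_state = step(told_state, task)

# ... so whenever Told has not yet started issuing its reply, neither has Tell,
# which is what T3 below relies on.
```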
T3: If Told has not started issuing its reply at t=1, Tell won’t have completed its task at t=1
Argument: Tell must issue its reply to complete the task, but Tell goes through the same states as Told in the same amounts of time, so if Told has not started issuing its reply by t=1, neither has Tell, and Tell therefore cannot have completed its task.
T4: If Told has completed its task at t=1, Tell will complete its task at t=1.
Argument: Tell and Told are identical machines in identical states working on the same task, so they complete it at the same time.
T5: Tell cannot predict its own future growth in knowledge
Argument: Completing the prediction would take until the knowledge is actually obtained.
A6: The description of the physical state of another description (one that is, for example, written on a punch card) cannot be shorter than that other description.
T6: If Told has completed its task at t=1, Tell must have taken longer to complete its task
Argument: Tell's reply is longer than Told's, since it needs to describe Told's reply (A6), and by A3 a longer reply takes longer to issue.
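A rough sketch of the A6/A3 step behind T6 (again my own toy encoding; the strings are placeholders, not anything from Popper's text):

```python
# Tell's reply must describe Told's full state at t=1, which by the task in A5
# includes Told's physically issued reply. By A6, the description of that written
# reply cannot be shorter than the reply itself; the concatenation below simply
# stands in for "contains a description at least as long as told_reply".

told_reply = "description of the third predictor's state at t=1: ..."  # placeholder
rest_of_told_state = "tape contents, internal registers, ..."          # placeholder

tell_reply = rest_of_told_state + told_reply

assert len(tell_reply) > len(told_reply)
# By A3, the longer reply takes Tell strictly longer to issue, which gives T6.
```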
T6 contradicts T4, so at least one of the assumptions must be wrong.
A1 and A5 seem to be the shakiest assumptions. If A1 fails, we cannot predict the future. If A5 fails, there is a problem with self-referential predictions.
Initial thoughts:
This seems to establish too little, as it is only about deterministic predictions. Also, the argument does not seem to preclude partial predictions about certain aspects of the world's state (for example, predictions that are not concerned with the other predictor's physical output might go through). Less relevantly, the argument relies heavily on (pseudo) self-reference, and Popper distinguishes between explicit and implicit knowledge, with only explicit knowledge seeming to be affected by the argument. It is not clear to me that making an explicit prediction about the future necessarily requires me to make explicit all of the knowledge gains I have until then (if we are talking about deterministic predictions of the whole world's state, I might have to, though, especially if I predict state by state).
Then, if all of my criticism were invalid and the argument were true, I don't see how we could predict anything in the future at all (like the sun's existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions? (I agree that there is a quantitative one, and it seems quite plausible that some longtermists are undervaluing it.)
I am also slightly discounting the proof, as it uses a lot of words that can be interpreted in different ways. It seems easier to overlook problems and implicit assumptions in this kind of proof than in a more formal/symbolic one.
Popper’s ideas seem to have interesting overlap with MIRI’s work.
I don’t see how we could predict anything in the future at all (like the sun’s existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions?
Haha just gonna keep pointing you to places where Popper writes about this stuff b/c it’s far more comprehensive than anything I could write here :)
This question (and the questions re: climate change Max asked in another thread) is the focus of Popper's book The Poverty of Historicism, where "historicism" here means "any philosophy that tries to make long-term predictions about human society" (e.g. Marxism, fascism, Malthusianism, etc.). I've attached a screenshot for proof-of-relevance:
(Ben and I discuss historicism here fwiw.) I have a pdf of this one, dm me if you want a copy :)
Popper’s ideas seem to have interesting overlap with MIRI’s work.
Yeah, I was also vaguely reminded of e.g. logical induction when I read the summary of Popper’s argument in the text Vaden linked elsewhere in this discussion.
Yes! Exactly! Hence why I keep bringing him up :)
Impressive write up! Fun historical note—in a footnote Popper says he got the idea of formulating the proof using prediction machines from personal communication with the “late Dr A. M. Turing”.