I just looked up the proof of Fermat’s Last Theorem, and it came about from Andrew Wiles noticing that someone else (Ken Ribet) had recently proven a result, namely that the Taniyama–Shimura conjecture implies FLT, which could plausibly be turned into a full proof, and then working on it for seven years. This seems like a data point in favor of the end-to-end models approach.
Yes, I agree. Though anecdotally my impression is that Wiles is an exception, and that his strategy was seen as quite unusual by his peers.
I think I agree that in general there will almost always be a point at which it’s optimal to switch to a more end-to-end strategy. In Wiles’s case, I don’t think his strategy would have worked if he had switched as an undergraduate, and I don’t think it would have worked if he had lived 50 years earlier (because the conceptual foundations used in the proof had not been developed yet).
This can also be a back and forth. E.g. for Fermat’s Last Theorem, perhaps number theorists were justified in taking a more end-to-end approach in the 19th century because there had been little effort using then-modern tools; and indeed, I think partly stimulated by attempts to prove FLT (which did succeed in some special cases), they developed some of the foundations of classical algebraic number theory. Perhaps people then came to understand that the conjecture resisted direct attack with the conceptual tools available at the time, at which point it became more fruitful to spend more time on less direct approaches, though these could still be guided by heuristics like “it’s useful to further develop the foundations of this area of maths / our understanding of this kind of mathematical object because we know of a certain connection to FLT, even though we don’t know exactly how this could help in a proof of FLT”. Then, perhaps by Wiles’s time, it was time again for more end-to-end attempts, and so on.
I’m not confident that this is a very accurate history of FLT, but reasonably confident that the rough pattern applies to a lot of maths.
The paper Architecting Discovery by Ed Boyden and Adam Marblestone also discusses how one can methodically go about producing better scientific tools (an approach they used in developing expansion microscopy and optogenetics).