When a discovery will arrive cannot be known ahead of time, or even approximated.
Consider Andrew Wiles’s solution to Fermat’s Last Theorem—he was close to abandoning it, a lifelong obsession, the very morning that he solved it! That morning, Wiles’s priors were the most accurate in the world. More than that: since he was on the cusp of the solution, his priors should have been on the cusp of being correct. And yet...
“Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed… he was having a final look to try and understand the fundamental reasons for why his approach could not be made to work, when he had a sudden insight.”
I disagree with the implications of your example, because Wiles wasn’t incentivized to be accurate, and wasn’t particularly making an effort to give an accurate probability.
He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was an estimate of “likely enough to be worth it.” And yet, at the moment when this likelihood should have been most evident, he was so far off in his estimate that he almost quit.
Imagine if a good EA had stopped him in his moment of despair and encouraged him, with all the tools available, to create the most accurate estimate. I bet he’d still have considered quitting. He might even have become more convinced that it was hopeless.
This seems like it’s pretty weak evidence given that he did in fact continue.