Thanks for the thoughtful comment, Gavin. Note that the inverter problem also exists in the case where you are not quantifying at all, so quantification just brings it to the forefront.
In offline conversation, you mentioned that people are bad at overriding shitty models with their initially superior intuition, which is a problem if quantified models start out shitty. My answer was that, yeah, at this point I would just posit or demand grantmakers who have the skill of combining error-prone models with their own intuitions. Otherwise, the situation would be pretty hopeless.
It’s not that intuition is superior: it is broad, latent, all-things-considered (where all formal models are some-things-considered). The smell test it enables is all we have against model error. (And inverters are just a nasty kind of model error.)
Here’s Yudkowsky, even:

I consider naming particular [AGI timeline median] years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain’s native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them. What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one’s mental health, and I worry that other people seem to have weaker immune systems than even my own. But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.