Nice! I realised that I can't think of the last time I received low-quality criticism (but can think of a moderate amount of fairly high-quality criticism), so I am probably quite lucky in that regard, as my work/writing thus far has either been privately shared, or public but not very provocative. (Of course, the flipside is that having more people engage with one's writing is one way to increase impact.)
I hadn't heard the "idea inoculation" term before; that does seem like a useful framing. I wonder if that is part of the explanation for some of the AI safety/x-risk backlash: someone hears a third-hand snippet of an argument for why AGI/TAI might be dangerous, or consumes some not-very-realistic fiction about this, and later is pretty reluctant to engage with more careful work on the subject.