Moreover, even if a critic has a sufficiently high level of motivation in the abstract, it doesn’t follow that they will be incentivized to produce much (if any) “polite, charitable, good-faith, evidentiarily rigorous” work. (Many) critics want to be effective too—and they may reasonably (maybe even correctly!) think that effort devoted to producing castle memes produces a higher ROI than polishing, simplifying, promoting, and defending their more rigorous critiques.
For example, a committed e/acc’s top priority is arguably the avoidance of government regulation that seriously slows down AI development. Memes carry more persuasive weight than rigorous arguments for 90%, perhaps 99%, of the electorate—so “make EA / AI safety a topic of public scorn and ridicule” seems like a reasonable theory of change for the e/acc folks. When you’re mainly trying to tear someone else’s work down, you may plausibly see maintaining epistemic rigor in your own camp as relatively less important than if you were actually trying to build something.