To be clear, mostly I’m not asking for “more work”; I’m asking people to use much better epistemic hygiene. I did use the phrase “work much harder on its epistemic standards”, but by this I mean: please don’t make sweeping, confident claims as if they are settled fact when there’s informed disagreement on those subjects.
Nevertheless, here are some examples of the sort of informed disagreement I’m referring to:
The mere existence of many serious alignment researchers who are optimistic about scalable oversight methods such as debate.
This post by Matthew Barnett arguing we’ve been able to specify values much more successfully than MIRI anticipated.
Shard theory, developed mostly by Alex Turner and Quintin Pope, which calls into question the utility-argmaxer framework that has been used to justify many historical concerns about instrumental convergence leading to AI takeover.
This comment by me arguing that ChatGPT is pretty aligned compared to MIRI’s historical predictions, because it does what we mean rather than what we say.
A detailed set of objections from Quintin Pope to Eliezer’s views, which Eliezer responded to by saying it’s “kinda long”, and engaged with extremely superficially before writing it off.
This post by Stuhlmüller and Byun, as well as many other articles by others, arguing that process oversight is a viable alignment strategy, one that converges with rather than opposes capabilities.
Notably, the extreme doomer contingent has largely failed even to understand, never mind engage with, some of these arguments, frequently pattern-matching them lazily onto more basic misconceptions and misrepresenting them accordingly. A typical example is the assumption that Matthew Barnett and I are saying that GPT merely *understanding* human values is evidence against the MIRI/doomer worldview (after all, “the AI knows what you want but does not care, as we’ve said all along”), when in fact we’re saying there’s evidence we have successfully *pointed* GPT at those values.
It’s fine if you have a different viewpoint. Just don’t express that viewpoint as if it’s self-evidently right when there’s serious disagreement on the matter among informed, thoughtful people. An article like the OP, which claims that labs should shut down, should at least try to engage with the views of someone who thinks the labs should not shut down, rather than pretending such people are fools unworthy of mention.
I’m calling for a six month pause on new font faces more powerful than Comic Sans.