It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn’t prove falsehood.
True, but you only have a finite amount of time to spend investigating claims of apocalypses. If you do a deep dive into the arguments of one of the main proponents of a theory, and find that it relies on dubious reasoning and poor science (like the “mix proteins to make diamondoid bacteria” scenario), then dismissal is a fairly understandable response.
If AI safety advocates want to prevent this sort of thing from happening, they should pick better arguments and better spokespeople, and be more willing to call out bad reasoning when it happens.