If that B-52 had exploded, the death toll would probably have been smaller than that of the Hiroshima bomb. (It landed in a random bit of countryside, not a city.) Unless you think that accident would have triggered all-out nuclear war? Sure, it would have had a large impact on American politics, quite possibly leading to total nuclear disarmament, at least by America. But no anthropic or fine-tuning argument applies.
Suppose we were seeing lots of "near misses": events that, had they happened, would have destroyed a random American town. Clearly that isn't an anthropic effect or anything similar. I would guess it's something about nuclear safety engineers being more or less competent in various ways. Or nuclear disarmament supporters in high places who want lots of near-miss scares. Or the bombs are mostly duds, but the government doesn't want to admit it.
The problem with "anthropomorphic AI" approaches is twofold:

1. The human mind is complicated and poorly understood.
2. Safety degrades fast with respect to errors.
Let's say you are fairly successful. You produce an AI that is really, really close to the human mind in the space of all possible minds. A mind that wouldn't be particularly out of place at a mental institution. It can produce paranoid ravings about the shapeshifting lizard conspiracy millions of times faster than any biological human.
Ok, so you make it a bit smarter. The paranoid conspiracies get more complicated and somewhat more plausible. At some point, it is sane enough to attempt AI research and produce useful results. But its alignment plan is totally insane.
In order to be useful, an anthropomorphic AI approach needs to do more than make an AI that thinks similarly to humans. It needs to be able to target the more rational, smart, and ethical portion of mind space.
Humans can chuck the odd insane person out of the AI labs. Sane people are more common and tend to think faster. A team of humans can stop any one of their number from crowning themselves world king.
In reality, I think your anthropomorphic AI approach gets you an AI that is arguably kind of humanlike in some ways, and that takes over the world. Because it didn't resemble the right parts of the right humans closely enough in the ways that matter.