This is a linkpost to a recent blogpost from Michael Nielsen, who has previously written on EA, among many other topics. The post is adapted from a talk Nielsen gave to an audience working on AI before a screening of Oppenheimer. I think the full post is worth a read, but I've pulled out some quotes I find especially interesting (bolding my own).
I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought “humanity had about a 50% chance of extinction” caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, “in the meantime I get to have a nice house and car”.
[...] I often meet people who claim to sincerely believe (or at least seriously worry) that AI may cause significant damage to humanity. And yet they are also working on it, justifying it in ways that sometimes seem sincerely thought out, but which all too often seem self-serving or self-deceiving.
Part of what makes the Manhattan Project interesting is that we can chart the arcs of moral thinking of multiple participants [...] Here are four caricatures:
Klaus Fuchs and Ted Hall were two Manhattan Project physicists who took it upon themselves to commit espionage, communicating the secret of the bomb to the Soviet Union. It’s difficult to know for sure, but both seem to have been deeply morally engaged and trying to do the right thing, willing to risk their lives; they also made, I strongly believe, a terrible error of judgment. I take it as a warning that caring and courage and imagination are not enough; they can, in fact, lead to very bad outcomes.
Robert Wilson, the physicist who recruited Richard Feynman to the project. Wilson had thought deeply about Nazi Germany, and the capabilities of German physics and industry, and made a principled commitment to the project on that basis. He half-heartedly considered leaving when Germany surrendered, but opted to continue until the bombings in Japan. He later regretted that choice; immediately after the Trinity Test he was disconsolate, telling an exuberant Feynman: “It’s a terrible thing that we made”.
Oppenheimer, who I believe was motivated in part by a genuine fear of the Nazis, but also in part by personal ambition and a desire for “success”. It’s interesting to ponder his statements after the War: while he seems to have genuinely felt a strong need to work on the bomb in the face of the Nazi threat, his comments about continuing to work up to the bombings of Hiroshima and Nagasaki contain many strained self-exculpatory statements about how you have to work on it as a scientist, that the technical problem is too sweet. It smells, to me, of someone looking for self-justification.
Joseph Rotblat, the one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb. He was threatened by the head of Los Alamos security, and falsely accused of having met with Soviet agents. In leaving he was turning his back on his most important professional peers at a crucial time in his career. Doing so must have required tremendous courage and moral imagination. Part of what makes the choice intriguing is that he himself didn’t think it would make any difference to the success of the project. I know I personally find it tempting to think about such choices in abstract systems terms: “I, individually, can’t change systems outcomes by refusing to participate [‘it’s inevitable!’], therefore it’s okay to participate”. And yet while that view seems reasonable, Rotblat’s example shows it is incorrect. His private moral thinking, which seemed of small import initially, set a chain of thought in motion that eventually led to Rotblat founding the Pugwash Conferences, a major forum for nuclear arms control, one that both Robert McNamara and Mikhail Gorbachev identified as helping reduce the threat of nuclear weapons. Rotblat ultimately received the Nobel Peace Prize. Moral choices sometimes matter not only for their immediate impact, but because they are seeds for downstream changes in behavior that cannot initially be anticipated.