Intuitively, it seems we should respond differently depending on which of these three possibilities is true:
1. They think that what they are doing is negative for the world, but do it anyway, because it is good for themselves personally.
2. They do not think that what they are doing is negative for the world, but they believe this due to motivated cognition.
3. They do not think that what they are doing is negative for the world, and this belief was not formed in a way that seems suspect.
From an act consequentialist perspective, these differences do not matter intrinsically, but they are still instrumentally relevant.[1]
I don’t mean to suggest that any one of these possibilities is particularly likely, or that they are all plausible. I haven’t followed this incident closely. FWIW, my vague sense is that the Mechanize founders had all expressed skepticism about the standard AI safety arguments for a while, in a way that seems hard to reconcile with (1) or (2).