I think Matthew and Tamay think this is positive, since they think AI is positive.
I don’t see how this alleviates concern. Sure, they’re acting consistently with their beliefs*, but that doesn’t change the fact that what they’re doing is bad.
*I assume; I don’t really know.
Intuitively, it seems we should respond differently depending on which of these three possibilities is true:
1. They think that what they are doing is negative for the world, but do it anyway, because it is good for themselves personally.
2. They do not think that what they are doing is negative for the world, but they believe this due to motivated cognition.
3. They do not think that what they are doing is negative for the world, and this belief was not formed in a way that seems suspect.
From an act consequentialist perspective, these differences do not matter intrinsically, but they are still instrumentally relevant.[1]
I don’t mean to suggest that any one of these possibilities is particularly likely, or that they are all plausible. I haven’t followed this incident closely. FWIW, my vague sense is that the Mechanize founders had all expressed skepticism about the standard AI safety arguments for a while, in a way that seems hard to reconcile with (1) or (2).
It suggests the concern is an object-level one, not a meta one. The underlying “vibe” I am getting from a lot of these discussions is that the people in question have somehow betrayed EA/the community/something else. That is a meta concern, one of norms. You could “betray” the community even if you are on the AI deceleration side of things. If the people in question or Epoch made a specific commitment that they violated, that would be a “meta” issue, and it would be one regardless of their “side” on the deceleration question. Perhaps they did do such a thing, but I haven’t seen convincing evidence of it.

I think the main explanatory variable here is in fact which “side” this suggests they are on. If that is the case, it is worth having clarity about it. People can do a bad thing because they are simply wrong in their analysis of a situation or in their decision-making. That doesn’t mean their actions constitute a betrayal.