What do you think would constitute being a “sellout and traitor”?

In the case at hand, Matthew would have had to at some point represent himself as supporting slowing down or stopping AI progress. For at least the past 2.5 years, he has been arguing against doing that in extreme depth on the public internet. So I don’t really see how you can interpret him starting a company that aims to speed up AI as inconsistent with his publicly stated views, which seems like a necessary condition for him to be a “traitor”. If Matthew had previously claimed to be a pause AI guy, then I think it would be more reasonable for other adherents of that view to call him a “traitor.” I don’t think that’s raising the definitional bar so high that no one will ever meet it—it seems like a very basic standard.
I have no idea how to interpret “sellout” in this context, as I have mostly heard that term used for such situations as rappers making washing machine commercials. Insofar as I am familiar with that word, it seems obviously inapplicable.
I’m obviously not Matthew, but the OED defines them like so:
sell-out: “a betrayal of one’s principles for reasons of expedience”
traitor: “a person who betrays [is gravely disloyal to] someone or something, such as a friend, cause, or principle”
Unless he is lying about what he believes—which seems unlikely—Matthew is not a sell-out, because on his worldview Mechanize is good, or at minimum not bad, for the world. Hence, he is not betraying his own principles.
As for being a traitor, I guess the first question is: a traitor to what? To EA principles? To the AI safety cause? To the EA or AI safety community? In order:
I don’t think Matthew is gravely disloyal to EA principles, as he explicitly says he endorses them and has explained how his decisions make sense on his worldview.
I don’t think Matthew is gravely disloyal to the AI safety cause, as he’s been openly critical of many common AI doom arguments for some time, and you can’t be disloyal to a cause you never really bought into in the first place.
Whether Matthew is gravely disloyal to the EA or AI safety communities feels less obvious to me. I’m guessing a bunch of people saw Epoch as an AI safety organisation, and by extension its employees as members of the AI safety community, even if the org and its employees did not necessarily see themselves that way, and felt betrayed for that reason. But it still feels off to me to call Matthew a traitor to the EA or AI safety communities, especially given that he’s been critical of common AI doom arguments. This feels more like a difference over empirical beliefs than a difference over fundamental values, and it seems wrong to me to call someone gravely disloyal to a community for drawing unorthodox but reasonable empirical conclusions and acting on them, while broadly having similar values. I think people should be allowed to draw conclusions (or even change their minds) based on evidence—and act on those conclusions—without it being betrayal, assuming they broadly share the core EA values and are being thoughtful about it.
(Of course, it’s still possible that Mechanize is a net-negative for the world, even if Matthew personally is not a sell-out or a traitor or any other such thing.)
Yes, I understand the arguments against it applying here. My question is whether the threshold is being set so high that it basically never applies to anyone. Hence my asking for examples that would qualify.
Sellout (in the context of Epoch) would apply to someone, e.g., concealing data or refraining from publishing a report in exchange for a prospective job at an existing AI company.
As for traitor, I think the only group here that can be betrayed is humanity as a whole, so as long as one believes they’re doing something good for humanity I don’t think it’d ever apply.
Hmm, that seems off to me? Unless you mean “severe disloyalty to some group isn’t Ultimately Bad, even though it can be instrumentally bad”. But to me it seems useful to have a concept of group betrayal, and to consider betrayal generally bad, since I think group loyalty is often a useful norm that’s good for humanity as a whole.
Specifically, I think group-specific trust networks are instrumentally useful for cooperating to increase human welfare. For example, scientific research can’t be carried out effectively without some amount of trust among researchers, and between researchers and the public, etc. And you need some boundary for these groups that’s much smaller than all humanity to enable repeated interaction, mutual monitoring, and norm enforcement. When someone is severely disloyal to one of those groups they belong to, they undermine the mutual trust that enables future cooperation, which I’d guess is ultimately often bad for the world, since humanity as a whole depends for its welfare on countless such specialised (and overlapping) communities cooperating internally.
It’s not that I’m ignoring group loyalty, just that the word “traitor” seems so strong to me that I don’t think there’s any smaller group here that’s owed that much trust. I could imagine a close friend calling me that, but not a colleague. I could imagine a researcher saying I “betrayed” them if I steal and publish their results as my own after they consulted me, but that’s a much weaker word.
[Context: I come from a country where people are labeled traitors for holding anti-war political views like mine, and I don’t feel such usage of this word has done much good for society here...]