Perhaps unnecessary to say this, but in case it is helpful: my concern is that the way this is structured, in relation to the preceding part on AI and humans, could be perceived as equating Palestinians with (potentially dangerous) machines and Israelis with humans. The piece stands very well on its own without these four words.
Ulrik—I understand your point, sort of, but feel free to reverse any of these human-human alignment examples in whatever ways seem more politically palatable.
Personally, I’m fairly worried about agentic, open-source AGIs being used by Jihadist terrorists. But very few of the e/accs and AI devs advocating open-source AGI seem worried by such things.
I think this comment makes things even worse: some readers might perceive you as now equating Palestinians with terrorists. I really do not think this sort of language belongs on a forum with a diversity of people from all walks of life (and ideally it does not belong anywhere). That people upvote your comment is also worrying. Let us try to keep the forum a place where as many people as possible feel comfortable, and where we check our own biases and collaborate on creating an atmosphere reflecting wide-ranging altruism.
I think you’re significantly misinterpreting what Geoffrey is trying to say, and I don’t like the chilling effect caused by trying to avoid making any analogy that could be offensive to someone who misinterprets you.
I agree that the chilling effect is unfortunate. I also think the chilling effect reaches further: there is a chilling effect on those who find this language troublesome, and perhaps even more so on those troubled by the moderators and forum members who condone it. When I wrote my original comment, the author could simply have removed this particular analogy from their post. I would then have been OK with removing my comment, and I would likely have removed my downvote of the post as well; perhaps people would not even have noticed, and the chilling effect would have been much reduced. I also think humility is good for EAs to practice, and taking on board reasonable criticism and changing our behavior seems a good way to exercise such humility.