Thanks for the link, Michael—I’d missed that post, and it’s indeed related to the current one.
Thanks, Joey, for writing this up. My worry is that making any hard rules for what counts as representative may do more harm than good, if only due to deep (rational) disagreements that may arise on any particular issue. The example Michael mentions is a case in point: while I don’t necessarily disagree that research on AI safety is worth pursuing (though see the disagreements between Yann LeCun, the head of AI research at Facebook, and Bostrom’s arguments), I find the transparency of the criteria EA organizations use to decide which projects to fund unsatisfactory, to the point of endangering the EA movement and its reputation when it comes to the claim that EA is about effective paths to reducing suffering. The primary problem here, as I argued in this post, is that it remains unclear why the currently funded projects should count as effective and efficient scientific research.
In view of this, I find it increasingly frustrating to associate myself with the EA movement and its recent development, especially since the efficiency of scientific research is the very topic of my own research. The best I can do is to treat this as an issue of peer disagreement, where I keep open the possibility that I might be wrong after all. However, this also means we have to keep an open dialogue, since either side in the disagreement may turn out to be wrong, but this doesn’t seem easy. For instance, as soon as I mention any of these issues on this forum, a few downvotes tend to pop up, with no counterargument provided (edit: this current post ironically turned out to be another case in point ;)
So altogether, I’m not sure I feel comfy associating myself with the EA community, though I do deeply care about the idea of effective charity and the effective reduction of suffering. And introducing a rule-book which would claim, for instance, that EAs support the funding of research on AI safety would make me feel just as uncomfy, not because of the idea in principle, but because of its current execution.
EDIT: Just wanted to add that the proposal for community-building organizations to strive for cause indifference sounds like a nice solution.
Should probably mention I have raised similar concerns before in this post: ‘the marketing gap and a plea for moral inclusivity’