I wonder if the forum shouldn’t encourage a class of post (basically like this one) along the lines of “are there effective giving opportunities in X context?” Although EA is cause-neutral, there’s no reason why members shouldn’t take the opportunity provided by serendipity to investigate highly specific scenarios and model “virtuous EA behavior.” This could make the forum friendlier to visitors like the OP, and let comments introduce visitors to EA concepts in an emotionally relevant way.
If I understand you correctly, I agree. I understand the reason for quoting GiveWell’s framework; however, I think it is potentially discouraging to someone who is trying to do the most good in a context that they care about. That’s not to say that nobody should ever say “maybe there are more neglected causes you may not have thought about,” but the EA community certainly shouldn’t give the impression that we follow some strict ideology that no one can challenge.