I think this is actually a good example of the dynamic the author is pointing at.
While some people simply care about doing the most good, others care about doing the most good in a particular area, and barring the second kind of person from EA is, in my opinion, both suboptimal and not conducive to learning. More importantly, assuming cause neutrality in this fashion is precisely one of the differences between EA as a question and EA as an ideology.
Cause selection being strictly guided by neutral calculation will likely cause a lot of lost potential, for reasons you’ve pointed to (I have some difficulty parsing this paragraph and am not sure where you think it’s appropriate or inappropriate to factor in personal fit and passion):
Personal fit and passion for what you do are absolutely important. But when we’re comparing causes, actions, or careers in terms of impact (or ITN), our answer shouldn’t depend on our personal interests and passions. When we’re taking action based on those answers, though, we should think about personal fit and passion, as these prevent us from being miserable while we’re pursuing impact.
More importantly, the impact of many causes is much more difficult to measure quantitatively and definitively, let alone meaningfully. These causes are de facto left out of EA discussion, or “lose” to causes that allow for cleaner and easier quantitative analysis, which does not seem ideal, as it leads to a lot of lost potential.
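To make the measurability point concrete, here’s a toy Monte Carlo sketch (every number is hypothetical, picked only to illustrate the shape of the problem): a well-measured cause with a narrow estimate range versus a hard-to-measure cause whose plausible range spans orders of magnitude. Which one “wins” depends on how you aggregate the uncertainty.

```python
# Toy sketch of an "impact per dollar" comparison between two causes.
# All numbers are hypothetical and chosen purely for illustration.
import math
import random
import statistics

random.seed(0)

def sample_impact(low: float, high: float) -> float:
    """Draw one impact-per-dollar estimate from a log-uniform range,
    a simple way to represent order-of-magnitude uncertainty."""
    return math.exp(random.uniform(math.log(low), math.log(high)))

N = 10_000

# Cause A: well measured, so its plausible range is narrow.
cause_a = [sample_impact(0.8, 1.2) for _ in range(N)]

# Cause B: hard to measure, so its plausible range spans roughly
# three orders of magnitude.
cause_b = [sample_impact(0.02, 20.0) for _ in range(N)]

print(f"Cause A mean {statistics.mean(cause_a):.2f}, "
      f"median {statistics.median(cause_a):.2f}")
print(f"Cause B mean {statistics.mean(cause_b):.2f}, "
      f"median {statistics.median(cause_b):.2f}")

# B's mean beats A's while its median loses: the "winner" depends on
# how you aggregate uncertainty, which is one sense in which
# hard-to-quantify causes can lose by default.
wins = sum(b > a for a, b in zip(cause_a, cause_b))
print(f"P(B beats A on a single draw): {wins / N:.2f}")
```

The point of the sketch isn’t any particular number; it’s that for hard-to-measure causes the ranking can flip depending on a modelling choice, so a strictly “neutral” calculation quietly encodes a preference for causes with clean data.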
If you’ll forgive the marketing terminology, I think cause neutrality is EA’s Unique Selling Point. It’s the main thing EA brings to the table, its value-add, the thing that’s so hard to find anywhere else. It’s great that people committed to particular causes want to be as effective as possible within them—better than not caring much for effectiveness at all—but there are other places they can find company and support. EA can’t be for literally everyone; otherwise it doesn’t mean anything. So it has to draw a line somewhere, and I think the most natural place is around the idea/behaviour/value that makes EA most distinctive (and, I would argue, most impactful).
To your second bullet point, I can’t think of an area where it’s more difficult to measure impact quantitatively and definitively than longtermism.
I agree that EA can’t be for everyone and I don’t think it should try to be, but I personally don’t think that cause neutrality is EA’s unique selling point or the main thing it brings to the table, although I do understand that there are different approaches to EA.
To your second bullet point, I can’t think of an area where it’s more difficult to measure impact quantitatively and definitively than longtermism.
I agree that longtermist impact isn’t really measurable, but this makes it hard for me to reconcile cause neutrality with longtermism, rather than convincing me that rigid cause neutrality wouldn’t have the effect I described.