The shift from Doing Good Better to this handbook reinforces my sense that there are two types of EA:
Type 1:
1. Causes: global health, farm animal welfare
2. Moral patienthood is hard to seriously dispute
3. Evidence is more direct (RCTs, corporate pledges)
4. Charity evaluators exist (because evidence is more direct)
5. Earning to give is a way to contribute
6. Direct work can be done by people with general competence
7. Economic reasoning is more important (partly due to donations being more important)
8. More emotionally appealing (partly due to being more able to feel your impact)
9. Some public knowledge about the problem
10. More private funding and a larger preexisting community
Type 2:
1. Causes: AI alignment, biosecurity
2. Moral patienthood can be plausibly disputed (if you’re relying on the benefits to the long term future; however, these causes are arguably important even without considering the long term future)
3. Evidence is more speculative (making prediction more important)
4. Charity evaluation is more difficult (because impact is harder to measure)
5. Direct work is the way to contribute
6. Direct work seems to benefit greatly from specific skills/graduate education
7. Game theory reasoning is more important (of course, game theory is technically part of economics)
8. Less emotionally appealing (partly due to being less able to feel your impact)
9. Little public knowledge about the problem
10. Less private funding and a smaller preexisting community
I don’t think my experience matches this split. For example, it isn’t obvious to me that the causes you specify match the attributes in points 2, 5, and 6.
I am somewhat confused by the framing of this comment: you start by saying “there are two types of EA,” but the points all seem to be about the properties of different causes.
I don’t think there are ‘two kinds’ of EAs in the sense that you could easily tell in advance which group people would fall into; rather, all of your characteristics follow as practical consequences of how important people find the longtermist view. (But I do think “a longtermist viewpoint leads to a very different approach” is correct.)
I’m also not sure how similar the global poverty and farm animal welfare groups actually are. There seem to be significant differences in terms of the quality of evidence used and how established they are as areas. Points 3, 4, 7, 9 and 10 seem to have pretty noticeable differences between global poverty and farm animal welfare.
Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.
I agree that there are substantial differences between global poverty and farm animal welfare (with global poverty being more clearly Type 1). But it seems to me that those differences are more differences of degree, while the differences between global poverty/farm animal welfare and biosecurity/AI alignment are more differences of kind.
Ah, I see. For some reason I got the other sense from reading your comment, but looking back at it, I think that was just a failure of reading comprehension on my part.
I agree that the differences between global poverty and animal welfare are more matters of degree, but I also think they are larger than people seem to expect.
What on Earth do you mean by “disputing moral patienthood”? If there are no moral patients then there is basically no reason for altruism whatsoever.
I also believe there are two broad types of EAs today, so this is interesting. However, I am a little confused about some of your points. Could you expand some of them into complete sentences?
2) How are these different between Type 1 and Type 2?
4) “Evidence is more direct” in what regard or context??
Lastly, the list seems skewed, favoring Type 2.
To me, it cannot be seriously disputed that improving the lives of currently alive humans is good, that improving the welfare of current and future animals is good, and that preventing the existence of farm animals who would live overall negative lives is good.
By contrast, I think that you can make a plausible argument that there is no moral value to ensuring that a person who would live a happy life comes into existence (though as noted above, you can make the case for reducing global catastrophic risks without relying on that benefit).
It’s easier to measure the effectiveness of the program being implemented by a global health charity, the effectiveness of that charity at implementing the program, and the effectiveness of an animal charity at securing corporate pledges than it is to measure the impact of biosecurity and AI alignment organizations.