I would suggest that the EA definition of “impact” has been developed to address a certain set of problems with measurable outcomes, making it useful but incredibly narrow.
My personal belief is that there is a lot of scope for other forms of impact that are mostly or entirely distinct from Effective Altruism. I know it’s been written that effective altruists love systemic change, and indeed many people affiliated with EA pursue such change, but it’s not the only (or, in my opinion, even the primary) mechanism by which systemic change can/will occur.
Gandhi and Mandela were political actors who achieved far-reaching impact by dint of their positioning within a particular institution or system, and then radically opposing it on principle. Many of their actions fall very far outside the framework of most EA organisations for reasons I don’t think I need to go into very much. A brief overview of either of their biographies with the question “would an EA philosophy have advised this decision?” illustrates this point.
Coming at it from the other direction, I see EA as a philosophy dedicated to applying certain rationalist ideas and approaches to specific moral and ethical problems. As an institution, though, what is the structure of CEA and its offshoots? How does this affect the questions it attempts to address, and what are the assumptions of these questions?
80k provides the easiest example of what I mean; it’s very clearly aimed at university graduates, overwhelmingly from wealthy countries, with enough material, cultural, and intellectual resources to achieve some measure of change through an impactful career. This is excellent, but a rather specific way of achieving impact, and it operates with a number of prerequisites.
There is nothing inherently wrong with EA’s existence, and I fully support the basic idea of rationally figuring out how to do the most good if you are coming from a particular starting position. I also think that EA currently cannot begin to address the kind of questions that motivate actors to become extraordinarily impactful in other ways, like the standout non-EAs on your list.
To be clear: one form of impact doesn’t preclude pursuing another. If I were to give advice it would be to pursue impact on multiple levels, ‘EA’ and not, quantifiable and not.