Consider the premise that the current instantiation of Effective Altruism is defective, and that one of the only viable remedies is some action by Open Philanthropy.
By “defective”, I mean:
A. EA struggles to engage even a base of younger “HYPS” and “FAANG” people, much less the millions of altruistic people with free time and resources. EA also seems to have less acceptance in the “wider non-profit world” than it should.
B. Valuable projects funded by or associated with Open Philanthropy and EA often seem to merely “work alongside EA”. Some constructs or side effects of EA, such as the current instantiations of Longtermism and “AI Safety”, have negative effects on community development.
Elaborations on A:
Posts like “Bycatch”, “Mistakes on road”, and “Really, really, hard” suggest serious structural issues and the underuse of large numbers of valuable, highly engaged people.
Interactions in meetings with senior people in philanthropy indicate low buy-in. For example, in a private, high-trust meeting, a leader mentions skepticism of EA, and when I ask for elaboration, the leader pauses, visibly shifts uncomfortably on the Zoom screen, and begins slowly, “Well, they spend time in rabbit holes…”. While anecdotal, this also hints that broader “data” may be unavailable due to reluctance to speak (to be clear, fear of offending institutions associated with large amounts of funding).
Elaboration on B:
Consider Longtermism and AI as either manifestations of these issues or intermediate causes of them:
The value of the present instantiations of “Longtermism” and “AI” is far more modest than it appears.
This is because they largely amount to rephrasings of existing ideas, and their work usually treads inside a specific circle of competence. This means that no matter how stellar, their activities contribute little to addressing the actual issues.
This is not benign, because these activities unintentionally allow the backing-in of worldviews that encroach upon the culture and execution of EA in other areas and as a whole. They produce “shibboleths” that run into the teeth of EA’s presentation issues. They also draw attention and interest away from under-provisioned cause areas that are esoteric and unpopularized.
Aside: This question would benefit from sketches of solutions and of the counterfactual state of EA, but this isn’t workable, as the question is already lengthy, may be contentious, and contains flaws. Another aside: causes are not zero-sum, and it is not clear this question contains a criticism of Longtermism or AI as concerns; even stronger criticism can be consistent with, say, ten times current funding.
In your role in setting strategy for Open Philanthropy, will you consider the above premise and the three questions below?
1. To what degree do you agree with the characterizations above, or (perhaps unfair to ask) with similar criticisms?
2. What evidence would cause you to change your answer to question #1? (E.g., if you believed EA was defective, what would disprove this in your mind? Or, if you disagreed with the premise, what evidence would be required for you to agree?)
3. If there is a structural issue in EA, and Open Philanthropy could in theory intervene to remedy it, is there any reason that would prevent intervention, for example from an entity/governance perspective or from a practical perspective?