Such a norm would make intellectual progress impossible. We'd just spend all day accusing each other of vague COIs. (E.g.: "Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It's worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.")
There's a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.
I don't think academic philosophy is the right frame of reference here.
We can imagine a range of human pursuits that form a continuum of concern about COIs. At one end, chess is a game of perfect information that is trivially obtained by chess critics. Even if COIs somehow existed in chess, thinking about them is unlikely to add value, because evaluating the player's moves will ~always be easier and more informative.[1] At the other end, a politician may vote on the basis of classified information, very imperfect information, and considerations for which it is very difficult to display reasoning transparency. I care about COIs a lot there!
I'm not a professional (or even amateur) philosopher, but philosophical discourse strikes me as much closer to the chess end of the continuum. Being a billionaire philanthropist seems closer to the middle. If we were grading EA/OP/GV by academic philosophy norms, I suspect we would fail some of their papers. As Thorstad has mentioned, there is little public discussion of key biorisk information on infohazard grounds (and he was unable to obtain the information privately either). We lack the information, such as a full investigation into the various concerns that have been raised, to fully evaluate whether GV has acted wisely in channeling tens of millions of dollars into CEA and other EVF projects. The recent withdrawal from certain animal-welfare subareas was not a paragon of reasoning transparency.
To be clear, it would be unfair to judge GV (or billionaire philanthropists more generally) by the standards of academic philosophy or chess. There's a good reason that the practice of philanthropy involves consideration of non-public (even sensitive) information and decisions that are difficult to convey with reasoning transparency. But I don't think it is appropriate to then apply those standards, which are premised on the ready availability of information and very high reasoning transparency, to the critics of billionaire philanthropists.
In the end, I don't find the basic argument for a significant COI against "anti-capitalist" interventions by a single random billionaire philanthropist (or by Dustin and Cari specifically) to be particularly convincing. But I do find the argument stronger when applied to billionaire philanthropists as a class. I don't think that's because I am anti-capitalist; I would also be skeptical of a system in which university professors controlled large swaths of the philanthropic funding base (they might be prone to dismissing the downsides of the university-industrial complex), or one in which people who had made their money through crypto did (I expect they would be quite prone to dismissing the downsides of crypto).
~~~~
As for us non-billionaires, the effect of (true and untrue) beliefs about what funders will/won't fund on what gets proposed and what gets done seems obvious. There's on-Forum evidence that being too far away from GV's political views (i.e., being "right-coded") is seen as a liability. So that doesn't seem like psychologizing, or a proposition that needs much support.
I set aside the question of whether someone is throwing matches or otherwise colluding.
One quick reason for thinking that academic philosophy norms should apply to the "institutional critique" is that it appears in works of academic philosophy. If people like Crary et al. are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they're writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I'm claiming that they haven't done a competent job of arguing for their thesis.