A kinder concept than bias would be conflict of interest. In the broader society, we normally don’t expect a critic to prove actual biased decision-making to score a point; identifying a meaningful conflict of interest is enough. And it’s not generally considered “psychologizing those [one] disagrees with” to point to a possible COI, even if the identification is mediated by assumptions about the person’s internal mental functions.
Such a norm would make intellectual progress impossible. We’d just spend all day accusing each other of vague COIs. (E.g.: “Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It’s worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.”)
There’s a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.
I don’t think academic philosophy is the right frame of reference here.
We can imagine a range of human pursuits that form a continuum of concern about COIs. At one end, chess is a game of perfect information, and that information is trivially available to chess critics. Even if COIs somehow existed in chess, thinking about them is really unlikely to add value, because evaluating the player’s moves will ~always be easier and more informative.[1] At the other end, a politician may vote on the basis of classified information, very imperfect information, and considerations for which it is very difficult to display reasoning transparency. I care about COIs a lot there!
I’m not a professional (or even amateur) philosopher, but philosophical discourse strikes me as much closer to the chess end of the continuum. Being a billionaire philanthropist seems closer to the middle. If we were grading EA/OP/GV by academic philosophy norms, I suspect we would fail some of their papers. As Thorstad has mentioned, there is little public discussion of key biorisk information on infohazard grounds (nor was he able to obtain the information privately). We lack the information—such as a full investigation into the various concerns that have been raised—needed to fully evaluate whether GV has acted wisely in channeling tens of millions of dollars into CEA and other EVF projects. And the recent withdrawal from certain animal-welfare subareas was not a paragon of reasoning transparency.
To be clear, it would be unfair to judge GV (or billionaire philanthropists more generally) by the standards of academic philosophy or chess. There’s a good reason that the practice of philanthropy involves consideration of non-public (even sensitive) information and decisions that are difficult to convey with reasoning transparency. But I don’t think it is appropriate to then apply those standards—which are premised on the ready availability of information and very high reasoning transparency—to the critics of billionaire philanthropists.
In the end, I don’t find the basic argument for a significant COI against “anti-capitalist” interventions by a single random billionaire philanthropist (or by Dustin and Cari specifically) particularly convincing. But I do find the argument stronger when applied to billionaire philanthropists as a class. I don’t think that’s because I am anti-capitalist—I would also be skeptical of a system in which university professors controlled large swaths of the philanthropic funding base (they might be prone to dismissing the downsides of the university-industrial complex), or one in which people who had made their money through crypto did so (I expect they would be quite prone to dismissing the downsides of crypto).
~~~~
As for us non-billionaires, the effect of (true and untrue) beliefs about what funders will / won’t fund on what gets proposed and what gets done seems obvious. There’s on-Forum evidence that being too far from GV’s political views (i.e., being “right-coded”) is seen as a liability. So that doesn’t seem like psychologizing, or a proposition that needs much support.
One quick reason for thinking that academic philosophy norms should apply to the “institutional critique” is that it appears in works of academic philosophy. If people like Crary et al. are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they’re writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I’m claiming that they haven’t done a competent job of arguing for their thesis.
[1] I set aside the question of whether someone is throwing matches or otherwise colluding.