epistemic status: Borderline schizopost, not sure I’ll be able to elaborate much better on this, but posting anyway, since people always write that one should post on the forum. Feel free to argue against. But: Don’t let this be the only thing you read that I’ve written.
Effective Altruism is a Pareto Frontier of Truth and Power
In order to be effective in the world one needs to coordinate (exchange evidence, enact plans in groups, find shared descriptions of the world) and interact with hostile entities (people who lie, people who want to steal your resources, subsystems of otherwise aligned people who want to do those things, engage in public relations, or engage in zero-sum conflict). Solving these problems often requires trading off truth for “power” on the margin, e.g. by nudging members to “just accept” conclusions believed to be a basis for effective action (since making elaborate arguments common knowledge is costly, and agreement converges slowly, reaching ε difference only after O(1/ε²) bits of evidence-sharing), by misrepresenting beliefs to other actors to make them more favorable towards effective altruism, or by choosing easily communicable Schelling categories that maximize the minimum utility achieved by the most bounded agents.
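To make the O(1/ε²) figure concrete, here is a toy simulation (my illustration, not a formal version of any agreement theorem): agents pooling independent Bernoulli observations see the error of their shared estimate shrink like 1/√n, so driving the remaining disagreement with the truth below ε takes on the order of 1/ε² shared observations.

```python
import random

def rms_error(true_p: float, n: int, trials: int = 2000) -> float:
    """RMS error of the pooled frequency estimate after n shared observations."""
    err2 = 0.0
    for _ in range(trials):
        heads = sum(random.random() < true_p for _ in range(n))
        err2 += (heads / n - true_p) ** 2
    return (err2 / trials) ** 0.5

for n in (100, 400, 1600):
    # Quadrupling the shared evidence roughly halves the error: error ~ 1/sqrt(n),
    # so getting within eps of the truth takes on the order of 1/eps^2 observations.
    print(f"n={n:<5} rms error ≈ {rms_error(0.3, n):.3f}")
```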
At one end of the Pareto frontier one would have an even more akrasia-plagued version of the rationality community, with excellent epistemics but universally hated; at the other end one would have the attendees of this party.
Members of effective altruism do not seem explicitly aware of this tradeoff between truth-seeking and effectiveness/power (maybe for power-related reasons?), or at least don’t talk about it, even though it appears to be relevant.
In general, the thinking that has come out of LessWrong in the last couple of years strongly suggests that while (for ideal agents) there is no such tension in individual rationality (because true beliefs are convergently instrumental), this does not hold for groups of humans (and maybe not for groups of bounded agents in general, although some people believe that strong coordination is easy for highly capable bounded agents).
There is also the reverse effect, where having more truth leads to more power, for instance by realizing that in some particular case the efficient-market hypothesis (EMH) is false.
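As a toy illustration of that last point (mine, not from the post, with hypothetical numbers): if you correctly believe an event is more likely than the market price implies, the Kelly criterion converts that informational edge directly into expected growth of your bankroll.

```python
import math

def kelly_fraction(p: float, odds: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a binary bet.

    p:    your (true) probability that the bet wins
    odds: net payout per unit staked (e.g. a market price of 0.25
          on a binary contract pays odds = 0.75 / 0.25 = 3.0)
    """
    return p - (1 - p) / odds

def expected_log_growth(p: float, odds: float, f: float) -> float:
    """Expected log-growth per bet when staking fraction f."""
    return p * math.log(1 + f * odds) + (1 - p) * math.log(1 - f)

# Hypothetical mispricing: the market prices an event at 0.25 (3:1 odds),
# but your better model says it happens with probability 0.40.
p, odds = 0.40, 3.0
f = kelly_fraction(p, odds)
print(f"Kelly fraction: {f:.2f}")                        # stakes 20% of bankroll
print(f"Log growth/bet: {expected_log_growth(p, odds, f):.4f}")
```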