This project seems relevant: an app to track COVID-19. Given the lack of testing in, e.g., the US (and anecdotal evidence from my own social circle suggesting it’s already more prevalent than official statistics indicate), simple data-gathering seems especially valuable.
This isn’t a coherent rationalization for reasons covered in tedious detail in the longer series.
The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case, GiveWell is on record as not wanting their cost-per-life-saved numbers to be held to the standard of “literally true” (one side of that disjunction), so I don’t see the point in going through that whole argument again.
Drowning children are rare
Your impression that the EA community profits from the perception of utilitarianism is the opposite of the reality: utilitarianism is more likely to have a negative reputation in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You’re also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.
I’ve talked with a few people who seemed to be under the impression that the EA orgs making recommendations were performing some sort of quantitative optimization to maximize some sort of goodness metric, and who used those recommendations on that basis, because they themselves accepted some form of normative utilitarianism.
Academia has influence on policymakers when it can help them achieve their goals; that doesn’t mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory-tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
The QALY paradigm does not come from utilitarianism. It originated in the economics and healthcare literature, to meet the needs of policymakers and funders who already had utilitarian-ish goals.
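For concreteness, the standard formula from that literature, in my own notation (a minimal sketch, not any particular paper’s version):

$$\text{QALYs} = \sum_{i} u_i \, t_i$$

where $t_i$ is the time spent in health state $i$ and $u_i$ is that state’s quality weight, with $u = 1$ for full health and $u = 0$ for death.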
I agree with Keynes on this, you disagree, and neither of us has really offered much in the way of argument or evidence; you’ve just asserted a contrary position.
The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories, this is something that rarely makes a big dent in popular culture let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it’s not clear if any of the disputes with North Korea can actually be cruxed to disagreements of moral theory.
The idea that compromising with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in such discourse. And there are widely recognized barriers to it, which don’t vanish just because you rephrase it in the language of utilitarianism and AGI.
So, no one should try this, it would be crazy to try, and besides we don’t know whether it’s possible because we haven’t tried, and also competent people who know what they’re doing are working on it already so we shouldn’t reinvent the wheel? It doesn’t seem like you tried to understand the argument before trying to criticize it, it seems like you’re just throwing up a bunch of contradictory objections.
The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it’s possible for alternative or backchannel efforts to be positive, they are far from being the “obvious” choice.
Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.
All these seem like straightforward objections to supporting things like GiveWell or the global development EA Fund (vs joining or supporting establishment aid orgs or states which have more competence in meddling in less powerful countries’ internal affairs).
Second, as long as your actions impact everything, a totalizing metric might be useful.
Wait, is your argument seriously “no one does this so it’s a strawman, and also it makes total sense to do for many practical purposes”? What’s really going on here?
Actual totalitarian governments have existed, and they have not used such a metric (AFAIK).
Linear programming was invented in the Soviet Union (by Kantorovich) to centrally plan production as a single computational optimization.
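As a minimal sketch of what that looks like (hypothetical numbers; scipy standing in for the hand computation Kantorovich’s group actually did):

```python
# A Kantorovich-style production plan as a linear program:
# maximize total output value subject to resource constraints.
# All numbers here are made up for illustration.
from scipy.optimize import linprog

value = [3.0, 5.0]            # value per unit of goods A and B
usage = [[2.0, 4.0],          # labor-hours needed per unit
         [1.0, 3.0]]          # tons of steel needed per unit
capacity = [100.0, 60.0]      # available labor-hours, steel

# linprog minimizes, so negate the objective to maximize.
result = linprog(c=[-v for v in value],
                 A_ub=usage, b_ub=capacity,
                 bounds=[(0, None)] * 2)

print(result.x)    # optimal quantities of each good
print(-result.fun) # total planned output value
```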
The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy’s familiarity with reports from sources like GiveWell, ACE, and the amateur analyses that get posted around here.
Some claim to, others don’t.
I worked at GiveWell / Open Philanthropy Project for a year and wrote up some of those reports. GiveWell explicitly does not score all recommendations on a unified metric; I linked to the “Sequence vs Cluster Thinking” post, which makes this quite clear. But at the time, there were four paintings on the wall of the GiveWell office illustrating the four core GiveWell values, and one was titled “Utilitarianism,” which is distinguished from other moral philosophies (and in particular from the broader class “consequentialism”) by the claim that you should use a single totalizing metric to assess right action.
Should Effective Altruism be at war with North Korea?
“Compared to a Ponzi scheme” seems like a pretty unfortunate compression of what I actually wrote. Better would be to say that I claimed that a large share of ventures, including a large subset of EA, and the US government, have substantial structural similarities to Ponzi schemes.
Maybe my criticism would have been better received if I’d left out the part that seems to be hard for people to understand; but then it would have been different and less important criticism.
“retry the original case with double jeopardy”
This sort of framing leads to publication bias. We want double jeopardy! This isn’t a criminal trial, where the coercive power of a massive state is being pitted against an individual’s limited ability to defend themselves. This is an intervention people are spending loads of money on, and it’s entirely appropriate to continue checking whether the intervention works as well as we thought.
As I understand the linked page, it’s mostly about retroactive rather than prospective observational studies, and usually for individual rather than population-level interventions. A plan to initiate mass bednet distribution on a national scale is pretty substantially different from that, and doesn’t suffer from the same kind of confounding.
Of course it’s mathematically possible that the data are so noisy, relative to the effect size of the supposedly most cost-effective global health intervention out there, that we shouldn’t expect the impact of the intervention to show up. But I haven’t seen evidence that anyone at GiveWell actually did the relevant calculation to check whether this is the case for bednet distributions.
If they did the follow-ups and malaria rates held stable or increased, you would not then believe that the bednets do not work; if it takes randomized trials to justify spending on bednets, it cannot then take only surveys to justify not spending on bednets, as the causal question is identical.
It’s hard for me to believe that bednets have an effect large enough to show up in RCTs, but not large enough to show up more often than not after mass distribution. If absence of this evidence really isn’t strong evidence of no effect, it should be possible to show that with specific numbers, not just handwaving about noise. And I’d expect that to be mentioned in the top-level summary on bednet interventions, not buried in a supplemental page.
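To make concrete what “the relevant calculation” might look like, here’s a minimal sketch with made-up numbers (the prevalence, effect size, and sample size are all assumptions on my part, not GiveWell figures):

```python
# Rough power calculation: given survey noise, how likely is the
# malaria-prevalence drop implied by the RCTs to show up in
# before/after surveys? All inputs below are assumed for illustration.
import math
from scipy import stats

baseline = 0.30    # assumed pre-distribution malaria prevalence
reduction = 0.17   # assumed relative reduction implied by RCTs
n = 2000           # assumed survey sample size per round

p0, p1 = baseline, baseline * (1 - reduction)
se = math.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
z = (p0 - p1) / se  # z-statistic of the before/after difference

# Approximate power at alpha = 0.05, two-sided.
power = stats.norm.cdf(z - stats.norm.ppf(0.975))
print(f"z = {z:.2f}, power = {power:.2f}")
```

If the power comes out high under realistic inputs, then flat or rising malaria rates after distribution really would be evidence against the intervention; if it comes out low, that’s the “too noisy to tell” case, and it should be shown with numbers like these rather than gestured at.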
One simple example: https://en.wikipedia.org/wiki/Grade_inflation
More generally, things like the profusion of makework designed to superficially resemble teaching, instead of optimizing for outcomes.
We should also expect this to mean that countries such as Australia and China, which heavily weight a national exam system when advancing students at crucial stages, will have less corrupt educational systems than countries like the US, which weight locally assessed factors like grades heavily.
(Of course, there can be massive downsides to standardization as well.)
I think the thing to do is try to avoid thinking of “bureaucracy” as a homogeneous quantity, and instead attend to the details of institutions involved. Of course, as a foreigner with respect to every country but one’s own, this is going to be difficult to evaluate when giving abroad. This is one of the many reasons why giving effectively on a global scale is hard, and why it’s so important to have information feedback of the kind GiveDirectly is working on. Long-term follow-up seems really important too, and even then there’s going to be some substantial justified uncertainty.
I know of one related effort: https://twitter.com/webdevMason/status/1234216664113135616
http://www.coepi.org/