There are many problems here:
There is no clear distinction between preparations for offense and preparations for defense. The absence of that distinction is precisely what gives rise to threats and instability in cases like North Korea, and the ambiguity stems from structural problems of limited information and the nature of military forces, not from the ideologies of the current milieu.
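To make the structural point concrete, here is a toy sketch of the security dilemma as a 2x2 game. The payoff numbers are my own illustrative assumptions, not anything from the discussion above: they just encode the standard situation where arming is each side's best response whatever the other side does, so both sides arm even though both would prefer mutual disarmament.

```python
# Toy security-dilemma game (illustrative payoffs, assumptions mine).
# Each state chooses ARM or DISARM; arms are observationally equivalent
# whether intentions are offensive or defensive, so each side must plan
# against the other's capabilities rather than its intentions.

ARM, DISARM = 0, 1

# payoffs[player][my_move][their_move] -> payoff to that player
payoffs = {
    "A": [[1, 4],   # I arm:    (they arm, they disarm)
          [0, 3]],  # I disarm: (they arm, they disarm)
    "B": [[1, 4],
          [0, 3]],
}

def best_response(player, their_move):
    """Return the move that maximizes this player's payoff against their_move."""
    return max((ARM, DISARM), key=lambda m: payoffs[player][m][their_move])

# Arming strictly dominates: it is the best response to either move,
# for both players, regardless of anyone's ideology or intentions.
for them in (ARM, DISARM):
    assert best_response("A", them) == ARM
    assert best_response("B", them) == ARM

# Yet mutual arming (payoff 1 each) is worse than mutual disarmament (3 each).
assert payoffs["A"][ARM][ARM] < payoffs["A"][DISARM][DISARM]
```

The instability comes entirely from the payoff structure and the information problem; nothing in the model refers to either side's moral beliefs, which is the sense in which the problem is structural rather than ideological.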
The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it’s possible for alternative or backchannel efforts to be positive, they are far from being the “obvious” choice.
Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.
The idea of reaching a compromise by devising a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories; they rarely make a dent in popular culture, let alone in the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when you could instead pursue the far less Sisyphean task of compromising only on the things that actually matter for the dispute at hand. Finally, it's not clear that any of the disputes with North Korea actually reduce to disagreements over moral theory.
The idea that compromise with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in that discourse, and there are widely recognized barriers to it, which don't vanish just because you rephrase the problem in the language of utilitarianism and AGI.
Academia has influence on policymakers when it can help them achieve their goals; that doesn't mean it always has influence. There is a huge difference between the practical guidance offered by regular IR scholars and groups such as FHI, and ivory-tower moral philosophy that just tells people what moral beliefs they should hold. The latter has no direct effect on government business, and probably very little indirect effect.
The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of policymakers and funders who already had utilitarian-ish goals.
Your claim that the EA community profits from being perceived as utilitarian is the opposite of reality: utilitarianism tends to have a negative reputation in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence thinking and cluster thinking are the latter.
Talking about people or countries as rational agents with utility functions does not mean we have to pretend that they act on the basis of moral theories like utilitarianism.
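A minimal sketch of that distinction, with payoff numbers and names that are entirely my own hypothetical constructions: "rational agent" just means choosing the action that maximizes some utility function, and nothing forces that function to be a utilitarian welfare sum.

```python
# Toy sketch (all values hypothetical). A "rational agent" model commits
# us only to utility maximization, not to any particular moral theory.

actions = ["build_weapons", "disarm"]

# Illustrative payoffs to the regime itself (survival, leverage, prestige).
regime_utility = {"build_weapons": 10, "disarm": 2}

# A utilitarian world-welfare function would rank the same actions oppositely.
world_welfare = {"build_weapons": -100, "disarm": 50}

def rational_choice(utility):
    """Pick the action maximizing the given utility function."""
    return max(actions, key=lambda a: utility[a])

# The same formal machinery handles either utility function; modeling a
# state as rational means using the descriptive one, not the moral one.
assert rational_choice(regime_utility) == "build_weapons"
assert rational_choice(world_welfare) == "disarm"
```

The utility function here is a descriptive summary of revealed preferences, which is exactly why attributing one to a state implies nothing about that state acting on utilitarianism or any other moral theory.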
I’ve talked with a few people who were under the impression that the EA orgs making recommendations were performing some sort of quantitative optimization of a goodness metric, and who followed those recommendations on that basis, because they themselves accepted some form of normative utilitarianism.
It is perceived that way, but that doesn’t mean the perception is beneficial. It’s better if people see EA as making weaker philosophical claims, like maximizing welfare in the context of charity, rather than taking on the full utilitarian theory and everything it says about trolleys, torturing terrorists, and so on. Quantitative optimization should be perceived as a contextual tool applied bottom-up to answer practical questions, not as part of a whole moral theory. That is really how cost-benefit analysis has already been used.
All these seem like straightforward objections to supporting things like GiveWell or the global development EA Fund (as opposed to joining or supporting establishment aid orgs or states, which have more competence in meddling in less powerful countries’ internal affairs).
Creating GiveWell wasn’t obvious either, until people noticed a systematic flaw (the lack of serious impact analysis) that warranted a new approach. Here, we would likewise need to identify a systematic flaw in how regular diplomacy and deterrence efforts approach things. Professionals do regard North Korea as a threat, but not in a naive “they’re just evil, crazy aggressors” sense; they already know that deterrence is a mutual problem. I can see why one might be cynical about US government efforts, but there are more players than the US government.
The Logan Act doesn’t present an obstacle to aid efforts. You’re not intervening in a dispute with the US government, you’re just supporting the foreign country’s local programs.
EAs have a perfectly good working understanding of the microeconomic impacts of aid; at least, GiveWell and similar orgs do. As for macroeconomic and institutional effects, granted, not as much, but I still feel more confident there than I do about international relations and strategic policy. We have lots of economists and very few international relations people, and I think EAs show more overconfidence when they talk about nuclear security and foreign policy.
I agree with Keynes on this, you disagree, and neither of us has really offered much in the way of an argument or evidence, you’ve just asserted a contrary position.
So, no one should try this, it would be crazy to try; besides, we don’t know whether it’s possible because we haven’t tried; and also competent people who know what they’re doing are already working on it, so we shouldn’t reinvent the wheel? It doesn’t seem like you tried to understand the argument before criticizing it; it seems like you’re just throwing up a bunch of contradictory objections.
It’s different because they have the right approach to compromise. They work on compromises grounded in political interests rather than moral values, and on compromises that solve the task at hand rather than setting the record straight on everything. And while they have failures, the reasons for those failures are structural (commitment problems, honesty, political constraints, uncertainty), so you cannot avoid them just by changing the ideologies involved.