Yeah despite having studied philosophy I also found this a little impenetrable. It keeps saying things like,
values are simultaneously woven into the fabric of reality and such that we require particular sensitivities to recognise them
and that these views came from some women philosophers at Oxford and Durham, but never really explaining what they mean.
To the extent I felt I understood it, this was only by pattern-matching to the usual criticisms of EA and utilitarianism, like ‘too impersonal’ and ‘not left wing enough’. But this means I wasn’t able to get much new from it.
Unfortunately most cost-effectiveness estimates are calculated by focusing on the specific intervention the charity implements, a method which is a poor fit for large diversified charities.
he is asking you to consider how it typically feels like to listen to muzak and eat potatoes
I always found this very confusing. Potatoes are one of my favourite foods!
Can we at least have a consensus and commitment that we go back to the previous norm after this election, to prevent a slippery slope where engaging in partisan politics becomes increasingly acceptable in EA?
Unfortunately I expect that in four years’ time partisans will decide that 2024 is the new most important election in history and hence would renege on any such agreement.
I wonder to what extent this springs from the fact that most pastors do not expect most of their congregants to achieve great things. Presumably if you are a successful missionary who converts multiple people, your instrumental value significantly exceeds your intrinsic value, so I wonder if they have the same feelings. An extreme case would be someone like Moses, whose intrinsic value presumably paled into insignificance compared to his instrumental value as a saviour of the Israelites and passing on the Word of God.
In any case, I think there is a strong case to be made for spending resources on yourself for non-instrumental reasons. Even if you don’t think you matter more than anyone else, you definitely don’t matter less than them! And you have a unique advantage in spending resources to generate your own welfare: an intimate understanding of your own circumstances and preferences. When we give to help others, it can be very difficult to figure out what they want and how to best achieve that. In contrast, I know very well which things I have been fixated on!
I didn’t downvote, but I could imagine someone thinking Halstead had been ‘tricked’ - forced into compliance with a rule that was then revoked without notifying him. If he had been notified he might have wanted to post his own job adverts in the last few years.
Personally I share your intuitions that the occasional interesting job offer is good, but I don’t know how this public goods problem could be solved. No job ads might be the best solution, for all that I enjoyed this one.
While economics is often derided as the dismal science, I believe that economists have done much to improve policymaking in the world.
In keeping with the abolitionist origins of the phrase:
Carlyle’s target was … economists such as John Stuart Mill, who argued that it was institutions, not race, that explained why some nations were rich and others poor. Carlyle attacked Mill … for supporting the emancipation of slaves. It was this fact—that economics assumed that people were basically all the same, and thus all entitled to liberty—that led Carlyle to label economics “the dismal science.”
It seems from your description that part of the problem is that the same body invents projects for itself to work on. Do you think things would be significantly improved if, after coming up with a research project, they had to invite external bids for the project, and only do it in-house if they won the tendering process? Perhaps this would be prohibitively hard to implement in practice.
This was a really interesting article on a subject I’d never heard of before, thanks very much. I assume similar issues affect government research organisations in other countries as well.
When asking the person to rephrase their comment, it can be useful to suggest a rewrite yourself.
Example: Someone noticed a commenter who appeared to be name calling another person. This is how they might have rewritten the comment: “I have this point of view because of this reason. I see other people with this different approach and I find it odd because it seems so much in conflict with what I’ve learned. I wonder how they got to that conclusion.”
I found this suggestion kind of surprising upon re-reading. Do you have experience of it working well? I worry it could easily come across as somewhat patronising.
Is there any legal reason the OP couldn’t PayPal money to someone else who then makes a donation on his behalf? I agree their accepting PayPal is the ideal solution, but maybe this is an acceptable short-term workaround.
Nobel Cause Corruption
Is this about how the Peace Prize is given out to either warmongers or ineffective activists rather than professional diplomats and international supply chain managers?
I don’t see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.
It seems a lot depends on how you group together things into causes then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue) or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an even larger, and even more inclusive problem for it to be part of.
A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don’t necessarily think that malaria and AI risk aren’t important, they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
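The contrast between the two decision rules can be made concrete with a toy calculation. Every number below is invented purely for illustration:

```python
# Toy comparison of the two decision rules; all numbers are hypothetical.
causes = {
    "bednet distribution": {"p": 0.9, "value_if_success": 1_000},
    "replace global economic system": {"p": 1e-7, "value_if_success": 10**9},
}

def expected_value(c):
    # Probability-weighted value: the standard EA optimization target.
    return c["p"] * c["value_if_success"]

def value_conditional_on_success(c):
    # The alternative rule: ignore the probability term entirely.
    return c["value_if_success"]

for name, c in causes.items():
    print(name, expected_value(c), value_conditional_on_success(c))
```

With these (made-up) numbers the rankings flip: expected value puts bednets first, while value-conditional-on-success puts the systemic cause first, which matches the pattern the diagnosis describes.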
I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles, so we can respond in a targeted manner that addresses the model behind their objections, rather than one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.
EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant as they correspond to three factors which can be multiplied together. You’re basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems perhaps a little bit of a coincidence that the factor they are missing just happens to correspond to one of the terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition just really captures something fundamental to all people’s thought processes, in which case this is not so much of a surprise.
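The multiplicative structure of that decomposition can be sketched in a few lines. The framing and all the numbers below are hypothetical, in the style of the 80,000 Hours framework rather than taken from anywhere in particular:

```python
# Sketch of the Scope x Tractability x Neglectedness decomposition.
# The units telescope, so the product is marginal cost-effectiveness.
def marginal_good_per_dollar(scope, tractability, neglectedness):
    # scope:         good done per fraction of the problem solved
    # tractability:  fraction of the problem solved per doubling of resources
    # neglectedness: doublings of resources bought per extra dollar
    return scope * tractability * neglectedness

# A critic who ignores (or treats as unquantifiable) the last two factors
# effectively ranks causes by `scope` alone, however intractable or
# crowded they are.
print(marginal_good_per_dollar(1e6, 0.01, 1e-7))
```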
But more importantly I think this theory seems to give some incorrect predictions about cause focus. If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as being too small a scale an issue to warrant much concern.
I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I’m not sure how one would cash out the limits on ‘atrocious’ views in a principled manner. To a truly committed longtermist it is plausible that any non-longtermist view is atrocious!
This is an interesting idea. You might need to change the design a bit; my impression is that the experiment focused on getting people to donate vs not donating, whereas the concern with longtermism is more about prioritisation between different donation targets. Someone’s decision to keep the money wouldn’t necessarily mean they were being short-termist: they might be going to invest that money, or they might simply think that the (necessarily somewhat speculative) longtermist charities being offered were unlikely to improve long-term outcomes.
As always, thanks very much for writing up this detailed report. I really appreciate the transparency and insight into your thought processes, especially as I realise doing this is not necessarily easy! Great job.
(It’s possible that I might have some more detailed comments later, but in case I don’t I didn’t want to miss the chance to give you some positive feedback!)
People do bring this up a fair bit—see for example some previous related discussion on Slatestarcodex here and the EA forum here.
I think most AI alignment people would be relatively satisfied with an outcome where our control over AI outcomes was as strong as our current control over corporations: optimisation for a criterion that requires continual human input from a broad range of people, while keeping humans in the loop of decision making inside the optimisation process, and with the ability to impose additional external constraints at run-time (regulations).
Thanks for the effort that went into this post. However, I thought there was a conspicuous lack of any discussion of Optimal Taxation Theory.
Quoting from Mankiw’s excellent review article, we can see why this part of economics is highly relevant to the issue: it is directly concerned with what type of tax system maximises utility:
The standard theory of optimal taxation posits that a tax system should be chosen to maximize a social welfare function subject to a set of constraints. The literature on optimal taxation typically treats the social planner as a utilitarian: that is, the social welfare function is based on the utilities of individuals in the society. … one would not go far wrong in thinking of the social planner as a classic “linear” utilitarian.
I’m not sure I could put it better than he does, so I hope you forgive the repeated quotations. One of the main findings of this field is that taxes on capital should be zero:
Perhaps the most prominent result from dynamic models of optimal taxation is that the taxation of capital income ought to be avoided. This result, controversial from its beginning in the mid-1980s, has been modified in some subtle ways and challenged directly in others, but its strong underlying logic has made it the benchmark.
Why? There are several reasons, and I encourage you to read the whole article, but the third justification he lists should be especially appealing to longtermist EAs: capital taxation reduces investment, which makes everyone poorer in the long run: even those who do not own any capital.
A third intuition for a zero capital tax comes from elaborations of the tax problem considered by Frank Ramsey (1928). In important papers, Chamley (1986) and Judd (1985) examine optimal capital taxation in this model. They find that, in the short run, a positive capital tax may be desirable because it is a tax on old capital and, therefore, is not distortionary. In the long run, however, a zero tax on capital is optimal. In the Ramsey model, at least some households are modeled as having an infinite planning horizon (for example, they may be dynasties whose generations are altruistically connected as in Barro, 1974). Those households determine how much to save based on their discounting of the future and the return to capital in the economy. In the long-run equilibrium, their saving decisions are perfectly elastic with respect to the after-tax rate of return. Thus, any tax on capital income will leave the after-tax return to capital unchanged but raise the pre-tax return to capital, reducing the size of the capital stock and aggregate output in the economy. This distortion is so large as to make any capital income taxation suboptimal compared with labor income taxation, even from the perspective of an individual with no savings. [emphasis added]
There has been a lot of work on the subject since then—for example here and here—but I think of Chamley-Judd as being a core result that the rest of the field is responding to. Some find that capital taxes should be positive or high, and some find that they should be negative—that we should subsidise investment—but the negative effects of capital taxes on investment, growth and aggregate welfare are clearly an important topic that cannot be dispensed with without comment!
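The steady-state logic in the quoted passage can be illustrated numerically. The Cobb-Douglas technology and every parameter value below are assumptions chosen purely for illustration, not taken from Mankiw or the article:

```python
# Numerical sketch of the long-run Chamley-Judd logic: in a Ramsey steady
# state the after-tax return is pinned at the households' discount rate rho,
# so a capital tax tau raises the required pre-tax return and shrinks the
# capital stock and output. Cobb-Douglas production y = A * k**alpha;
# all parameter values are hypothetical.
alpha, A, delta, rho = 0.33, 1.0, 0.05, 0.04

def steady_state(tau):
    # Steady-state condition: (1 - tau) * (alpha*A*k**(alpha-1) - delta) = rho
    gross_return = delta + rho / (1 - tau)          # required gross pre-tax return
    k = (alpha * A / gross_return) ** (1 / (1 - alpha))
    y = A * k ** alpha                              # aggregate output
    return k, y, gross_return - delta               # net pre-tax return

for tau in (0.0, 0.2, 0.4):
    k, y, r = steady_state(tau)
    print(f"tau={tau:.1f}  k*={k:.2f}  y*={y:.2f}  net pre-tax return={r:.3f}")
```

Running this shows the mechanism in the quotation: the after-tax return stays at rho regardless of tau, so the tax falls entirely on the size of the capital stock and hence on output.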
The above is concerned with capital taxation, but corporate taxes specifically are I think even worse. They essentially function as capital taxation, but typically allow interest expense to be deducted, hence distorting financing decisions away from equity and towards debt—contributing to systemic risk. (This problem was partly addressed in the US by the 2017 tax reform). To the extent that they only apply to legal corporations, and not other types of entity, they also distort organisational choice, which is also bad.
As a result, it seems that corporate taxes are harmful, and it would be better for the world (and the long term future) if they did not exist. Unfortunately they do exist—probably due to exactly the problems with institutional decision making that longtermist EAs are concerned about (e.g. short planning horizons, high discount rates, and capture by special interests). Fortunately, international tax competition provides something of a remedy, by encouraging countries to lower their corporate taxes closer to the ideal level. Contra your suggestion that it ‘damages both “winners” and losers’, it acts as a beneficial check on the ability of countries to institute harmful policies. We should be supporting tax havens and praising their effects, not seeking to destroy them.
Despite having a section on ‘Objections’, the article does not really address this argument. You do sort of get at this issue here:
Tax havens are necessary structures in encouraging investment in developing countries. …
But the response misses the point:
Response: Agreed -- developing countries need to build both legal and tax system capacity. Development Financing Institutes and other investors require developing countries to honour and enforce contracts and to refrain from arbitrary seizure of assets.
Getting rid of tax havens degrades our ability to resist arbitrary seizure of assets. This is no small deal—many of the worst disasters in history have been intimately tied to governments’ seizures of assets and the resultant damage to productive capacity. If we get rid of one check on this problem, we should have something else in place that can serve a similar function. The mere threat of losing access to financial markets for a while is insufficient. There are possible alternatives—once upon a time the west used gunboat diplomacy to this effect—but we should not remove our current solution without first instituting a new one.
Indeed, I think this article actually showcases the problem to a small degree. You write:
[tax havens] cost governments worldwide at least $500B/year in lost tax revenue
It is true that current investments, if subject to a higher level of taxation, would lead to higher tax revenues for governments (in the short run). But these investments were made by individuals and companies who were expecting to pay lower taxes! If taxes had been higher, fewer of these investments would have been made. To point out now that there is a lot of capital out there that could be taxed more if we changed the rules is precisely the sort of ex post asset seizure that people are worried about.
This section also sort of hints at the problem:
Tax havens promote economic growth in high-tax countries, especially those located near tax havens. US multinationals’ use of tax havens shifts tax revenue from foreign governments to the US by reducing the foreign tax credits they claim against US tax payable. As a result of the 1996 Puerto Rico tax haven phaseout mentioned above, employment by affected firms dropped not just in Puerto Rico, but in the US as a whole; affected firms reduced investment globally.
But again the response misunderstands:
Response: If curbing tax havens reduces growth and taxes in developed countries for the benefit of developing countries, that is likely a trade-off many EAs would be willing to make (see below). Abbott Laboratories and other multinationals affected by the Puerto Rico phaseout may have reduced global investment, but increased investment and jobs in developing countries such as India. Given that US dollars go a lot further in less developed countries, a reduction in global investment by specific firms could also reflect better value for money.
The problem is not so much that getting rid of tax havens will reduce investment in the west specifically, but that this will result in a global increase in effective tax rates, and as such will reduce investment globally.