Really glad you did this. I see some similarities with my work as a journalist. I’ve previously argued that journalism has never attempted systematic evaluation of government, e.g. department by department, so it’s fantastic to see someone attempt this. Your problems regarding domain knowledge, slow or unhelpful responses from officials, inconsistent transparency, etc. are spot on and well known to reporters. Keep up the good work!
jonathanstray
A technical note: Bayesianism is not logic, statistics is not rationality
The limits of RCTs in international development
Several of these might be summed up under the heading “high risk.” There is a notion that this is exactly what philanthropy (as opposed to governments) ought to be doing.
One area I think hits many of these: global income inequality.
Well, Russell believed it could be developed through education. One exercise which can help is comparing an abstract number of people to something that relates to daily experience, such as the number of people in your school or your city.
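That comparison exercise can be sketched in a few lines. The reference-group populations below are placeholders I've invented for illustration, not figures from the comment:

```python
# Translate an abstract count of people into familiar units,
# e.g. "how many of my school / my city is that?"

FAMILIAR_UNITS = {          # placeholder populations, adjust to your own
    "your classroom": 30,
    "your school": 1_000,
    "your city": 500_000,
}

def humanize_count(n):
    """Express n people in terms of everyday reference groups."""
    return {name: round(n / size, 1) for name, size in FAMILIAR_UNITS.items()}

print(humanize_count(1_500_000))
# 1.5 million people is about 3 of "your city" under these placeholder sizes
```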
Here’s a similar scale that was developed to communicate risk values.
Bertrand Russell on statistical empathy
P-hacking explained through an interactive visualization
MacAskill discusses this in a section titled “international labor mobility” but does not mention “open borders” or draw the distinction you have. He writes:
“Increased levels of migration from poor to rich countries would provide substantial benefits for the poorest people in the world, as well as substantial increases in global economic output. However, almost all developed countries pose heavy restrictions on who can enter the country to work. … Tractability: Not very tractable. Increased levels of immigration are incredibly unpopular in developed countries, with the majority of people in Germany, Italy, the Netherlands, Norway, Sweden, and the United Kingdom favoring reduced immigration.”
In “Doing Good Better” MacAskill rates labor mobility as “intractable.” I agree it’s difficult, but I think this is a specific example of EA’s broad blindness to the mechanics of political change. All of the issues you have raised are fundamentally political problems, not technical problems, and would require political strategies, for which we will not have evidence from RCTs.
This is a weakness of the “progressive” philanthropic tradition in general, which tends to think in terms of technical solutions to specific problems. It has a lot less to say about the broader shifts in values and networks that enable high-level political change.
More on that: http://www.insidephilanthropy.com/home/2015/7/22/is-too-much-funding-going-to-social-entrepreneursand-too-lit.html
In other words, I am glad to see this post. I think we need to be looking in these sorts of directions.
“The fact there seems to be missing the way by which effective altruism determines which moral goals are worth pursuing … That seems to be the role of meta-ethics in effective altruism.”
Maybe the answer is not to be found in meta-ethics or in analysis generally, but in politics, that is, the raw realities of what people believe and want at any given moment, and how consensus forms or doesn’t.
In other words, I think the answer to “what goals are worth pursuing” is, broadly, ask the people you propose to help what it is they want. Luckily, this happens regularly in all sorts of ways, including global scale surveys. This is part of what the value of “democracy” means to me.
A man named Horst Rittel—who also coined “wicked problem”—wrote a wonderful essay on the relationship between planning for solving social problems and politics which seems appropriate here http://www.cc.gatech.edu/~ellendo/rittel/rittel-reasoning.pdf
tl;dr some kinds of knowledge are instrumental, but visions for the future are unavoidably subjective and political.
“EA has very different needs than much of the non-profit world.” In what way?
I also have to say that there is something very insider-y about this analysis. Much of the advice seems like it boils down to “don’t waste your time with non-EA people.”
If I understand you correctly I think you make two interesting points here:
1) the potential of EA as a political vehicle for financial charity, and 2) the current EA advice has to be the marginal advice.
When I wrote “isn’t that the fundamental claim of EA” I suppose more properly I am referring to the claims that 1) EA is a suitable moral philosophy 2) the consensus answers in the real existing EA community correspond to this philosophy. In other words that EA is, broadly speaking, “right” to do.
Yes. But then, shouldn’t all arguments about what is appropriate for EA’s to do generalize to what it is appropriate for everyone to do? Isn’t that the fundamental claim of the EA philosophy?
Here’s a completely different route for arguing that giving money may be one of the most effective possibilities for improving the lives of others.
Income inequality is at historic high levels, both globally and in the US (see e.g. http://www.networkideas.org/networkideas/pdfs/global_inequality_ortiz_cummins.pdf)
Income inequality is robustly correlated with unhappiness (see e.g. http://www.lisdatacenter.org/wps/liswps/614.pdf)
Therefore, there may be a large opportunity in income redistribution.
I realize this is not a quantitative analysis, partially because “happiness” is so difficult to quantify in a meaningful way. In particular I don’t know how to relate the various happiness measures in use to something like QALY (which suggests to me that QALY is not an ideal utilitarian metric.) Also, the correlational analyses could be muddled by confounders, meaning we could decrease inequality and still have a sad population for other reasons. However, I note that distributional issues have been at the center of politics for as long as there have been politics, so it’s something that humans seem to care about a lot.
Previous generations’ answers to the distributional problem have included e.g. democracy, pensions, Marxism, and universal health care. Advocating earning to give could be seen as an individual-level redistribution strategy. But one could also advocate for political reforms that might address these inequalities—they could have very large upside as well.
Ethical Fourier Transform
Doesn’t this all depend on assuming we are trying to maximize average happiness? That seems like a very questionable assumption to me. Rawls argued against it explicitly, for example. He phrased his arguments in terms of “fairness” and there are nice links here to the relationship between happiness and comparison to others. The mathematical implication is that we need some more sophisticated function which maps the distribution to a scalar. And then of course there are the non-consumption variables. If you’re a well-fed woman who is married to the man who raped you (see e.g. the issues surrounding article 308 of the Jordan criminal code) I don’t think total consumption is what matters to you...
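The point about mapping a distribution to a scalar can be made concrete. A minimal sketch with three candidate welfare functions; the functions and the example incomes are illustrative choices of mine, not something from the discussion above:

```python
# Three ways to collapse a happiness/income distribution to one number.
# The utilitarian mean, Rawlsian maximin, and an inequality-averse mean
# (a simplified Atkinson-style transform) can rank the same two
# societies differently.

def mean_welfare(xs):
    # Utilitarian: average well-being, blind to distribution.
    return sum(xs) / len(xs)

def maximin_welfare(xs):
    # Rawls: judge a society by its worst-off member.
    return min(xs)

def inequality_averse_welfare(xs, epsilon=2.0):
    # Atkinson-style: concave transform before averaging, so gains to
    # the poor count more than equal gains to the rich.
    transformed = [x ** (1 - epsilon) / (1 - epsilon) for x in xs]
    return sum(transformed) / len(transformed)

equal = [5, 5, 5, 5]       # modest but equal
unequal = [1, 2, 4, 17]    # higher total, very unequal

# The mean prefers the unequal society...
assert mean_welfare(unequal) > mean_welfare(equal)
# ...while maximin and the inequality-averse measure prefer the equal one.
assert maximin_welfare(equal) > maximin_welfare(unequal)
assert inequality_averse_welfare(equal) > inequality_averse_welfare(unequal)
```

Which scalar you pick is exactly the kind of value-laden choice the comment is pointing at; the math won’t make it for you.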
Relevant to the issue of identity: I think it’s telling that the empathetic advice here is described as “try ideological Turing tests” rather than “try to argue the other side convincingly,” which is a much older principle and much more generally understandable.
Should making EA legible to the majority of the world’s citizens, who are not and will never be computer scientists, be a goal? If so, we need to work on the language we use to discuss these issues.
Seems like a good project, but why rot13 the topics?
Bayesian stats is not the panacea of logic it is often held out to be; I say this as someone who practices statistics for the purpose of social betterment (see e.g. https://projects.propublica.org/surgeons/ for an example of what I get up to)
First, my experience is that quantification is really, really hard. Here are a few reasons why.
I have seen few discussions, within EA, of the logistics of data collection in developing countries, which is a HUGE problem. For example, how do you get people to talk to you? How do you know if they’re telling you the truth? These folks have often talked to wave after wave of well-meaning foreigners over their lives and would rather ignore or lie to you and your careful survey. The people I know who actually collect data in the field have all sorts of nasty things to say about the realities of working in fluid environments.
Even worse: for a great many outcomes there just ISN’T a way to get good indicator data. Consider the problem of attribution of outcomes to interventions. We can’t even reliably solve the problem of attributing a purchase to an ad in the digital advertising industry, where all actions are online and therefore recorded somewhere. How then do we solve attribution at the social intervention level? The answers revolve around things like theories of change and qualitative indicators, neither of which the EA community takes seriously. But often this is the ONLY type of evidence we can get.
Second, Bayesian stats is built entirely on a single equation that follows from the axioms of probability. All of this update, learning, rationality stuff is an interpretation we put on top of it. Andrew Gelman and Cosma Shalizi have the clearest exposition of this, from “Philosophy and the Practice of Bayesian Statistics”,
“A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.”
Bayesianism is not rationality. It’s a particular mathematical model of rationality. I like to analogize it to propositional logic: it captures some important features of successful thinking, but it’s clearly far short of the whole story.
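To make the “single equation” point concrete, here is that equation doing all the work, in a toy update. The numbers and the framing (a study of an intervention) are invented for illustration:

```python
# Bayes' rule: posterior is proportional to likelihood times prior.
# A toy update: how likely is it that an intervention "works" after
# observing a positive study result?

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) for a binary hypothesis."""
    numerator = likelihood_if_true * prior
    marginal = numerator + likelihood_if_false * (1 - prior)
    return numerator / marginal

# Prior belief that the intervention works: 30%.
# P(positive study | works) = 0.8; P(positive study | doesn't work) = 0.3.
posterior = bayes_update(0.30, 0.8, 0.3)
print(round(posterior, 3))  # prints 0.533
```

Everything beyond this arithmetic (what counts as a hypothesis, where the prior comes from, when to throw out the model and start over) is the interpretive layer Gelman and Shalizi are pointing at, and it falls outside the equation itself.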
We need much more sophisticated frameworks for analytical thinking. This is my favorite general-purpose approach, which applies to mixed quantitative/qualitative evidence, and was developed out of the study of cognitive biases at the CIA:
But of course this isn’t rationality either. It’s never been codified completely, and probably cannot be.
So here we go. EAs do not generally think seriously about political action. Is it time?