AppliedDivinityStudies
I agree with every claim made in this paper. And yet, its publication strikes me as odd and inappropriate.
Consider the argument from Agnes Callard that philosophers should not sign petitions. She writes: “I am not saying that philosophers should refrain from engaging in political activity; my target is instead the politicization of philosophy itself. I think that the conduct of the profession should be as bottomless as its subject matter: If we are going to have professional, intramural discussions about the ethics of the profession, we should do so philosophically and not by petitioning one another. We should allow ourselves the license to be philosophical all the way down.” https://www.nytimes.com/2019/08/13/opinion/philosophers-petitions.html
The article in question here is not exactly a petition, but it’s not a research paper either. Had it not been authored by so many distinguished names, it would not have been deemed fit for publication. By its own admission, the purpose of this article is not to make an original research contribution. Rather, its purpose is to claim that “the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research”.
Is this a good principle to publish by? Is the role of philosophers in the near-future to sign off in droves on many-authored publications, all for the sake of shifting the focus of attention?
Of course philosophers should refute the arguments they disagree with. But that doesn’t seem to be what’s occurring here.
This risks being an overly heated debate, so I’ll stop there. I would just ask you to consider whether or not this is what the practice of philosophy ought to look like, and whether it constitutes a desirable precedent for academic publishing.
Separating this question from my main comment to avoid confusion.
Your medium post reads: “Tyler Cowen, calling for faster technological growth for a better future, dismissed the Repugnant Conclusion as a constraint: “I say full steam ahead.””
Linking to this MR post: https://marginalrevolution.com/marginalrevolution/2018/08/preface-stubborn-attachments-book-especially-important.html
The MR post does not mention the Repugnant Conclusion, nor does it contain the words “full steam ahead”. Did you perhaps link to the wrong post? I searched the archives briefly, but was unable to find an MR post that dismisses the Repugnant Conclusion: https://marginalrevolution.com/?s=repugnant+conclusion
This is a super interesting exercise! I do worry how much it might bias you, especially in the absence of equally rigorously evaluated alternatives.
Consider the multiple stage fallacy: https://forum.effectivealtruism.org/posts/GgPrbxdWhyaDjks2m/the-multiple-stage-fallacy
If I went through any introductory EA work, I could probably identify something like 20 claims, all of which must hold for the conclusions to have moral force. It would then feel pretty reasonable to assign each of those claims somewhere between 50% and 90% confidence.
That all seems fine, until you start to multiply it out. 70%^20 is 0.08%. And yet my actual confidence in the basic EA framework is probably closer to 50%. What explains the discrepancy?
Lack of superior alternatives. I’m not sure if I’m a moral realist, but I’m also pretty unsure about moral nihilism. There’s lots of uncertainty all over the place, and we’re just trying to find the best working theory, even if it’s overall pretty unlikely. As Tyler Cowen once put it: “The best you can do is to pick what you think is right at 1.05 percent certainty, rather than siding with what you think is right at 1.03 percent. ”
Ignoring correlated probabilities
Bias towards assigning reasonable sounding probabilities
Assumption that the whole relies on each detail. E.g. even if utilitarianism is not literally correct, we may still find that pursuing a Longtermist agenda is reasonable under improved moral theories
Low probabilities are counter-acted by really high possible impacts. If the probability of longtermism being right is ~20%, that’s still a really really compelling case.
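To make the multiplication concrete, here is a toy sketch of how the naive independent product from above diverges from a model with correlated claims. All numbers here are illustrative, not estimates I actually endorse:

```python
# Toy illustration of the multiple stage fallacy: multiplying many
# independent-seeming confidences drives the joint probability toward zero,
# while a model with one shared underlying factor does not.

n_claims = 20
confidence = 0.70

# Naive approach: treat all 20 claims as independent.
naive_joint = confidence ** n_claims
print(f"Naive independent product: {naive_joint:.4%}")  # ~0.0798%

# Correlated approach: suppose the claims share a single underlying
# "worldview" factor. If the worldview is right (say 60% chance), each
# claim holds with 95% probability; if not, each holds with only 30%.
p_worldview = 0.60
correlated_joint = (p_worldview * 0.95 ** n_claims
                    + (1 - p_worldview) * 0.30 ** n_claims)
print(f"Correlated joint probability: {correlated_joint:.4%}")  # ~21.5%
```

The same 20 claims with comparable average confidence yield a joint probability hundreds of times higher once you stop pretending the claims are independent, which is one way to reconcile the 0.08% product with a ~50% gut-level confidence.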
I think the real question is, selfishly speaking, how much more do you gain from playing video games than from working on longtermism? I play video games sometimes, but find that I have ample time to do so in my off hours. Playing video games so much that I don’t have time for work doesn’t sound pleasurable to me anyway, although you might enjoy it for brief spurts on weekends and holidays.
Or consider these notes from Nick Beckstead on Tyler Cowen’s view: “his own interest in these issues is a form of consumption, though one he values highly.” https://drive.google.com/file/d/1O—V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view
I received a nice reply from Dean which I’ve asked if I can share. Assuming he says yes, I’ll have a more thought out response to this point soon.
Here are some quick thoughts: There are many issues in all academic fields, the vast majority of which are not paid the appropriate amount of attention. Some are overvalued, some are unfairly ignored. That’s too bad, and I’m very glad that movements like EA exist to call more attention to pressing research questions that might otherwise get ignored.
What I’m afraid of is living in a world where researchers see it as part of their charter to correct each of these attentional inexactitudes, and do so by gathering bands of other academics to many-author a paper which basically just calls for a greater/lesser amount of attention to be paid to some issue.
Why would that be bad?
It’s not a balanced process. Unlike the IGM Experts Panel, no one is being surveyed and there’s no presentation of disagreement or distribution of beliefs over the field. How do we know there aren’t 30 equally prominent people willing to say the Repugnant Conclusion is actually very important? Should they go out and many-author their own paper?
A lot of this is very subjective: you’re just arguing that an issue receives more/less attention than is merited. That’s fine as a personal judgement, but it’s hard for anyone else to argue against at the object level. This risks politicization.
There are perverse incentives. I’m not claiming that’s what’s at play here, but it’s a risk this precedent sets. When academics argue for the (un)importance of various research questions, they are also arguing for their own tenure, departmental funding, etc. This is an unavoidable part of the academic career, but it should be limited to careerist venues, not academic publications.
Again, those are some quick thoughts from an outsider, so I wouldn’t attach too much credence to them. But I hope that helps explain why this strikes me as somewhat perilous.
Once shared, I think Dean’s response will show that my concerns are, in practice, not very serious.
Thanks Dean! Good to hear from you.
I hope you don’t feel like I’m misrepresenting this paper. To be clear, I am referring to “What Should We Agree on about the Repugnant Conclusion?”, which includes the passages:
“We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.”
“It is not simply an academic exercise, and we should not let it be governed by undue attention to one consideration. ”
That is from the introduction and conclusion. I’m not sure if that constitutes the “main claim”. I may have been overreaching to say that it “basically” only serves as a call for less attention. As I noted in the comment, my intention was never to lend too much credence to that particular claim.
I fully agree with your points on the interdisciplinarity of population ethics and the unavoidability of incentives.
That’s a good way of framing it. I absolutely agree that individuals and groups should reflect on whether or not their time is being spent wisely.
Here are some possible failure modes. I am not saying that any of these are occurring in this particular situation. As a naive outsider looking in, this is merely what springs to mind when I consider what might happen if this type of publishing were to become commonplace.
-
Imagine I am a mildly prominent academic. One day, a colleague sends me a draft of a paper, asking if I would like to co-author it. He tells me that the other co-authors include Yew-Kwang Ng, Toby Ord, Hilary Greaves and other superstars. I haven’t given the object-level claims much thought, but I’m eager to associate with high-status academics and get my name on a publication in Utilitas. I go ahead and sign off.
-
Imagine I am a junior academic. One day, I have an insight that may lead to an important advance in population ethics, but it relies on some discussion of the Repugnant Conclusion. As I discuss this idea with colleagues, I’m directed to this many-authored paper indicating that we should not pay too much attention to the Repugnant Conclusion. I don’t take issue with any of the paper’s object-level claims, I simply believe that my finding is important whether or not it’s in a subfield that has received “too much focus”. My colleagues have no opinion on the matter at hand, but keep referring me to the many-authored paper anyway, mumbling something about expert consensus. In the end, I’m persuaded not to publish.
-
Imagine I am a very prominent academic with a solid reputation. I now want to raise more grant funding for my department, so I write a short draft making the claim that my subfield has received too little focus. I pass this around to mildly prominent academics, who sign off on the paper in order to associate with me and get their name on a publication in Utilitas. With 30 prominent academics on the paper, no journal would dare deny me publication.
Again, my stance here is not as an academic. These are speculative failure modes, not real scenarios I’ve seen, and certainly not real accusations I’m making of the specific authors in question here. My goal is to express what I believe to be a reasonable discomfort, and seek clarification on how the academic institutions at play actually function.
-
Hey Jason, I share the same thoughts on pascal-mugging type arguments.
Having said that, The Precipice convincingly argues that x-risk this century is around 1/6, which is really not very low. Even if you don’t totally believe Toby, it seems reasonable to put the odds at that order of magnitude, and it shouldn’t fall into the 1e-6 type of argument.
I don’t think the Deutsch quotes apply either. He writes “Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology”.
That might be true when it comes to warring human civilizations, but not when it comes to global catastrophes. In the past, there was no way to say “let’s not move on to the bronze age quite yet”, so any individual actor who attempted to stagnate would be dominated by more aggressive competitors.
But for the first time in history, we really do have the potential for species-wide cooperation. It’s difficult, but feasible. If the US and China manage to agree to a joint AI resolution, there’s no third party that will suddenly sweep in and dominate with their less cautious approach.
As Peter notes, I’ve written about the issue of x-risk within Progress Studies at length here: https://applieddivinitystudies.com/moral-progress/
I’ve gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.
For what it’s worth, I do think there are compelling arguments, I just haven’t seen them made elsewhere. For example:
If the US/UK research community doesn’t progress rapidly in AI development, we may be overtaken by less careful actors
I see myself as straddling the line between the two communities. More rigorous arguments at the end, but first, my offhand impressions of what I think the median EA/XR person believes:
Ignoring XR, economic/technological progress is an immense moral good
Considering XR, economic progress is somewhat good, neutral at worst
The solution to AI risk is not “put everything on hold until we make epistemic progress”
The solution to AI risk is to develop safe AI
In the meantime, we should be cautious of specific kinds of development, but it’s fine if someone wants to go and improve crop yields or whatever
As Bostrom wrote in 2003: “In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.”
“However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.” https://www.nickbostrom.com/astronomical/waste.html
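Bostrom’s trade-off can be sketched as back-of-the-envelope arithmetic. The magnitudes below are mine, chosen only to illustrate the shape of the argument, not taken from his paper:

```python
# Back-of-the-envelope version of Bostrom's trade-off (toy magnitudes).
# If value is roughly proportional to the length of the colonized future,
# a delay of d years out of a T-year future costs about d/T of total value,
# while a reduction of extinction risk by r saves about r of total value.

T = 1e10               # usable future in years (galactic lifespans run to billions)
delay_years = 1e7      # a 10-million-year delay
risk_reduction = 0.01  # one percentage point of existential risk

cost_of_delay = delay_years / T   # fraction of expected future value lost
gain_from_risk = risk_reduction   # fraction of expected future value saved

print(cost_of_delay)   # 0.001
print(gain_from_risk)  # 0.01
```

On these (made-up) numbers the risk reduction is worth ten times the delay, which is why the consideration of risk trumps the consideration of opportunity cost: any delay short enough for us to realistically cause is tiny relative to the future it protects.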
With regards to poverty reduction, you might also like this post in favor of growth: http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-innovation.html
Good to hear!
In the abstract, yes, I would trade 10,000 years for 0.001% reduction in XR.
In practice, I think the problem with this kind of Pascal Mugging argument is that it’s really hard to know what a 0.001% reduction looks like, and really easy to do some fuzzy Fermi estimate math. If someone were to say “please give me one billion dollars, I have this really good idea to prevent XR by pursuing Strategy X”, they could probably convince me that they have at least a 0.001% chance of succeeding. So my objections to really small probabilities are mostly practical.
Thanks for clarifying, the delta thing is a good point. I’m not aware of anyone really trying to estimate “what are the odds that MIRI prevents XR”, though there is one SSC sort of on the topic: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
I absolutely agree with all the other points. This isn’t an exact quote, but from his talk with Tyler Cowen, Nick Beckstead notes: “People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later… the philosophical side of this seems like ineffective posturing.
Tyler wouldn’t necessarily recommend that these people switch to other areas of focus because people’s motivation and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly.” https://drive.google.com/file/d/1O—V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view
That’s a bit harsh, but this was in 2014. Hopefully Tyler would agree efforts have gotten somewhat more serious since then. I think the median EA/XR person would agree that there is probably a need for the movement to get more hands on and practical.
Re: safety for something that hasn’t been invented: I’m not an expert here, but my understanding is that some of it might be path dependent. I.e., research agendas aim to result in particular kinds of AI, and safety is not necessarily a feature you can just add on later. But it doesn’t sound like there’s a deep disagreement here, and in any case I’m not the best person to try to argue this case.
Intuitively, one analogy might be: we’re building a rocket, humanity is already on it, and the AI Safety people are saying “let’s add life support before the rocket takes off”. The exacerbating factor is that once the rocket is built, it might take off immediately, and no one is quite sure when this will happen.
You might be familiar with Bostrom’s Fable of the Dragon Tyrant https://www.nickbostrom.com/fable/dragon.html
And of course, Yudkowsky’s fiction, while not exactly EA, was inspiring to many people.
In some ways, the EA creed requires being against empathy in an important way. We can’t just care for those close to us, or those with sympathetic stories. But of course that kind of impartiality is also a story. So at the very least, fiction is useful as a kind of reverse mind-control or intuition pump.
For what it’s worth, in this particular instance, I don’t find “impartiality” to be a useful source of emotional motivation. Working on animal welfare for example, you might find it more helpful to develop selective empathy post-hoc.
That sounds silly, but it’s basically just the reverse of what people typically do. Normally we form emotional judgements and then rationalize them after the fact; there’s no reason you can’t do the opposite.
Thanks for these notes! I found the chapter on Fanaticism notable as well. The authors write:
A better response is simply to note that this problem arises under empirical uncertainty as well as under moral uncertainty. One should not give 0 credence to the idea that an infinitely good heaven exists, which one can enter only if one goes to church; or that it will be possible in the future through science to produce infinitely or astronomically good outcomes. This is a tricky issue within decision theory and, in our view, no wholly satisfactory solution has been provided. But it is not a problem that is unique to moral uncertainty. And we believe whatever is the best solution to the fanaticism problem under empirical uncertainty is likely to be the best solution to the fanaticism problem under moral uncertainty. This means that this issue is not a distinctive problem for moral uncertainty.
I agree with their meta-argument, but it is still a bit worrying. Even if you reduce the unsolvable problems of your field to unsolvable problems in another field, I’m still left feeling concerned that we’re missing something important.
In the conclusion, the authors call for more work on really fundamental questions, noting:
But it’s plausible that the most important problem really lies on the meta-level: that the greatest priority for humanity, now, is to work out what matters most, in order to be able to truly know what are the most important problems we face.
Moral atrocities such as slavery, the subjection of women, the persecution of non-heterosexuals, and the Holocaust were, of course, driven in part by the self-interest of those who were in power. But they were also enabled and strengthened by the common-sense moral views of society at the time about what groups were worthy of moral concern.
Given the importance of figuring out what morality requires of us, the amount of investment by society into this question is astonishingly small. The world currently has an annual purchasing-power-adjusted gross product of about $127 trillion. Of that amount, a vanishingly small fraction—probably less than 0.05%—goes to directly addressing the question: What ought we to do?
I do wonder, given the historical examples they cite, whether purely philosophical progress was the limiting factor. Mary Wollstonecraft and Jeremy Bentham made compelling arguments for women’s rights in the 1700s, but it took another couple of hundred years for progress to occur in legal and socioeconomic spheres.
Maybe it’s a long march, and progress simply takes hundreds of years. The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can’t occur in isolation. We didn’t give up slaves until it was economically convenient to do so, and likely won’t give up meat until we have cost and flavor competitive alternatives.
It’s tempting to wash away our past atrocities under the guise of ignorance, but I’m worried humanity just knowingly does the wrong thing.
Thanks for the writeup Holden, I agree that this is a useful alternative to the 80k approach.
On the conceptual research track, you note “a year of full-time independent effort should be enough to mostly reach these milestones”. How do you think this career evolves as the researcher becomes more senior? For example, Scott Alexander seems to be doing about the same thing now as he was doing 8 years ago. Is the endgame for this track simply that you become better at doing a similar set of things?
Thanks! I think that’s a good summary of possible views.
FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven’t been quite ready to express them publicly, and I don’t think they’re endorsed by other members of the Progress community.
Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I’m heavily paraphrasing there.
He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason to both a) work very quickly on developing safe AI and b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.
Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and if one were put together, we would find it surprisingly aligned with XR aims. I won’t speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:
Ramp up high skilled immigration (especially from China, especially in AI, biotech, EE and physics) by expanding visa access and proactively recruiting scientists
I mostly agree, though I would add: spending a couple years at Google is not necessarily going to be super helpful for starting a project independently. There’s a pretty big difference between being good at using Google tooling and making incremental improvements on existing software versus building something end-to-end and from scratch. That’s not to say it’s useless, but if someone’s medium-term goal is doing web development for EA orgs, I would push working at a small high-quality startup. Of course, the difficulty is that those are harder to identify.
Hey thanks for asking, it’s the paragraphs from “Looking back” to “raw base rates to consider”
In some ways this feels like a silly throwback, on the other hand I think it is actually more worth reading now that we’re not caught up in the heat of the moment. More selfishly, I didn’t post on EA Forum when I first wrote this, but have since been encouraged to share old posts that might not have been seen.