I am a PhD candidate in Economics at Stanford University. Within effective altruism, I am interested in broad longtermism, long-term institutions and values, and animal welfare. In economics, my areas of interest include political economy, behavioral economics, and public economics.
zdgroff
Full disclosure: I’m an ACE board member, though what I say here is in my personal capacity.
I’m very glad about a number of improvements to the eval process that are not obvious from this post. In particular, there are now numeric cost-effectiveness ratings that I found clarifying, overall explanations for each recommendation, and clearer delineation of the roles the “programs” and “cost-effectiveness” sections play in the reviews. I expect these changes to make recommendations more scope sensitive. This leaves me grateful for and confident in the new review framework.
- 27 Dec 2022 12:38 UTC; 16 points: comment on “Effective Animal Charity recommendations?”
It doesn’t seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they’re approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they’re ignoring.
Of course, there are examples (cf. behavioral economics) of systematic biases in markets. But even within behavioral economics, it’s fairly well known that persistent, large-scale biases in financial markets are hard to find.
Great post, and thanks for writing it. One note: if polarization is defined as “more extreme views on each issue” (e.g. more people wanting extremely high or extremely low taxes), then according to some research it does not seem to be happening. The sort of polarization happening in the U.S. is better characterized as ideological sorting. That is, views on any particular issue (abortion, affirmative action, gun control) don’t have more mass on the extremes than before, but the views within each political party are less mixed.
This is nonetheless important, and I don’t think it radically changes much of what you said. Affect toward the opposite party is still much more negative than before. But it might suggest we should be more concerned about the conflict between the parties itself (e.g. abusing constitutional norms, cancellation) and less concerned about their policies per se.
I just wanted to give major kudos for evaluating a prediction you made and very publicly sharing the results even though they were not fully in line with your prediction.
Thanks for writing this. I think it’s very valuable to be having this discussion. Longtermism is a novel, strange, and highly demanding idea, so it merits a great deal of scrutiny. That said, I disagree with the thesis and don’t currently find your objections to longtermism persuasive (although in one case I think they suggest a specific set of approaches to longtermism).
I’ll start with the expected value argument, specifically the note that the probabilities here are uncertain and therefore random variables, whereas in traditional EU they’re constant. To me, a charitable version of Greaves and MacAskill’s argument is that, taking the expectation over the probabilities times the outcomes, you have a large future in expectation. (What you need for the randomness of the probabilities to sink longtermism is for them to correlate inversely and strongly with the size of the future.) I don’t think they’d claim the probabilities are certain.
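To put the charitable version in symbols (a minimal sketch in my notation, not theirs): let $p$ be the uncertain probability that our actions secure a long-term payoff $V$. By the definition of covariance,

$$\mathbb{E}[pV] = \mathbb{E}[p]\,\mathbb{E}[V] + \operatorname{Cov}(p, V),$$

so the randomness of $p$ matters only through its mean unless $p$ covaries with $V$. A huge expected future survives any amount of noise in $p$ unless $\operatorname{Cov}(p, V)$ is strongly negative, which is exactly the inverse-correlation condition in the parenthetical above.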
Maybe the claim you want to make, then, is that we should treat random probabilities differently from certain probabilities, i.e. you should not “take expectations” over probabilities in the way I’ve described. The problem with this is that (a) alternatives to taking expectations over probabilities have been explored in the literature, and they have a lot of undesirable features; and (b) alternatives to taking expectations over probabilities do not necessarily reject longtermism. I’ll discuss (b), since it involves providing an example for (a).
(b) In economics at least, Gilboa and Schmeidler (1989) propose what’s probably the best-known alternative to EU when the probabilities are uncertain, which involves maximizing expected utility for the prior according to which utility is the lowest, sort of a meta-level risk aversion. They prove that this decision rule follows from a remarkably weak set of axioms. If you take this approach, it’s far from clear you’ll reject longtermism: more likely, you end up with a sort of longtermism focused on averting long-term suffering, i.e. focused on maximizing expected value according to the most pessimistic probabilities. There are a bunch of other approaches, but they tend to have similar flavors. So alternatives to EU may agree on longtermism and just disagree on its flavor.
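For concreteness, the Gilboa and Schmeidler rule (in my notation) evaluates each action $a$ by its worst-case expected utility over a set $C$ of priors and then maximizes:

$$a^* \in \arg\max_{a} \; \min_{p \in C} \; \mathbb{E}_p[u(a)].$$

Applied here: if $C$ includes priors under which the long-term future is astronomically bad, the maximizing action is plausibly whatever most improves those worst-case scenarios, which is where the suffering-focused flavor comes from.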
(a) Moving away from EU leads to a lot of problems. As I’m sure you know given your technical background, EU derives from a really nice set of axioms (the Savage axioms), and things go awry when you leave them. Al-Najjar and Weinstein (2009) offer a persuasive discussion of this (H/T Phil Trammell). For example, non-EU models imply information aversion. Now, a certain sort of information aversion might make sense in the context of longtermism: in line with your Popper quote, it might make sense to avoid information about the feasibility of highly specific future scenarios. But that’s not really the sort of information non-EU models imply aversion to. Instead, they imply aversion to information that would pull you toward the option you currently shun because of its ambiguity.
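To make the information aversion concrete, here is a minimal sketch using the standard three-color Ellsberg urn (the numbers are mine, not from the paper): an urn holds 30 red balls and 60 that are black or yellow in unknown proportion, so $P(R) = 1/3$ and $P(B) = b \in [0, 2/3]$. Compare a bet $f$ paying 1 on “red or yellow” with a bet $g$ paying 1 on “black or yellow”:

$$V(f) = \min_{b} \left( \tfrac{1}{3} + \tfrac{2}{3} - b \right) = \tfrac{1}{3}, \qquad V(g) = \min_{b} \left( b + \tfrac{2}{3} - b \right) = \tfrac{2}{3},$$

so the maxmin agent commits to $g$. Now offer a free signal revealing whether the ball is yellow before betting. Conditional on “not yellow” (updating each prior in $[0, 2/3]$ separately), the worst-case probability of red is $1/3$ while the worst-case probability of black is $0$, so at that point she switches to $f$. But the plan “take $f$ unless yellow” pays exactly like $f$, with ex-ante value $1/3 < 2/3$. Anticipating her own interim choice, she strictly prefers to refuse the free signal: she avoids learning precisely because it would push her toward the ambiguous option.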
So I don’t think we can leave EU behind for another approach to evaluating outcomes. The problems, to me, lie elsewhere. I think there are problems with the way we’re arriving at probabilities (inventing subjective ones that invite biases, and failing to adequately stick to base rates, for example). I also think there might be a point to be made about placing priors on unlikely conclusions, so that, for example, the conclusion of strong longtermism is strange enough that we should be disinclined to buy into it given the uncertainty in the probabilities feeding into the claim. But the approach itself seems right to me. I honestly spent some time looking for alternative approaches because of these last two concerns and came away thinking that EU is the best we’ve got.
I’d note, finally, that I take the utopianism point well and would like to see more discussion of it. Utopian movements have a sordid history, and Popper is spot-on. Longtermism doesn’t have to be utopian, though: avoiding really bad outcomes, or striving for a middling outcome, is not utopian. This seems to me to dovetail with my proposal in the last paragraph to improve our probability estimates. Sticking carefully to base rates and things we have some idea about seems a good way to avoid utopianism and its pitfalls. So I’d suggest a form of longtermism that is humble about what we know and strives for the least-bad empirical data possible, but I still think longtermism comes out on top.
>> There is nothing special about longtermism compared to any other big desideratum in this regard.
I’m not sure this is the case. E.g., Steven Pinker in Better Angels makes the case that utopian movements systematically tend to commit atrocities because the all-important end goal justifies anything in the medium term. I haven’t rigorously examined this argument and think it would be valuable for someone to do so, but much of longtermism in the EA community, especially of the strong variety, is based on something like a utopia.
One reason you might intuitively expect a relationship is that shorter-term impacts are typically more bounded; e.g., if thousands of American schoolchildren are getting suboptimal lunches, this obviously doesn’t justify torturing hundreds of thousands of people. With the strong longtermist claims, it’s much less clear that there’s any upper bound, so to draw a firm line against atrocities you end up relying on somewhat more convoluted reasoning (e.g. some notion of deontological restraint that isn’t completely absolute yet can withstand astronomical consequences, or a sketchy and loose notion that atrocities have an instrumental downside).
Advocacy for digital minds
Artificial Intelligence, Values and Reflective Processes, Effective Altruism
Digital sentience is likely to be widespread in the most important future scenarios. It may be possible to shape the development and deployment of artificially sentient beings in various ways, e.g. through corporate outreach and lobbying. For example, constitutions can be drafted or revised to grant personhood on the basis of sentience; corporate charters can include responsibilities to sentient subroutines; and laws regarding safe artificial intelligence can be tailored to consider the interests of a sentient system. We would like to see an organization dedicated to identifying and pursuing opportunities to protect the interests of digital minds. There could be one or multiple organizations. We expect foundational research to be crucial here; a successful effort would hinge on thorough research into potential policies and the best ways of identifying digital suffering.
Yeah, I agree the facile use of “white supremacy” here is bad, and I do want to keep ad hominems out of EA discourse. Thanks for explaining this.
I guess I still think it makes important enough arguments that I’d like to see engagement, though I agree it would be better said in a more cautious and less accusatory way.
Thanks for writing this post. I have similar concerns and am glad to see this composed. I particularly like the note about the initial design of space colonies. A couple things:
My sense is that the dominance of digital minds (which you mention as a possible issue) is actually the main reason many longtermists think factory farming is likely to be small relative to the size of the future. You’re right to note that this means future human welfare is also relatively unimportant, and my sense is that most would admit that. Humanity is instrumentally important, however, since it will create those digital minds. I do think it’s an issue that a lot of discussion of the future treats it as the future “of humanity” when that’s not really what it’s about. I suspect that part of this is just a matter of avoiding overly weird messaging.
It would be good to explore how your argument changes when you weight animals in different ways, e.g. by neuron count, since that [does appear to change things](https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive). I think we should probably take a variety of approaches and place some weight on each. There is a sort of Pascalian problem with the possibility that each animal mind has equal weight: it feels somewhat plausible but leads to wild and seemingly wrong conclusions (e.g. that it’s all about insect larvae). But in general, this seems like a central issue worth adjusting for.
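A rough way to see what’s at stake (my notation, purely illustrative): if group $s$ has population $N_s$, moral weight $w_s$, and average welfare $u_s$, the aggregate is

$$W = \sum_s N_s \, w_s \, u_s.$$

With equal weights ($w_s = 1$), the astronomical $N_s$ for invertebrates swamps everything else, which is how you get the insect-larvae conclusion; neuron-count weights shrink $w_s$ by orders of magnitude for small animals and can flip the ranking. Placing some credence on each scheme amounts to averaging $W$ over different choices of $w$.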
As I noted on the original post, I am grateful this dialogue is happening so respectfully this time around.
Thanks for writing this. I think about these sorts of things a lot. Given the title, do you know of examples of movements that did not start academic disciplines and appear to have suffered as a result?
The Global Priorities Institute and clusters of work around that do work in economics, including welfare economics. I’d also be curious to hear what you think they should do differently.
I’m really excited to see this and look into it. I’m working on some long-term persistence issues, and this is largely in line with my intuitive feel for the literature. I haven’t looked at the Church-WEIRDness one, though, and now I’m eager to read that one.
I found this informative:
>> Are you more funding- or talent-constrained?
>> Oscar: There are lots of researchers out there who would work on this if we offered them funding to do so.
>> Michelle: Wild Animal Initiative is primarily funding-constrained. Hiring can also be challenging, but not as much.
>> Peter: Funding-constrained. We have had to turn away talented people we didn’t have the funds to hire.
Given that most of the messaging in the EA community for a couple years has been that human capital constraints are greater than funding constraints, I was surprised to see this. I know there have been objections that this messaging is focused on longtermist and movement-building work and less representative of farmed animal advocacy, for example, but this is an update for me.
It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I’m not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.
One thing that’s sad and perhaps not obvious to people is that, as I understand it, Nathan Robinson was initially sympathetic to EA (and this played a role in his at-times vocal advocacy for animals). I don’t know that there’s much to be done about it; the course of events was perhaps inevitable. But that’s relevant context for other Forum readers who see this.
- 26 Sep 2022 14:48 UTC; 13 points: comment on “‘Defective Altruism’ by Nathan J. Robinson in Current Affairs”
Research institute focused on civilizational lock-in
Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism
One source of long-term risks, and of potential levers to positively shape the future, is the possibility that certain values or social structures get locked in, e.g. via global totalitarianism, self-replicating colonies, or the widespread dominance of a single set of values. Though there are organizations dedicated to working on risks of human extinction, we would like to see an academic or independent institute focused on other events that could have an impact on the order of millions of years or more. Are such events plausible, and which ones should be of most interest and concern? Such an institute might be similar in structure to FHI, GPI, or CSER, drawing on the social sciences, history, philosophy, and mathematics.
I had not read through the CEA mistakes page before (linked in your post), and I am very impressed with it. I wanted to note that I’m pleased, and kind of touched, that the page lists the neglect of animal advocacy at the 2015 and 2016 EAGs. I was one of the advocates who was unhappy, and I was not sure whether there was recognition of this, so it was really meaningful to see CEA admit it and detail the steps that were taken.
I believe 4 years is very conservative. I’m working on a paper, due in November, that should basically answer the question in part 1; suffice it to say I think the ballot measures should look many times more cost-effective than corporate campaigns.
I take 5%-60% as an estimate of how much of human civilization’s future value will depend on what AI systems do, but it does not necessarily exclude human autonomy. If humans determine what AI systems do with the resources they acquire and the actions they take, then AI could be extremely important, and humans would still retain autonomy.
I don’t think this really left me more or less concerned about losing autonomy over resources. It does feel like this exercise made it starker that there’s a large chance of AI reshaping the world beyond human extinction. It’s not clear how much of that means the loss of human autonomy. I’m inclined to think in rough, nebulous terms that AI will erode human autonomy over 10% of our future, taking 10% as a sort of midpoint between the extinction likelihood and the degree of AI influence over our future. I think my previous views would have been in that ballpark.
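(As a purely illustrative reading, with hypothetical numbers the exercise doesn’t pin down: an extinction likelihood of roughly $3\%$ and AI influence of roughly $30\%$, the middle of the 5%-60% range, give a geometric midpoint of $\sqrt{0.03 \times 0.30} \approx 10\%$. The 10% figure is doing that kind of coarse averaging, nothing more.)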
The exercise did lead me to think the importance of AI is higher than I previously did and the likelihood of extinction per se is lower (though my final beliefs place all these probabilities higher than the priors in the report).
I’m excited to see this! One thing I’d mention on the historian path and its competitiveness is that you could probably do a lot of this sort of work as an economic historian with a PhD in economics. Economic historians study everything from gender roles to religion and do ambitious, if controversial, quantitative analyses of long-term trends. While economists broadly may give little consideration to historical context, the field of economic history prides itself on caring about history for its own sake, so you can spend time doing traditional historian things, like working with archival documents (see the Preface to the Oxford Encyclopedia of Economic History for a discussion of the field’s norms).
The good thing here is that it probably offers better outside options, and potentially less competitiveness than a history PhD, given the upsides of an economics PhD. You could probably also do similar work in political science.
>> Our impression is that although many of these topics have received attention from historians (examples: 1, 2, 3, 4, 5), some are comparatively neglected within the subject, especially from a more quantitative or impact-focused perspective.
I’d also note that I don’t think some of these cites are history: 2 and 4 are written by anthropologists. (I think in the former case the author is sometimes classified as a biologist, psychologist, or economist too.)
I really do hope we have EAs studying history and fully support it, and I just wanted to give some closely related options!