I am a PhD candidate in Economics at Stanford University. Within effective altruism, I am interested in broad longtermism, long-term institutions and values, and animal welfare. In economics, my areas of interest include political economy, behavioral economics, and public economics.
A few things:
I do find these patterns when I look at a few different types of policies (referendums, legislation, state vs. Congress, U.S. vs. international), so there’s some reason to think it’s not just state referendums.
There’s a paper on repeals of executive orders that finds an even lower repeal rate there, but that doesn’t tell us the counterfactual (i.e., whether someone else would have enacted the same policy if the president in question had not).
There’s suggestive evidence that when policies are more negotiable, there’s less persistence. In my narrative/case study look at Congress, I find that failed policies often pass in a weaker form later on. There’s a similar result for taxes.
So my guess would be that the pattern here is qualitatively similar, but there’s probably a somewhat greater chance of winding up with a weaker form of the regulation you want later on than there is for failed referendum policies.
I think there are probably ways to tackle that but don’t have anything shovel-ready. I’d want to look at the general evidence on campaign spending and what methods have been used there, then see if any of those would apply (with some adaptations) to this case.
Thanks a lot! And good luck on the job market to you—let’s connect when we’re through with this (or if we have time before then).
How Long Do Policy Changes Matter? New Paper
I’m very glad to see you working and thinking about this—it seems pretty neglected within the EA community. (I’m aware of and agree with the thought that speeding up space settlement is not a priority, but making sure it goes well if it happens does seem important.)
Oh, that’s a good idea. I had thought of something quite different and broader, but this also seems like a promising approach.
Yeah, I think that would reduce the longevity in expectation, maybe by something like 2x. My research includes things that could hypothetically fall under congressional authority and occasionally do. (Anything could in principle fall under congressional authority, though some things might require a constitutional amendment.) So I don’t think this is dramatically out of sample, but I do think it’s worth keeping in mind.
The former, though I don’t have estimates of the counterfactual timeline of corporate campaigns. (I’d like to find a way to do that and have toyed with it a bit but currently don’t have one.)
I believe 4 years is very conservative. I’m working on a paper due in November that should basically answer the question in part 1, but suffice it to say I think the ballot measures should look many times more cost-effective than corporate campaigns.
From what I can tell, the climate change explanation seems to have the most support in the literature. I’m not sure how much the consensus that humans caused the megafauna extinctions (which I buy) generalizes to the extinction of other species in the Homo genus. Most of the Homo extinctions happened much earlier than the megafauna ones. But it could be—I have not given much thought to whether this consensus generalizes.
The other thing is that “extinction” sometimes happened in the sense that the species interbred with the larger population of Homo sapiens, and I would not count that as the relevant sort of extinction here.
Yeah, this is an interesting one. I’d basically agree with what you say here. I looked into it and came away thinking (a) it’s very unclear what the actual base rate is, but (b) it seems like it probably roughly resembles the general species one I have here. Given (b), I bumped up how much weight I put on the species reference class, but I did not include the human subspecies as a reference class here given (a).
From my exploration, it looked like there had been loose claims that many of them went extinct because of Homo sapiens, but this was probably not true in the relevant sense of “extinct”, except possibly in the cases of Neanderthals and Homo floresiensis. By the relevant sense of “extinct”, I mean dying off/ceasing to reproduce rather than interbreeding. This seems to be the best paper on the topic, concluding that climate change drove most of the extinctions: https://www.sciencedirect.com/science/article/pii/S2590332220304760
As that paper says, Homo sapiens may have contributed to the extinction of the Neanderthals. I found the evidence in the case of Homo floresiensis to be pretty rough. So my take was that out of ~18 or so species in the Homo genus, one might have gone extinct because of Homo sapiens (roughly 6%). That looks pretty similar to the means I take away from species extinctions (0.5-6%), but I felt it was too unclear to put a number on it that would add value.
Very strong +1 to all this. I honestly think it’s the most neglected area relative to its importance right now. It seems plausible that the vast majority of future beings will be digital, so it would be surprising if longtermism does not imply much more attention to the issue.
I take 5%-60% as an estimate of how much of human civilization’s future value will depend on what AI systems do, but that range does not necessarily exclude human autonomy. If humans determine what AI systems do with the resources they acquire and the actions they take, then AI could be extremely important, and humans would still retain autonomy.
I don’t think this really left me more or less concerned about losing autonomy over resources. It does feel like this exercise made it starker that there’s a large chance of AI reshaping the world beyond human extinction. It’s not clear how much of that means the loss of human autonomy. I’m inclined to think in rough, nebulous terms that AI will erode human autonomy over 10% of our future, taking 10% as a sort of midpoint between the extinction likelihood and the degree of AI influence over our future. I think my previous views would have been in that ballpark.
The exercise did lead me to think the importance of AI is higher, and the likelihood of extinction per se lower, than I previously thought (though my final beliefs place all these probabilities higher than the priors in the report).
AGI Catastrophe and Takeover: Some Reference Class-Based Priors
I guess I would think that if one wants to argue for democracy as an intrinsic good, that would get you global democracy (and global control of EA funds), and it’s practical and instrumental considerations (which, anyway, are all the considerations in my view) that cut against it.
Yes, that seems right.
It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I’m not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.
Well, I think MIC relies on some sort of discontinuity this century, and once we get into the range of precedented growth rates, the discontinuity looks less likely.
But we might not be disagreeing much here. It seems like a plausibly important update, but I’m not sure how large.
This is a valuable point, but I do think that giving real weight to a world where we have neither extinction nor 30% growth would still be an update to important views about superhuman AI. It seems like evidence against the Most Important Century thesis, for example.
Yes, basically (if I understand correctly). If you think a policy has impact X for each year it’s in place, and you don’t discount, then the impact of causing it to pass rather than fail is something on the order of 100 * X. The impact of funding a campaign to pass it is bigger, though, because you presumably don’t want to count the possibility that you fund it later as part of the counterfactual (see my note above about Appendix Figure D20).
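To make the back-of-the-envelope explicit, here is a minimal sketch of that calculation—my own illustration with made-up numbers, not something from the paper. With no discounting, a policy that persists for ~100 years with impact X per year is worth roughly 100 * X; adding even a modest discount rate shrinks the total considerably. The policy_impact function, the 100-year horizon, and the 2% rate are all hypothetical choices for the example.

```python
# Illustrative only: value of a policy passing now vs. failing, assuming it
# stays in place for `persistence_years` with a constant annual impact.
# All numbers here are made up for the example.

def policy_impact(annual_impact, persistence_years=100, discount_rate=0.0):
    """Sum the (optionally discounted) annual impacts over the policy's lifetime."""
    return sum(
        annual_impact / (1 + discount_rate) ** t
        for t in range(persistence_years)
    )

X = 1.0  # impact per year the policy is in place (arbitrary units)
print(policy_impact(X))                      # ~100 * X with no discounting
print(policy_impact(X, discount_rate=0.02))  # ~44 * X with a 2% annual discount rate
```

This is just the arithmetic behind the "100 * X" figure; the caveats listed next (impacts fading over time, substitution, negotiability) would all push a realistic estimate down.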
Some things to keep in mind:
Impacts might change over time (e.g., a policy might stop mattering in 50 years even if it’s still in place). If you think, e.g., transformative AI will upend everything, then the relevant horizon for how long this policy change matters might be the time until that happens.
I’m looking at whether this policy or a version of it will be in place. It’s possible policies will be substituted for in ways that make things wash out somewhat. (For instance, we don’t pass one animal welfare policy, but we pass some policy to shrink the farming sector.) I think this effect is small given the lack of differences by policy topic—this should be much more of an issue for some topics than others—but see the next point.
There are some hints of less persistence for policies where there’s more room for negotiation/more ways to dial it up and down. See my reply to Erich Grunewald lower down—for taxes and Congressional legislation, it seems like the effect on whether some possibly weaker version of the policy eventually passes might wash out.