jasoncrawford (Author, The Roots of Progress, rootsofprogress.org)
I wouldn’t say speed limits are for no one in particular; I’d say they are for everyone in general, because they are a case where a preference (not dying in car accidents) is universal. But many preferences are not universal.
I know that egoism is technically an ethical framework, but I don’t see how it could ever yield meaningful rules that I think we’d agree we want as a society. It would be hard even to come up with rules like “You shouldn’t murder others” if your starting point is your own ego and maximizing your own self-interest.
Thanks… I would like to write more about this sometime. As a starting point, think through in vivid detail what would actually happen to you and your life if you committed murder. Would things go well for you after that? Does it seem like a path to happiness and success in life? Would you advise a friend to do it? If not, then I think you have egoistic reasons against murder.
I’m not using purely deontological reasoning, that is true. I have issues with deontological ethics as well.
I can understand not prioritizing these issues for grant-making, because of tractability. But if something is highly important, and no one is making progress on it, shouldn’t there at least be a lot of discussion about it, even if we don’t yet see tractable approaches? Like, shouldn’t there be energy in trying to find tractability? That seems missing, which makes me think that the issues are underrated in terms of importance.
Yes, but I don’t see why we have to evaluate any of those things on the basis of arguments or thinking like the population ethics thought experiments.
Increased immigration is good because it gives people freedom to improve their lives, increasing their agency.
The demographic transition (including falling fertility rates) is good because it results from increased wealth and education, which indicates that it is about women becoming better-informed and better able to control their own reproduction. If in the future fertility rates rise because people become wealthy enough to make child-rearing less of a burden, that would also be good. In each case people have more information and ability to make choices for themselves and create the life they want. That is what is good, not the number of people or whether the world is better in some impersonal sense with or without them.
Policies to accelerate or decelerate the demographic transition could be good or bad depending on how they operate. If they increase agency, they could be good; if they decrease it, they are bad (e.g., China’s “one child” policy; or bans on abortion or contraception).
We don’t need the premises or the framework of population ethics to address these questions.
Not sure, maybe both? I am at least somewhat sympathetic to consequentialism, though.
“What is the algorithm that we would like legislators to use to decide which legislation to support?”
I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to parentalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.
Re the China/US scenario: this does seem implausible; why would the US AI prevent almost all future progress, forever? Setting that aside, though, if this scenario did happen, it would be a very tough call. However, I wouldn’t make it on the basis of counting people and adding up happiness. I would make it on the basis of something like the value of progress vs. the value of survival.
Abortion policy is a good example. I don’t see how you can decide this on the basis of counting people. What matters here is the wishes of the parents, the rights of the mother, and your view on whether the fetus has rights.
I can’t imagine a way to guide my actions in a normative sense without thinking about whether the future states my actions bring about are preferable or not.
Preferable to whom? Obviously you could think about whether they are preferable to yourself. I’m against the notion that there is such a thing as “preferable” to no one in particular.
Of course many people de facto think about their preferences when making a decision and they often give that a lot of weight, but I see ethics as standing outside of that…
Hmm, I don’t. I see egoism as an alternative ethical framework, rather than as non-ethical.
These are good examples. But I would not decide any of these questions with regard to some notion of whether the world was better or worse with more people in it.
Senator case: I think social engineering through the tax code is a bad idea, and I wouldn’t do it. I would not decide on the tax reform based on its effect on birth rates. (If I had to decide separately whether such effects would be good, I would ask what is the nature of the extra births? Is the tax reform going to make hospitals and daycare cheaper, or is it going to make contraception and abortion more expensive? Those are very different things.)
Advice columnist: I would advise people to start a family if they want kids and can afford them. I might encourage it in general, but only because I think parenting is great, not because I think the world is better with more people in it.
Pastor: I would realize that I’m in the wrong profession as an atheist, and quit. Modulo that, this is the same as the advice columnist.
Redditor: I don’t think people should put pressure on their kids, or anyone else, to have children, because it’s a very personal decision.
All of this is about the personal decision of the parents (and whether they can reasonably afford and take care of children). None of it is about general world-states or the abstract/impersonal value of extra people.
Answered here
Good observations. I wonder if it makes sense to have a role for this, a paid full-time position to seek out and expose liars. Think of a policeman, but for epistemics. Then it wouldn’t be a distraction from, or a risk to, that person’s main job—it would be their job. They could make the mental commitment up front to be ready for a fight from time to time, and the role would select for the kind of person who is ready and willing to do that.
This would be an interesting position for some EA org to fund. A contribution to clean up the epistemic commons.
Thanks. That is an interesting argument, and this isn’t the first time I’ve heard it, but I think I see its significance to the issue more clearly now.
I will have to think about this more. My gut reaction is: I don’t trust my ability to extrapolate out that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the people who stop at pointing out that “the Earth is finite.”) But once we’re even 10^12 away from where we are now, let alone 10^200, who knows what we’ll find? Maybe we’ll discover FTL travel (OK, unlikely). Maybe we’ll at least be expanding out to other galaxies. Maybe we’ll have seriously decoupled economic growth from physical matter: maybe value to humans is in the combinations and arrangements of things, rather than the things themselves—bits, not atoms—and so we’d have many more orders of magnitude to play with.
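(For a rough sense of the timescales here, a back-of-the-envelope sketch; the steady 2% annual growth rate is purely an illustrative assumption on my part, not a forecast. The time to grow by a factor $F$ at 2% per year is

$$t(F) = \frac{\ln F}{\ln 1.02}, \qquad t(10^{12}) \approx 1{,}400 \text{ years}, \qquad t(10^{200}) \approx 23{,}000 \text{ years},$$

so even the smaller factor is more than a millennium of compounding away from today.)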
If you’re not willing to apply a moral discount factor against the far future, shouldn’t we at least, at some point, apply an epistemic discount? Are we so certain about progress/growth being a brief, transient phase that we’re willing to postpone the end of it by literally the length of human civilization so far, or longer?
First, PS is almost anything but an academic discipline (even though that’s the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement.
I agree these things aren’t mutually exclusive, but there seems to be a tension or difference of opinion (or at least difference of emphasis/priority) between folks in the “progress studies” community, and those in the “longtermist EA” camp who worry about x-risk (sorry if I’m not using the terms with perfect precision). That’s what I’m getting at and trying to understand.
Thanks JP!
Minor note: the “Pascal’s Mugging” isn’t about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).
Follow-up: I did write that essay ~5 months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum.
I was recently nudged on this again, and I’ve written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
Thanks ADS. I’m pretty close to agreeing with all those bullet points actually?
I wonder if, to really get to the crux, we need to outline what are the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development”, although both of those formulations are vague/general.
Re Bostrom:
“a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.”
By the same logic, would a 0.001% reduction in XR be worth a delay of 10,000 years? Because that seems like the kind of Pascal’s Mugging I was talking about.
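(Spelling out the scaling I’m assuming here, taking Bostrom’s figure at face value and scaling it linearly:

$$\frac{0.001\%}{1\%} \times 10^7 \text{ years} = 10^{-3} \times 10^7 \text{ years} = 10^4 \text{ years},$$

i.e., roughly the length of human civilization so far.)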
(Also for what it’s worth, I think I’m more sympathetic to the “person-affecting utilitarian” view that Bostrom outlines in the last section of that paper—which may be why I lean more towards speed on the speed/safety tradeoff, and why my view might change if we already had immortality. I wonder if this is the crux?)
OK, so maybe there are a few potential attitudes towards progress studies:
1. It’s definitely good and we should put resources into it.
2. Eh, it’s fine but not really important and I’m not interested in it.
3. It is actively harming the world by increasing x-risk, and we should stop it.
I’ve been perceiving a lot of EA/XR folks to be in (3) but maybe you’re saying they’re more in (2)?
Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I’m somewhere between (1) and (2)… I think there are valuable things to do here, and I’m glad people are doing them, but I can’t see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we’re just disagreeing on relative priority and neglectedness.
(But I don’t think that’s all of it.)
That’s interesting, because I think it’s much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.
The former is something we have tons of experience with: there’s history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don’t get any chances to get it wrong and course-correct.
(Again, this is not to say that I’m opposed to AI safety work: I basically think it’s a good thing, or at least it can be if pursued intelligently. I just think there’s a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.)
As to whether my four questions are cruxy or not, that’s not the point! I wasn’t claiming they are all cruxes. I just meant that I’m trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!
I’m not making a claim about how effective our efforts can be. I’m asking a more abstract, methodological question about how we weigh costs and benefits.
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal’s Mugging.
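(Schematically, and this is my gloss rather than anyone’s stated decision rule: let $V$ be the assumed value of the far future, on the order of $10^{15}$ lives, $C$ the cost of an intervention, and $\varepsilon$ the expected reduction in XR it buys. The mugging is committing to act whenever

$$\varepsilon \cdot V > C,$$

because with $V$ that large, the inequality holds for any realistic $C$ and almost any $\varepsilon > 0$.)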
If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.
And so then I just want to know, OK, what’s the plan? Maybe the best way to find the crux here is to dive into the specifics of what PS and EA/XR each propose to do going forward. E.g.:
We should invest resources in AI safety? OK, I’m good with that. (I’m a little unclear on what we can actually do there that will help at this early stage, but that’s because I haven’t studied it in depth, and at this point I’m at least willing to believe that there are valuable programs there. So, thumbs up.)
We should raise our level of biosafety at labs around the world? Yes, absolutely. I’m in. Let’s do it.
We should accelerate moral/social progress? Sure, we absolutely need that—how would we actually do it? See question 3 above.
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost. Failing to maintain and accelerate progress, in my mind, is a global catastrophic risk, if not an existential one. And it’s unclear to me whether that would even increase or decrease XR, let alone by how much—in any case I think there are very wide error bars on that estimate.
But maybe that’s not actually the proposal from any serious EA/XR folks? I am still unclear on this.
Only a little bit. In part they were a reaction to the religious wars that plagued Europe for centuries.