jasoncrawford, Author of The Roots of Progress (rootsofprogress.org)
That’s interesting, because I think it’s much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.
The former is something we have tons of experience with: there’s history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don’t get any chances to get it wrong and course-correct.
(Again, this is not to say that I’m opposed to AI safety work: I basically think it’s a good thing, or at least it can be if pursued intelligently. I just think there’s a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.)
As to whether my four questions are cruxy or not, that’s not the point! I wasn’t claiming they are all cruxes. I just meant that I’m trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!
I’m not making a claim about how effective our efforts can be. I’m asking a more abstract, methodological question about how we weigh costs and benefits.
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal’s Mugging.
If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.
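To spell out the structure I’m worried about, here’s a minimal numeric sketch (all numbers are invented for illustration, not anyone’s actual estimates):

```python
# Toy sketch of the Pascal's Mugging structure (all numbers invented).
# If the benefit side is always (tiny probability reduction) x (1e15 future
# lives), the product can dwarf any plausible near-term cost almost
# regardless of how small the probability reduction is.
future_lives = 1e15      # hypothetical number of future lives at stake
cost_in_lives = 1e6      # hypothetical near-term cost of some intervention

for risk_reduction in [1e-3, 1e-6, 1e-9, 1e-12]:
    expected_benefit = risk_reduction * future_lives
    verdict = "accept" if expected_benefit > cost_in_lives else "reject"
    print(f"risk reduction {risk_reduction:g}: "
          f"expected benefit {expected_benefit:,.0f} lives "
          f"vs. cost {cost_in_lives:,.0f} -> {verdict}")
```

The point is just that once both sides get explicit numbers, the conclusion depends on the estimates rather than following automatically; that’s all I mean by weighing costs and benefits.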
And so then I just want to know, OK, what’s the plan? Maybe the best way to find the crux here is to dive into the specifics of what PS and EA/XR each propose to do going forward. E.g.:
We should invest resources in AI safety? OK, I’m good with that. (I’m a little unclear on what we can actually do there that will help at this early stage, but that’s because I haven’t studied it in depth, and at this point I’m at least willing to believe that there are valuable programs there. So, thumbs up.)
We should raise our level of biosafety at labs around the world? Yes, absolutely. I’m in. Let’s do it.
We should accelerate moral/social progress? Sure, we absolutely need that—how would we actually do it? See question 3 above.
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost. Failing to maintain and accelerate progress, in my mind, is a global catastrophic risk, if not an existential one. And it’s unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.
But maybe that’s not actually the proposal from any serious EA/XR folks? I am still unclear on this.
Good points.
I haven’t read Ord’s book (although I read the SSC review, so I have the high-level summary). Let’s assume Ord is right and we have a 1⁄6 chance of extinction this century.
My “1e-6” was not an extinction risk. It’s a delta between two choices that are actually open to us. There are no zero-risk paths open to us, only one set of risks vs. a different set.
So:
What path, or set of choices, would reduce that 1⁄6 risk?
What would be the cost of that path, vs. the path that progress studies is charting?
How certain are we about those two estimates? (Or even the sign of those estimates?)
My view on these questions is very far from settled, but I’m generally aligned through all of the points of the form “X seems very dangerous!” Where I get lost is when the conclusion becomes, “therefore let’s not accelerate progress.” (Or is that even the conclusion? I’m still not clear. Ord’s “long reflection” certainly seems like that.)
I am all for specific safety measures. Better biosecurity in labs—great. AI safety? I’m a little unclear how we can create safety mechanisms for a thing that we haven’t exactly invented yet, but hey, if anyone has good ideas for how to do it, let’s go for it. Maybe there is some theoretical framework around “value alignment” that we can create up front—wonderful.
I’m also in favor of generally educating scientists and engineers about the grave moral responsibility they have to watch out for these things and to take appropriate precautions. (I tend to think that existential risk lies most in the actions, good or bad, of those who are actually on the frontier.)
But EA/XR folks don’t seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I’m hearing) is a kind of generalized fear of progress. Again, that’s where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.
I wrote up some more detailed questions on the crux here and would appreciate your input: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
As someone fairly steeped in Progress Studies (and actively contributing to it), I think this is a good characterization.
From the PS side, I wrote up some thoughts about the difference and some things I don’t quite understand about the EA/XR side here; I would appreciate comments: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
Help me find the crux between EA/XR and Progress Studies
As someone who is more on the PS side than the EA side, this does not quite resonate with me.
I am still thinking this issue through and don’t have a settled view. But here are a few scattered reactions I have to this framing.
On time horizon and discount rate:
I don’t think I’m assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
You say, “what does it matter if we accelerate progress by a few hundred or even a few thousand years?” I don’t understand that framing. It’s not about a constant number of years of acceleration; it’s about the growth rate (see the sketch below, after these points).
I am more interested in actual lives than potential / not-yet-existing ones. I don’t place zero value or meaning on the potential for many happy lives in the future, but I also don’t like the idea that people today should suffer for the sake of theoretical people who don’t actually exist (yet). This is an unresolved philosophical paradox in my mind.
Note, if we could cure aging, and I and everyone else had indefinite lifespans, I might change my discount rate? Not sure, but I think I would, significantly.
This actually points to perhaps the biggest difference between my personal philosophy (I won’t speak for all of progress studies) and Effective Altruism: I am not an altruist! (My view is more of an enlightened egoism, including a sort of selfish value placed on cooperation, relationships, and even on posterity in some sense.)
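To make the growth-rate point above concrete, here’s a toy calculation (invented numbers, not actual GDP figures): a one-point increase in the growth rate isn’t equivalent to a fixed head start of some number of years; the equivalent lead keeps compounding.

```python
import math

# Toy illustration (invented numbers): compare a 2% baseline growth rate
# with a 3% "accelerated" rate over one century.
baseline_rate, accelerated_rate = 0.02, 0.03
years = 100

baseline = (1 + baseline_rate) ** years        # ~7.2x the starting level
accelerated = (1 + accelerated_rate) ** years  # ~19.2x the starting level

# The accelerated path is not a fixed number of years "ahead": its lead,
# measured in years of baseline growth, keeps widening over time.
lead_years = math.log(accelerated / baseline) / math.log(1 + baseline_rate)
print(f"After {years} years: baseline {baseline:.1f}x, accelerated {accelerated:.1f}x")
print(f"Equivalent lead: ~{lead_years:.0f} years of baseline growth, and growing")
```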
On risk:
I’m always wary of multiplying very small numbers by very large numbers and then trying to reason about the product. So, “this thing has a 1e-6 chance of affecting 1e15 future people and therefore should be valued at 1e9” is very suspect to me. I’m not sure if that’s a fair characterization of EA/XR arguments, but some of them land on me this way.
Related, even if there are huge catastrophic and even existential risks ahead of us, I’m not convinced that we reduce them by slowing down. It may be that the best way to reduce them is to speed up—to get more knowledge, more technology, more infrastructure, and more general wealth.
On differential technology development (DTD) and moral/social progress:
I very much agree with the general observation that material progress has raced ahead of moral/social progress, and that this is a bad and disturbing and dangerous thing. I agree that we need to accelerate moral/social progress, and that in a sense this is more urgent than accelerating material progress.
I also am sympathetic in principle to the idea of differential technology development.
BUT—I honestly don’t know very clearly what either of these would consist of, in practice. I have not engaged deeply with the EA/XR literature, but I’m at least somewhat familiar with the community and its thinking now, and I still don’t really know what a practical program of action would mean or what next steps would be.
More broadly, I think it makes sense to get smarter about how we approach safety, and I think it’s a good thing that in recent decades we are seeing researchers think about safety issues before disasters happen (e.g., in genetic engineering and AI), rather than after as has been the case for most fields in the past.
“Let’s find safe ways to continue making progress” is maybe a message and a goal that both communities can get behind.
Sorry for the unstructured dump of thoughts, hope that is interesting at least.
I haven’t forgotten this, but my response has turned into an entire essay. I think I’ll do it as a separate post, and link it here. Thanks!
I don’t have strong opinions on the reproducibility issues. My guess is that if it has contributed to stagnation it’s been more of a symptom than a cause.
As for where to spend funding, I also don’t have a strong answer. My feeling is that the reproducibility problem isn’t really stopping anything; it’s a tax/friction/overhead at worst? So I would tend to favor a promising science project over a reproducibility project. On the other hand, metascience feels important, and more neglected than science itself.
I think advances in science leading to technology is only the proximal cause of progress. I think the deeper causes are, in fact, philosophical (including epistemic, moral, and political causes). The Scientific Revolution, the shift from monarchy to republics, the development of free markets and enterprise, the growth of capitalism—all of these are social/political causes that underlie scientific, technological, industrial, and economic progress.
More generally, I think that progress in technology, science, and government are tightly intertwined in history and can’t really be separated.
I think advances in the humanities are absolutely needed—more so in a certain sense than advances in the physical sciences, because our material technology today is far more advanced than our moral technology. I think moral and political causes are to blame for our incompetent response to covid; for high prices in housing, education, and medicine; and for lack of economic progress in poorer countries. I think better social “technology” is needed to avoid war, to reform policing, to end conspiracy theories, and to get everyone to vaccinate their children. And ultimately I think cultural and philosophical issues are at the root of the scientific/technological slowdown of the last ~50 years.
So, yeah, I think social advances were actually important in the past and will be in the future.
It’s hard to prioritize! I try to have overarching / long-term goals, and to spend most of my time on them, but also to take advantage of opportunities when they arise. I look for things that significantly advance my understanding of progress, build my public content base, build my audience, or better, all three.
Right now I’m working on two things. One is continued curriculum development for my progress course for the Academy of Thought and Industry, a private high school. The other, more long-term project is a book on progress. Along the way I intend to keep writing semi-regularly at rootsofprogress.org.
I am broadly sympathetic to Patrick’s way of looking at this, yes.
If progress studies feels like a miss on EA’s part to you… I think folks within EA, especially those who have been well within it for a long time, are better placed to analyze why/how that happened. Maybe rather than give an answer, let me suggest some hypotheses that might be fruitful to explore:
A focus on saving lives and relieving suffering, with these seen as more moral or important than comfort, entertainment, enjoyment, or luxury; or economic growth; or the advance of knowledge?
A data-driven focus that naturally leads to more short-term, measurable impact? (Vs., say, a more historical and philosophical focus?)
A concern about existential risk from technology and progress?
Some other tendency to see technology, capitalism, and economic growth as less important, less moral, or otherwise lower-status?
An assumption that these things are already popular and well-served by market mechanisms and therefore not-neglected?
As for “tuning the metal detector”, I think a root-cause analysis on progress studies or any other area you feel you “missed” would be the best way to approach it!
Well, one final thought: The question of “how to do the most good” is deep and challenging enough that you can’t answer it with anything less than an entire philosophy. I suspect that EA is significantly influenced by a certain philosophical orientation, and that orientation is fundamentally altruistic. Progress isn’t really altruistic, at least not to my mind. Altruism is about giving, whereas progress is about creating. They’re not unrelated, but they’re different orientations.
But I could be wrong here, and @Benjamin_Todd, above, has given me a whole bunch of stuff to read to challenge my understanding of EA, so I should go digest that before speculating any more.
I have a theory of change but not a super-detailed one. I think ideas matter and that they move the world. I think you get new ideas out there any way you can.
Right now I’m working on a book about progress. I hope this book will be read widely, but above all I’d like it to be read by the scientists, engineers and entrepreneurs who are creating, or will create, the next major breakthroughs that move humanity forward. I want to motivate them, to give them inspiration and courage. Someday, maybe in twenty years, I’d love to meet the scientist who solved human aging, or the engineer who invented atomically precise manufacturing, or the founder of a company providing nuclear power to the world, and hear that they were inspired in part by my work.
I’d also like my message to reach people in education, journalism, and the arts, and for them to help spread the philosophy of progress too, which will magnify that kind of impact.
And I’d like it to reach people involved in policy. See my answer to @BrianTan about “interventions” for more detail on what I’m thinking there.
I’d like to see the progress community doing more work on many fronts: on the history of specific areas, on frontier technologies and their possibilities, and on specific policy programs and reforms that would advance progress.
Let me say up front that there is a divergence here between my ideological biases/priors and what I think I can prove or demonstrate objectively. I usually try to stick to the latter because I think that’s more useful to everyone, but since you asked I need to get into the former.
Does government have a role to play? Well, taking that literally, then absolutely, yes. If nothing else, I think it’s clear that government creates certain conditions of political stability, and provides legal infrastructure such as corporate and contract law, property law including IP, and the court system. All of those are necessary for progress.
(And when I mentioned “root-cause analysis on most human suffering” above, I was mostly thinking about dysfunctional governments in the poorest countries that are totally corrupt and/or can’t even maintain law & order)
I also think government, especially the military, has at least a sort of incidental role to play as a customer of technology. The longitude problem was funded in part by the British navy. The technique of canning was invented when Napoleon offered a prize for a way to preserve food for his military on long foreign campaigns. The US military was one of the first customers of integrated circuits. Etc.
And of course the military has reasons to do at least some R&D in-house, too.
But I think what you’re really asking about is whether civilian government should fund progress, or promote it through “policy”, or otherwise be actively involved in directing it.
All I can say for sure here is: I don’t know. So here’s where we get into my priors, which are pretty much laissez-faire. That makes me generally unfavorable towards government subsidy or intervention. But again, this is what I don’t think I have a real answer on yet. In fact, a big part of the motivation for starting The Roots of Progress was to challenge myself on these issues and to try to build up a stronger evidentiary base to draw conclusions.
For now let me just suggest:
I think that all government subsidies are morally problematic, since taxpayers are non-consenting
I don’t (yet?) see what government subsidies can accomplish that can’t (in theory) be accomplished non-coercively
I worry that even when government attempts to advance progress, it may end up slowing it down—for example, the dominance of NIH/NSF in science funding combined with their committee-based peer-review system is often suggested as a factor slowing down scientific progress
In general I think that progress is better made in decentralized fashion, and government solutions tend to be centralized
I also think that progress is helped by accountability mechanisms, and government tends to lack these mechanisms or have weaker ones
That said, here are a few things that give me pause.
Government-backed R&D, even for not-directly-military purposes, has had some significant wins, such as DARPA kicking off the Internet.
Some major projects have only gotten done with government support, such as the US transcontinental railroad in the 1860s. This happened in the most laissez-faire country in the world, at a time when it was way more laissez-faire than it is now, so… if that needed government support, maybe there was a reason. (I don’t know yet.)
Economic strength is closely related to national security, which entangles the government and the economy in ways I haven’t fully worked out yet. E.g., I’m not sure the best way for government to ensure that we have strategic commodities such as oil, steel, and food in wartime.
Anyway, this is all stuff I continue to think deeply about and hope to have more to say about later. And at some point I would like to deeply engage with Mazzucato’s work and other similar work so that I can have a more informed opinion.
I don’t really have great thoughts on metrics, as I indicated to @monadica. Happy to chat about it sometime! It’s a hard problem.
Re measuring progress, it’s hard. No one metric captures it. The one people use if they have to use something is GDP, but that has all kinds of problems. In practice, you have to just look at multiple metrics, some of which are narrow but easy to measure, and some of which are broad aggregates or indices.
Re “piecewise” process, it’s true that progress is not linear! I agree it is stochastic.
Re a golden age, I’m not sure, but see my reply to @BrianTan below re “interventions”.
I’ll have to read more about progress in “renewables” to decide how big a breakthrough that is, but at best it would have to be counted, like genetics, as a potential future revolution, not one that’s already here. We still get most of our energy from fossil fuels.
Well, the participants are high school students, so for most of them, what they’ll do immediately is go on to university. Like all education, it is more of a long-term investment.
Maybe there’s just a confusion with the metaphor here? I generally agree that there is a practically infinite amount of progress to be made.
OK, so maybe there are a few potential attitudes towards progress studies:
It’s definitely good and we should put resources to it
Eh, it’s fine but not really important and I’m not interested in it
It is actively harming the world by increasing x-risk, and we should stop it
I’ve been perceiving a lot of EA/XR folks to be in (3) but maybe you’re saying they’re more in (2)?
Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I’m somewhere between (1) and (2)… I think there are valuable things to do here, and I’m glad people are doing them, but I can’t see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we’re just disagreeing on relative priority and neglectedness.
(But I don’t think that’s all of it.)