Author, The Roots of Progress (rootsofprogress.org)
My perception of EA is that a lot of it is focused on saving lives and relieving suffering. I don’t see as much focus on general economic growth and scientific and technological progress.
There are two things to consider here. First, there is value in positives above and beyond merely living without suffering. Entertainment, travel, personal fitness and beauty, luxury—all of these are worth pursuing. Second, over the long run, more lives have been saved and suffering relieved by efforts to pursue general growth and progress than direct charitable efforts. So we should consider the balance between the two.
To EA’s credit, I think the community does understand this much better than other proponents of altruism and charity! And some EA organizations put resources into long-term scientific progress, which is great.
One thing I’m puzzled by is why there doesn’t seem to be a strong focus within EA on institutional reform (or not as strong as I would expect). A root-cause analysis of most human suffering, if it went deep enough, would blame the governments and cultures that don’t foster science, invention, industry, and business. It seems that the most high-leverage long-term plan to reduce human suffering would be to spread global rationality and capitalism.
I think basically you have to look at where an innovation sits in the tech tree.
Energy technologies tend to be fundamental enablers of other sectors. J. Storrs Hall makes a good case for the need to increase per-capita energy usage, which he calls the Henry Adams Curve: https://rootsofprogress.org/where-is-my-flying-car
But also, a fundamentally new way to do manufacturing, transportation, communication, or information processing would enable a lot of downstream progress.
I think there are a couple things with the bicycle. One is that it depended on materials and manufacturing techniques much more than is obvious (and more than I even brought out in that post): bearings, hollow metal tubes, gears and chains, rubber, etc.
The other is that it’s really just the overall story of progress: in a sense there was lots of low-hanging fruit for thousands of years before the Industrial Revolution.
But if you want to understand progress now, 300 years in, when the markets are much more efficient, so to speak, the analysis is different. Now there are lots of fruit-pickers everywhere looking for fruit. So there’s less obvious stuff lying around. Which is why we need to open up new technical fields, to discover whole new orchards of fruit (some of which will be low-hanging).
Oh, I should also point to the SSC response to “ideas getting harder to find”, which I thought was very good: https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/
In particular, I don’t think you can measure “research productivity” as percent improvement divided by absolute research input. I understand the rationale for measuring it this way, but I think for reasons Scott points out, it’s just not the right metric to use.
Another way to look at this is: one generative model for exponential growth is a thing that grows in proportion to its size. One way this can happen is that the growing thing invests a constant portion of its resources into growth. But in that model, you expect the resources devoted to growth to increase exponentially as well. IMO this is what we see with R&D.
Another place you can see this is in the growth of a startup. Startups can often grow revenue exponentially, but they also hire exponentially. If you used a similar measure of “employee productivity” parallel to “research productivity”, then you’d say it is going down, because an increasing number of employees is needed to maintain a constant % increase in revenue.
Further, what these examples ought to make clear is that exponentially increasing inputs to create exponential growth is actually totally sustainable. So, I don’t see it as a cause for alarm at all, but rather (as Scott says) the natural order of things.
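To make the point concrete, here is a toy model with made-up numbers (the 3% growth rate and 10% R&D share are illustrative assumptions, not data): an economy grows at a constant rate and always devotes a constant fraction of output to R&D. The growth rate never declines, R&D inputs rise exponentially, and the "research productivity" ratio (percent growth per unit of input) falls anyway, with nothing going wrong.

```python
# Toy model (illustrative numbers only): constant exponential growth funded by
# a constant *share* of output. "Research productivity" measured as
# (percent growth) / (absolute R&D input) declines even though the growth
# rate itself is perfectly steady and the arrangement is sustainable.

GROWTH_RATE = 0.03   # constant 3% annual growth (assumed)
RND_SHARE = 0.10     # constant 10% of output devoted to R&D (assumed)
BASE_OUTPUT = 100.0  # arbitrary starting output

for year in (0, 25, 50, 75, 100):
    output = BASE_OUTPUT * (1 + GROWTH_RATE) ** year
    rnd_input = RND_SHARE * output            # grows exponentially with output
    productivity = GROWTH_RATE / rnd_input    # % growth per unit of R&D spend
    print(f"year {year:3d}: output {output:10.1f}  "
          f"R&D input {rnd_input:9.1f}  productivity {productivity:.6f}")
```

At year 0 the productivity ratio is 0.03 / 10.0 = 0.003; a century later it has fallen by a factor of roughly twenty, purely because the denominator (inputs) compounds while the numerator (the growth rate) is constant by construction.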
In brief, I think: (1) subjective measures of well-being don’t tell us the full story about whether progress is real, and (2) the measures we have are actually inconsistent, with some showing positive benefits of progress, others flat, and a few slightly negative (but most of them not epidemics).
To elaborate, taking the second point first:
The Easterlin Paradox, to my understanding, dissolved over time with more and better data. Steven Pinker addresses this pretty well in Enlightenment Now, which I reviewed here: https://rootsofprogress.org/enlightenment-now
Our World in Data has a section on this, showing that happiness is correlated with income both within and between countries, and over time: https://ourworldindata.org/happiness-and-life-satisfaction#the-link-between-happiness-and-income
Regarding rates of mental illness, the data don’t show a consistent increasing trend, and certainly nothing like the “epidemic” we sometimes hear about:
Mental health and substance abuse disorders, flat since 1990 in many regions: https://ourworldindata.org/grapher/share-with-mental-and-substance-disorders?tab=chart
Global suicide rates are significantly down since 1990: https://ourworldindata.org/grapher/suicide-death-rates-by-sex (see also How have suicide rates changed?)
OWID also concludes that there is no loneliness epidemic: https://ourworldindata.org/loneliness-epidemic
But to return to the first point, I think we have to be careful in using metrics like self-reported life satisfaction to evaluate progress.
Emotional responses tend to be short-term and relative. They report a derivative, not an integral. That does not, however, mean that the derivative is all that matters! Rather, it means that our emotions don’t tell us about everything that matters.
In the last few hundred years, we have eradicated smallpox, given women the ability to control their reproduction and choose their careers, liberated most of humanity from back-breaking physical labor and 80+ hour work weeks, opened the world to travel and cultural exchange, and made the combined knowledge, art, and philosophy of the world available to almost everyone. (And that’s just a small sample of the highlights.)
I think these things are self-evidently good. If a subjective measure of well-being doesn’t report that people are happier when they aren’t sentenced to hard labor on a farm, when they aren’t trapped within a few miles of their village, when they and their families don’t starve from famine caused by drought, and when their children don’t die before the age of five from infectious disease… then all that proves is that people have forgotten what those things are like and don’t know how good they have it.
Off the top of my head:
Maximum life expectancy. We’ve pushed up life expectancy at birth enormously, and life expectancy at all ages has increased somewhat. But 80–90 years is still “old” and we haven’t cured aging itself.
Art? I haven’t looked into it much, but I don’t really know of any significant improvement in fine arts for a very long time—not in style/technique and not even in the technology (e.g., methods of casting a bronze sculpture). I’d also suggest that music has gotten less sophisticated, but this is super-subjective and treads in culture-war territory, so I’m just going to throw it out there as a wild-ass hypothesis for someone to follow up on at some point.
Education? High school graduation rates are up, and world literacy rates are up, but I’m not really sure about overall educational achievement?
Health care price/affordability: medicine itself has advanced tremendously, but the pricing on basic services is all out of whack and the way we pay for them is a tangled mess.
Housing affordability, maybe? I’m not sure.
If you said 50 years instead of 100, there’s a longer and more obvious list. There really hasn’t been any major breakthrough in manufacturing, agriculture, energy, or transportation in that time, and some things (like passenger flight speeds and airport convenience) have clearly regressed.
I should add, though, that I think there is an important truth in the concern about whether progress makes us happier. Material progress doesn’t make us happier on its own: it also requires good choices and a healthy psychology.
Technology isn’t inherently good or bad, it is made so by how we use it. Technology generally gives us more power and more choices, and as our choices expand, we need to get better at making choices. And I’m not sure we’re getting better at making choices as fast as our choices are expanding.
The society-level version of this is that technology can be used for evil too, for instance when it enables authoritarian governments or destructive wars. And just as at the individual level, I’m not sure our “moral technology” is advancing at the same rate as our physical technology.
So, I do see problems here. I just don’t think that technology is the problem! Technology is good and we need more of it. But we also need to improve our psychological, social, and moral “technology”.
More in this dialogue: https://pairagraph.com/dialogue/354c72095d2f42dab92bf42726d785ff
I don’t know much about it beyond that Wikipedia page, but I think that something like this is generally in the right direction.
In particular, I would say:
Technology is not inherently risk-creating or safety-creating. Technology can create safety, when we set safety as a conscious goal.
However, technology is probably risk-creating by default. That is, when our goal is anything other than safety—more power, more speed, more efficiency, more abundance, etc.—then it might create risk as a side effect.
Historically, we have been reactive rather than proactive about technology risk. People die, then we do the root-cause analysis and fix it.
Even when we do anticipate problems, we usually don’t anticipate the right ones. When X-rays were first introduced, people had a moral panic about men seeing through women’s clothing on the street, but no one worried about radiation burns or cancer.
Even when we correctly anticipate problems, we don’t necessarily heed the warnings. At the dawn of the antibiotic age, Alexander Fleming foresaw the problem of resistance, but that didn’t prevent doctors from way overprescribing antibiotics for many years.
We need to get better at all of the above in order to continue to improve safety as we simultaneously pursue other technological goals: more proactive, more accurate at predicting risk, and more disciplined about heeding the risk. (This is obviously so for x-risk, where the reactive approach doesn’t work!)
I see positive signs of this in how the AI and genetics communities are approaching safety in their fields. I can’t say whether it’s enough, too much, or just right.
Anyway, DTD seems like a much better concept than the conventional “let’s slow down progress across the board, for safety’s sake.” This is a fundamental error, for reasons David Deutsch describes in The Beginning of Infinity.
But that’s also where I might (I’m not sure) disagree with DTD, depending on how it’s formulated. The reason to accelerate safety-creating technology is not because “it may be too difficult to prevent the development of a risky technology.” It’s because most risky technologies are also extremely valuable, and we don’t want to prevent them. We want them, we just want to have them safely.
Re my own focus:
The irony is that my original motivation for studying progress was to better ground and validate my epistemic and moral ideas!
One challenge with epistemic, moral, and (I’ll throw in) political ideas is that we’ve literally been debating them for 2,500 years and we still don’t agree. We’ve probably come up with many good ideas already, but they haven’t gotten wide enough adoption. So I think figuring out how to spread best practices is more high-leverage than making progress in these fields as such.
Before I got into what would come to be called “progress studies”, I spent a quarter-century discussing and debating philosophic ideas with many different people, who had many different viewpoints. One thing that became clear to me was that, not only do people not agree on how to solve our problems, they don’t even agree on what the problems are. A left-wing environmentalist focuses on climate change, while a right-wing deficit hawk focuses on the national debt. Each thinks that even the problem the other one is so worried about is overblown, while their own problem is neglected. So of course they call for different policies.
I realized that a lot of the issues I care about, and the problems underlying them, were founded on my keen appreciation for the story of human progress: how bad living standards used to be and how much they’ve improved.
And, further, I thought that studying the history of progress—not just material, but epistemic and moral too, actually—would be the best way to empirically ground any claims about how to make the world better.
I started by studying material progress because (1) it happened to be what I was most interested in and (2) it’s the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress. Science obviously supports technology. Freedom of thought and expression is needed for science. Economic freedom is needed for material progress. Technological progress provides the surplus that is needed to fund science, and invents the instruments that science needs too. Economic progress provides the means for a free society to defend itself militarily, and ultimately justifies and validates that society. So I don’t think they can be separated.
Long-term, I’d like to study moral and epistemic progress. I’d love to do a history of science, for instance. On moral progress, I’d love to read (or write!) about how we ended practices like slavery, dueling, and trial by ordeal; how we developed concepts like rule of law and individual rights; how we moved from tribalism to universalism and recognized the humanity of all races and sexes. Some of this is covered very well in Pinker’s recent books (Better Angels and Enlightenment Now) but more could be done.
Re the Long Reflection:
I haven’t read Ord’s take on this, but the concept as you describe it strikes me as not quite right. For one, to pause on material progress would come at a terrible cost: all of the lives we could be saving and extending, all the people we could be lifting out of poverty, all of the things we can’t even anticipate that would come from more wealth, technology and infrastructure.
For another, it seems to imply a very high degree of being able to anticipate and predict the future, which I think we just don’t have. I think David Deutsch captures this better than I can; from The Beginning of Infinity (pp 202–204):
… a recurring theme in pessimistic theories throughout history has been that an exceptionally dangerous moment is imminent. Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle.…
As we look back on the failed civilizations of the past, we can see that they were so poor, their technology was so feeble, and their explanations of the world so fragmentary and full of misconceptions that their caution about innovation and progress was as perverse as expecting a blindfold to be useful when navigating dangerous waters. Pessimists believe that the present state of our own civilization is an exception to that pattern. But what does the precautionary principle say about that claim? Can we be sure that our present knowledge, too, is not riddled with dangerous gaps and misconceptions? That our present wealth is not pathetically inadequate to deal with unforeseen problems? Since we cannot be sure, would not the precautionary principle require us to confine ourselves to the policy that would always have been salutary in the past – namely innovation and, in emergencies, even blind optimism about the benefits of new knowledge?
When you look back at the history of progress, one theme is that it’s generally impossible to anticipate where progress will come from or what an advance will lead to. Who could have anticipated that studying electromagnetic radiation would give us ways to communicate long-distance, or to do non-invasive imaging inside the human body?
So to say, “let’s not do these risky things, let’s only do these safe things”, presumes that (a) we know what risks we are subject to and (b) we know what activities will lead towards or away from them, and towards or away from solutions. But I just don’t think we can predict those things, not at the level that a Long Reflection would imply.
If we had paused for Reflection in 2010, instead of founding Moderna and BioNTech to pursue mRNA vaccine technology, where would we be today vs. covid?
In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.
Hmm, I thought that running discussion sessions with the students might be hard, but it was quite natural! I was lucky to get a great group of students in the first cohort.
There were some gaps in their knowledge I didn’t anticipate. They weren’t very familiar with simple machines and mechanical advantage, with basic molecular biochemistry such as proteins and DNA, or with basic financial/accounting concepts such as fixed vs. variable cost.
Not sure what to say about an EA course, sorry!
I will answer this, but there’s a lot to read here, so I will come back to it later—thanks!
“Are new fields getting harder to find?” I think this is the trillion-dollar question! I don’t have an answer yet though.
Is progress open indefinitely? I think there is probably at least a theoretical end to progress, but it’s so unimaginably far away that for our purposes today we should consider progress as potentially infinite. There are still an enormous number of things to learn and invent.
Maybe when I have some interventions I’m more sure of! (And/or if some powerful person or agency was directly asking me for input.)
Epistemically, before I can recommend interventions I need to really understand causation, and before I can explain or hypothesize causation, I need to get clear on the specific timeline of events. And in terms of personal motivation, I’m much more interested in the detailed history of progress than in arguing policy with people.
But, yes, eventually the whole point of progress studies is to figure out how to make more (and better) progress, so it should end up in some sort of intervention at some level.
If I had to recommend something now, I would at least point to a few areas of leverage:
Promote the idea of progress. Teach its history, in schools and universities. Promote it in art, especially more optimistic sci-fi. Journalists should become industrially literate, and it should be reflected in their stories. Celebrate major achievements. Etc.
Roll back over-burdensome regulation. As just one example, there’s a big spotlight shining on the FDA right now and its role in delaying the covid vaccines. For another, see Eli Dourado on environmental review.
Decentralize funding for science & research. I fear that the dominance of the federal government (in the US at least) in research funding, and the reliance on committee-based peer review, has led to too much consensus and groupthink and not enough room for contrarians and for ideas that challenge dominant paradigms. See Donald Braben’s Scientific Freedom (recently reprinted by Stripe Press).
See also my review of Where Is My Flying Car?, which I am very sympathetic with: https://rootsofprogress.org/where-is-my-flying-car
Alan Kay suggested that progress in education should be measured in “Sistine Chapel ceilings per lifetime.” Ultimately my goal is something similar, but maybe substitute “Nobel-worthy scientific discoveries”, “Watt-level inventions” or “trillion-dollar businesses” for the artistic goal. I’ll know if I’m successful if in twenty years, or fifty, people who did those things are telling me they were given inspiration and courage from my work.
The problem with Sistine Chapel ceilings is that it’s a lagging metric. We all need leading metrics to steer ourselves by. So on a much shorter timescale, I look at my audience size—over 12k on Twitter now and ~2,700 on my email list. I also look at the quality of the audience and the feedback I’m getting. With Progress Studies for Young Scholars, we gave the students an end-of-program feedback survey (two-thirds rated it 9 or 10 out of 10). When I write a book, of course, I’ll look at how well it sells. Etc.
Re actions I want people to take: right now I’m just happy if they listen and learn and find what I have to say interesting. And, especially for young people, I hope they will consider devoting their careers to ambitious goals that drive forward human progress.
I think being an engineer helps me dig into the technical details of the history I’m researching, and to write explanations that go deeper into that detail. Many histories of technology are very light on technical detail and don’t really explain how the inventions worked. One thing that makes me unique is actually explaining how stuff works. This is probably the most important thing.
I think being a founder is helpful in understanding some business fundamentals like marketing or finance. And I am constantly drawing parallels and making comparisons between today’s tech startup world and how business and invention were done in the past, or how science and research are done today.
I also think my experiences as a founder have helped me in launching The Roots of Progress. I have a sense of what kind of opportunities I’m personally interested in and have aptitude for, how to launch things and iterate on them, when something is taking off, what opportunities to pursue, how to build a social media presence, etc.
The Roots of Progress was really about following an opportunity at a specific moment in time, for me and for the world. Both starting the project as a hobby, when I was personally fascinated by the topic, and going full-time on it right when the “progress studies” movement was taking off. So I don’t see how it could have happened any differently.
See my reply to @BrianTan on a similar question, thanks!
There isn’t a lot out there. In addition to my own work, I would suggest Steven Pinker’s Enlightenment Now and perhaps David Deutsch’s The Beginning of Infinity. Those are some of the best sources on the philosophy of progress. Also Ayn Rand’s Atlas Shrugged, which is the only novel I know of that portrays science, engineering and business as a noble quest for the betterment of humanity.
Maybe there’s just a confusion with the metaphor here? I generally agree that there is a practically infinite amount of progress to be made.
I think ideas get progressively harder to find within any given field as it matures. However, when we create new fields or find new breakthrough technologies, it opens up whole new orchards of low-hanging fruit.
When the Web was created, there were lots of new ideas that were easy to find: “put X on the web” for many values of X. After penicillin was discovered, there was a similar golden age of antibiotics: “test X mold or Y soil sample for effectiveness against Z disease”. At times like this you see very rapid progress in certain applications.
Similarly, imagine if we got atomically precise manufacturing (APM). There would be a whole set of easy-to-find ideas: “manufacture X using APM.” Or if we got an easy way to understand and manipulate genes, there would be a set of easy-to-find ideas of the form “edit X gene to cure Y disease or enhance Z trait.”
I think the Great Stagnation is not a failure to extract all the value from existing fields, but rather a failure to open up new fields, to have new breakthroughs decades ago.
Further reading: https://rootsofprogress.org/teasing-apart-the-s-curves
Also: https://rootsofprogress.org/where-is-my-flying-car