Instead, task a specific, identifiable agency with enforcing posterity impact statements. If their judgements are unreasonable, contradictory, or inconsistent, then there is a specific agency head that can be fired and replaced instead of a vast and unmanageable judiciary.
In the section on the sources of short-termist biases, John and MacAskill write:

Cognitive biases include actors’ tendencies to respond more strongly to vivid risks than to information acquired from abstract, general social scientific trends, as well as over-optimism about their ability to control and eliminate risks under situations of uncertainty. The attention that political actors pay to the future and to the nearby past are asymmetric because voters and many other political actors “can readily observe past economic performance but have little information about future conditions.” Thus, to economize on cognitive effort, many political actors forego the task of making predictions about the future and choose policies which have worked in the recent past. [emphasis mine]
What John and MacAskill are describing here doesn’t sound like a bias—it sounds like an actual political philosophy, one which people like Matt Ridley or Steven Pinker would probably endorse. There are many reasonable people who believe that we should extrapolate from past performance rather than “abstract, general social scientific trends”, and that we should be more optimistic about our ability to deal with risks in due time rather than rely on hastily implemented policies.

The people who believe this might be wrong, but you have to actually argue that they’re mistaken instead of just dismissing their worldview as a cognitive bias. Arguably, their philosophy was a useful corrective to past panics that involved long-term trends. The people who responded to the concrete overpopulation scare of the 20th century with vague optimism about our ability to feed more people were correct, whereas the people who had “abstract social scientific” reasons for expecting resources to run out were wrong, and disastrously so, given the mass sterilization and population control programs they inspired in India and China. (Obviously, I don’t think John and MacAskill would endorse those atrocities—my point is simply that what they call a cognitive bias would have prevented all that unnecessary suffering.)

To zoom out a bit, we should be careful not to implement longtermist reform in a way that dismisses the optimistic philosophy of governance which places greater weight on past experience.
I disagree with this, for two main reasons:
I disagree with the presumption that they were rational in being optimistic. While there is real progress in history (and that only if we count humans), I don’t agree that we should expect a bright future. I would argue that technological x-risk has wiped out most of the expected value of the future. On a longtermist view, if the future is positive, then x-risk reduction is our main priority; if the expected value of the future is negative, then moral-circle expansion is the most important thing to do.
I disagree with the implication that because the population bomb didn’t happen, the sterilizations were wrong. This is a classic case of hindsight bias, with no mitigation against it. More precisely, you need to claim that the population bomb couldn’t have happened, or was unlikely to happen, for your argument to go through. A longer comment by EricHerboso summarizes the miracles that were necessary to defuse the population bomb.
There was good reason back then to believe that overpopulation was a real problem whose time would come relatively soon. If it weren’t for technological breakthroughs with dwarf wheat and the IR8 rice variety, spearheaded by Norman Borlaug and others, our population would have seriously outstripped our ability to grow food by this point—the so-called Malthusian trap.
Using overpopulation as an example here would be akin to using something like global climate change as an example in the present, if it turns out that a technological breakthrough in the next 5-10 years completely obviates the need for us to be careful about greenhouse gas release in the future.
Because of this, I don’t think overpopulation is the best example for the point you’re trying to make here.
I disagree with the presumption that they were rational in being optimistic. While there is real progress in history (and that only if we count humans), I don’t agree that we should expect a bright future. I would argue that technological x-risk has wiped out most of the expected value of the future. On a longtermist view, if the future is positive, then x-risk reduction is our main priority; if the expected value of the future is negative, then moral-circle expansion is the most important thing to do.
Or just making society wealthier overall (i.e. maximizing economic growth) so that we can enjoy these last few hundred years more. Nonetheless, I don’t share your pessimism.
I disagree with the implication that because the population bomb didn’t happen, the sterilizations were wrong. This is a classic case of hindsight bias, with no mitigation against it. More precisely, you need to claim that the population bomb couldn’t have happened, or was unlikely to happen, for your argument to go through.
But my point is precisely that we couldn’t have known in advance what those solutions would look like, because knowledge growth is unpredictable. And given that we do end up solving many of these seemingly devastating problems, we should update in favor of a vague optimism about our future capability to deal with problems. I give the example of peak oil worries later in this post:
In the 70s, it was a common belief among the relevant technical experts that we would hit peak oil by the 90s. They could not have anticipated the new technologies that made more oil reserves accessible to us. If there was a longtermist research institute within the government at that time, it would have recommended that we stock up on foreign oil, and the end result of this would have been unaffordable transportation and heating for the poorest people on the planet.
Superb article, thanks so much for writing this.
If you haven’t seen it, you might enjoy my and John Myers’ article criticising the UK’s Future Generations Bill, which made many of the same arguments you make against a proposed law that featured things like the Posterity Impact Statements.
Thanks Larks, that was a great post!
Thanks Dwarkesh, really enjoyed this.
This section stood out to me:
I’ve noticed this distinction become relevant a few times now: between wide, department-spanning regulation / initiatives on one hand; and focused offices / people / agencies / departments with a narrow, specific remit on the other. I have in mind that the ‘wide’ category involves checking plans for compliance with some desiderata, and stopping or modifying them if they don’t comply; while the ‘focused’ category involves figuring out how to proactively achieve some goal, sometimes by building something new in the world.
Examples of the ‘wide’ category are NEPA (and other laws / regulation where basically anyone can sue); or new impact assessments required for a wide range of projects, such as the ‘future generations impact assessment’ proposal from the Wellbeing of Future Generations Bill (page 7 of this PDF).
Examples of the ‘focused’ category are the Office of Technology Assessment, the Spaceguard Survey Report, or something like the American Pandemic Preparedness Plan (even without the funding it deserves).
I think my examples show a bias towards the ‘focused and proactive’ category, but the ‘wide regulation’ category is obviously sometimes very useful, even necessary. Maybe one thought is that concrete projects should often precede wide regulation, and wide regulation often does best when it’s specific and legible (e.g. requiring that a specific safety-promoting technology is installed in new builds). We don’t mind regulation that requires smoke alarms and sprinklers, because they work and they are worth the money. It’s possible to imagine focused projects to drive down the costs of e.g. sequencing and sterilisation tech, and then maybe following up with regulation which requires that specific tech be installed to clear standards, enforced by a specific agency.
Great point Fin!
Though one thing I should have mentioned explicitly in the post is that being illegible and distributed is one failure mode of regulation, but certainly not the only one. For example, many US cities have building height limits which economists estimate cause billions in deadweight loss, higher rents, etc. But a building height limit is very legible and clear. Still, the relevant government bodies are often captured by concentrated activist groups and don’t consider the expected value for the broader public.
Btw, I think The Power Broker is an interesting book to read regarding focused projects. There are many legitimate criticisms of Robert Moses, but still it is remarkable how he basically built a startup within the NY government that was much more competent, efficient, and visionary than the rest of the political system.
Is there a good read regarding regulatory proposals for these technologies in particular? I worry that wide regulation around sequencing in particular might slow down tech that I think will be good, like CRISPR therapies or embryo selection. Or maybe that’s a category error?
Almost any intervention that slows down embryo selection is a net negative for the world, regardless of what other positives come along with it.
Embryo selection is probably the highest-ROI cause around for EA, and it’s possible right now; it’s crazy that Hsu isn’t getting more attention.
I agree! Not sure if you saw my interview of Steve Hsu on my podcast, where we get deep into the weeds on embryo selection: https://www.dwarkeshpatel.com/p/steve-hsu
You got him to talk about the gwern analysis!
Just skimming the subjects, I can tell that this will be the best interview of him I’ve seen so far, congratulations on getting him on. I am now a subscriber, and listening.
If you post another interview of him I will buy a sub on your substack for sure
Don’t have paid subs, but thank you! Glad you enjoyed!