I recently gave a talk on one of my own ambitious projects at my organization, and gave the following outside-view outcomes, in order of likelihood:
1. The project fails to gain any traction or have any meaningful impact on the world.
2. The project has an impact on the world, but despite intentions the impact is negative, neutral, or too small to matter.
3. The project has enough of a positive outcome to matter.
In general, I’d say that, outside view, this is the most likely order of outcomes for any ambitious/world-saving project. And I was saying it specifically to elicit feedback and make sure people were red-teaming me morally.
However, it’s not clear to me that putting more money into research/thinking improves those odds much.
For one thing, again, the most likely outcome is that the project fails to gain any traction or have any impact at all, so you need to be de-risking that through classic lean-startup, MVP-style work anyway. You shouldn’t wait on that, and you shouldn’t spend a bunch of money figuring out the positive or negative effects at scale of an intervention that won’t actually be able to scale (most things won’t).
For another, I think that a lot of the benefit of potentially world-changing projects comes through hard-to-reason-about flow-through effects. For instance, in your example about Andrew Carnegie and libraries, a lot of the benefits would be hard-to-gesture-at stuff related to having a more educated populace and how that affects various aspects of society and culture. You can certainly create Fermi estimates and systems models, but ultimately people’s models will be very different, and missing one variable or relationship in a complex systems model of society can completely reverse the outcome.
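To make that concrete, here is a toy sketch of the sign-reversal problem. All variable names and figures below are invented purely for illustration (they are not drawn from the Carnegie example or any real analysis); the point is only that leaving out a single relationship can flip the conclusion of a Fermi estimate.

```python
# Toy Fermi estimate of a hypothetical library-funding program.
# Every number here is made up to illustrate one point:
# omitting a single relationship can reverse the estimated net impact.

readers_per_year = 1_000_000   # people reached annually (assumed)
value_per_reader = 50          # assumed yearly benefit per reader, in dollars
annual_cost = 30_000_000       # assumed yearly program cost, in dollars

# Naive model: direct benefits minus direct costs.
naive_net = readers_per_year * value_per_reader - annual_cost
print(f"Naive net impact:    ${naive_net:,}")      # +20,000,000

# One relationship the naive model missed: suppose (hypothetically) the
# program displaces other civic spending that had its own value.
displaced_value = 35_000_000

adjusted_net = naive_net - displaced_value
print(f"Adjusted net impact: ${adjusted_net:,}")   # -15,000,000
```

The particular numbers don’t matter; what matters is that two reasonable people can disagree about whether the displacement term belongs in the model at all, and that disagreement alone determines the sign of the answer.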
Ultimately, it might be better to use the types of reasoning/systems analysis that work under Knightian uncertainty, things like “Is this making us more anti-fragile? Is this effectual, allowing us to continually build toward more impact? Is this increasing our capabilities in an asymmetric way?”
This is the exact type of reasoning that would cause someone intuitively to think that space settlements are important—it’s clearly a thing that increases the anti-fragility of humanity, even if you don’t have exact models of the threats that it may help against. By increasing anti-fragility, you’re increasing the ability to face unknown threats. Certainly, you can get into specifics, and you can realize it doesn’t make you as anti-fragile as you thought, but again, it’s very easy to miss some other specifics that are unknown unknowns and totally reverse your conclusion.
I ultimately think what makes sense is a sort of culture of continuous oversight/thinking about your impact, rather than specific up-front research or a budget. Maybe you could have “impact-analysisathons” once a quarter where you discuss these questions. I’m not sure exactly what it would look like, but I notice I’m pretty skeptical of the idea of putting a budget here or creating a team for this purpose. I think they end up doing lots of legible impact analysis which ultimately isn’t that useful for the real questions you care about.
This is the exact type of reasoning that would cause someone intuitively to think that space settlements are important—it’s clearly a thing that increases the anti-fragility of humanity, even if you don’t have exact models of the threats that it may help against. By increasing anti-fragility, you’re increasing the ability to face unknown threats. Certainly, you can get into specifics, and you can realize it doesn’t make you as anti-fragile as you thought, but again, it’s very easy to miss some other specifics that are unknown unknowns and totally reverse your conclusion.
This would be a good argument if Musk had built and populated Antarctica bunkers before going to space.
It’s pretty clear that being multiplanetary is more anti-fragile than bunkers? It provides more optionality, allows for more differentiation and evolution, and provides stronger challenges.
I agree it provides stronger challenges. I think I disagree with the other claims as presented, but the sentence is not detailed enough for me to really know if I actually disagree.