I think this has the potential to be a crucial consideration for our space colonization strategy.
I see this raised often, but it seems like it’s clearly the wrong order of magnitude to make any noticeable proportional difference to the broad story of a space civilization, and I’ve never seen a good counterargument to that point. Wikipedia has a fine page on orders of magnitude for power. Solar energy received by Earth from the Sun is 1.740*10^17 W, vs 3.846*10^26 W for total solar energy output, a factor of roughly 2 billion. Mars is farther from the Sun and smaller, so it receives almost another order of magnitude less solar flux.
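A quick back-of-the-envelope check of these figures (the Mars orbital distance and radius inputs are my own rough assumptions, not from the original comment):

```python
# Order-of-magnitude check using the figures cited above (from
# Wikipedia's "Orders of magnitude (power)" page).
total_solar_output_w = 3.846e26   # total power output of the Sun, W
earth_intercepted_w = 1.740e17    # solar power intercepted by Earth, W

ratio = total_solar_output_w / earth_intercepted_w
print(f"Sun's total output is ~{ratio:.2e}x Earth's intercepted flux")
# ~2.2e9, i.e. roughly 2 billion times.

# Rough inputs (my assumptions): Mars orbits at ~1.52 AU and has
# ~0.53 Earth's radius, so flux per unit area falls as 1/1.52**2 and
# intercepted area as 0.53**2.
mars_vs_earth = (0.53 ** 2) / (1.52 ** 2)
print(f"Mars intercepts ~{mars_vs_earth:.2f}x Earth's flux")
# ~0.12x, i.e. nearly another order of magnitude less, as stated.
```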
Surfaces of planets are a minuscule portion of the habitable universe; whatever lives there won’t meaningfully directly affect aggregate population or welfare statistics of an established space civilization. The framing of the question is quantitatively far more extreme than treating the state of affairs in the tiny principality of Liechtenstein as of comparable importance to the state of affairs for the rest of the Earth.
I currently would guess that space habitats are better because they offer a more controlled environment, thanks to greater surveillance as well as human proximity, whereas an ecosystem on a planet would by and large be unmanaged wilderness.
Even on Mars (and more so on the other, even less hospitable planets in our system) support for life would have to be artificially constructed, and the life biologically altered (e.g. to deal with differences in gravity), more so for planets around stars with different properties. So in terms of human control over the creation of the environment, the tiny slice of extraterrestrial planets shouldn’t be expected to be very different in expected pseudowild per unit of solar flux, within one OOM.
If we can determine which method creates more wellbeing with some confidence, and we can tractably influence on the margin whether humanity chooses one or the other. E.g. SpaceX wants to colonize Mars whereas Blue Origin wants to build O’Neill cylinders, so answering this question may imply supporting one company over the other.
Influence through this channel seems to be ~0. Almost all the economic value of space comes from building structures in space, not on planetary surfaces, and leaving planets intact wastes virtually all of the useful minerals in them. Early primitive Mars bases (requiring space infrastructure to get them there) that are not self-sustaining societies will in no way noticeably substitute for the use of the other 99.99999%+ of extraterrestrial resources in the Solar System that are not on the surface of Mars in the long run. Any effects along these lines would be negligible compared to other channels (like Elon Musk making money, or which company is more successful at building space industry).
Thanks for the interesting post. Could you say more about the epistemic status of agricultural pesticides as the largest item in this category, e.g. what is the chance that in 3 years you would say another item (maybe one missing from this list) is larger? And what ratio do you see between agricultural pesticides and other issues you excluded from the category (like climate change and partially naturogenic outcomes)?
But this is essentially separate from the global public goods issue, which you also seem to consider important (if I’m understanding your original point about “even the largest nation-states being only a small fraction of the world”).
The main dynamic I have in mind there is ‘country X being overwhelmingly technologically advantaged/disadvantaged’ treated as an outcome on par with global destruction, driving racing, and the necessity of international coordination to set global policy.
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.
Biotech threats are driven by violence. On AI, for rational regulators of a global state, a 1% or 10% chance of destroying society looks like enough to mobilize immense resources and delay deployment of dangerous tech for safety engineering and testing. There are separate epistemic and internal coordination issues that lead to failures of the rational social planner model (e.g. US coronavirus policy has predictably failed to serve US interests or even the reelection aims of current officeholders; underuse of Tetlockian forecasting), and these loom large (it’s hard to come up with a rational planner model explaining observed preparation for pandemics and AI disasters).
I’d say that given epistemic rationality in social policy setting, you’re left with a big international coordination/brinkmanship issue, but you would get strict regulation against blowing up the world for small increments of profit.
I’d say it’s the other way around, because longtermism increases both rewards and costs in prisoner’s dilemmas. Consider an AGI race or nuclear war. Longtermism can increase the attraction of control over the future (e.g. wanting to have a long term future following religion X instead of Y, or communist vs capitalist). During the US nuclear monopoly some scientists advocated for preemptive war based on ideas about long-run totalitarianism. So the payoff stakes of C-C are magnified, but likewise for D-C and C-D.
On the other hand, effective bargaining and cooperation between players today is sufficient to reap almost all the benefits of safety (most of which depend more on not investing in destruction than investing in safety, and the threat of destruction for the current generation is enough to pay for plenty of safety investment).
And coordinating on deals in the interest of current parties is closer to the current world than fanatical longtermism.
But the critical thing is that risk comes not just from underinvestment in safety but from investments in catastrophically risky moves, driven by competitive games that an optimal allocation would rule out.
Thanks for this substantive and useful post. We’ve looked at this topic every few years in unpublished work at FHI to think about whether to prioritize it. So far it hasn’t looked promising enough to pursue very heavily, but I think more careful estimates of the inputs and productivity of research in the field (for forecasting relevant timelines and understanding the scale of the research) would be helpful. I’ll also comment on a few differences between the post and my models of BCI issues:
It does not seem a safe assumption to me that AGI is more difficult than effective mind-reading and control, since the latter requires a complex interface with biology, with large barriers to effective experimentation; my guess is that this sort of comprehensive regime of BCI capabilities will be preceded by AGI, and your estimate of D is too high.
The idea that free societies never stabilize their non-totalitarian character, so that over time stable totalitarian societies predominate, leaves out the application of this and other technologies to stabilizing other societal forms (e.g. security forces making binding oaths to principles of human rights and constitutional government, backed by transparently inspected BCI, or the introduction of AI security forces designed with similar motivations), especially if the alternative is predictably bad; also, other technologies like AGI would come along before centuries of this BCI dynamic had played out.
Global dominance is blocked by nuclear weapons, but dominance of the long-term future by a state that is a large chunk of the world outgrowing the rest (e.g. by being ahead in AI or space colonization once economic and military power is limited by resources) is more plausible, and S is too low.
I agree the idea of creating aligned AGI through BCI is quite dubious (it basically requires having aligned AGI to link with, and so is superfluous; and could in any case be provided by the aligned AGI if desired long term), but BCI that actually was highly effective for mind-reading would make international deals on WMD or AGI racing much more enforceable, as national leaders could make verifiable statements that they have no illicit WMD programs or secret AGI efforts, or that joint efforts to produce AGI with specific objectives are not being subverted; this seems to be potentially an enormous factor.
Lie detection via neurotechnology, or mind-reading complex thoughts, seems quite difficult, and faces structural issues in that the representations for complex thoughts are going to be developed idiosyncratically in each individual, whereas things like optic nerve connections and the lower levels of V1 can be tracked by their definite inputs and outputs, shared across humans.
I haven’t seen any great intervention points here for the downsides, analogous to alignment work for AI safety, or biosecurity countermeasures against biological weapons.
If one thought BCI technology was net helpful one could try to advance it, but it’s a moderately large and expensive field, so one would likely need leverage through advocacy or better R&D selection within the field to accelerate it enough to matter and be competitive with other areas of x-risk reduction activity.
I think if you wanted to get more attention on this, likely the most effective thing to do would be a more rigorous assessment of the technology and best efforts nuts-and-bolts quantitative forecasting (preferably with some care about infohazards before publication). I’d be happy to give advice and feedback if you pursue such a project.
My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people’s tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.
If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development either by firms or nation-states.
Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and individual voters and donors constituting only a minute share of the affected parties. And conflict and bargaining problems are entirely responsible for war and military spending, central to the failure to overcome externalities with global climate policy, and core to the threat of AI accident catastrophe.
If those things were solved, and the risk-reward tradeoffs well understood, then we’re quite clearly in a world where we can have very low existential risk and high consumption. But if they’re not solved, the level of consumption is not key: spending on war and dangerous tech that risks global catastrophe can be motivated by the fear of competitive disadvantage/local catastrophe (e.g. being conquered) no matter how high consumption levels are.
People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the more valuable it is to further reduce that risk.
Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.
This argument starts with assumptions implying that civilization has on the order of a 10^-3000 chance of surviving a million years, a duration typical of mammalian species. In the second case it’s 10^-1250. That’s a completely absurd claim, a result of modeling as though you have infinite certainty in a constant hazard rate.
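A minimal sketch of that calculation, assuming the constant per-century hazard rate being criticized here:

```python
# Survival probability over a million years (10,000 centuries) under a
# constant per-century extinction risk.
from math import log10

centuries = 1_000_000 // 100  # 10,000 centuries

for risk_per_century in (0.5, 0.25):
    log_p_survive = centuries * log10(1 - risk_per_century)
    print(f"{risk_per_century:.0%}/century -> survival ~10^{log_p_survive:.0f}")
# 50%/century gives ~10^-3010 and 25%/century ~10^-1249: the absurdly
# confident doom implied by extrapolating a constant hazard rate.
```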
If you start with some reasonable credence that we’re not doomed and can enter a stable state of low risk, this effect becomes second-order or negligible. E.g., leaping off from the Precipice estimates, say there’s an expected 1/6 extinction risk this century, and 1/6 for the rest of history, i.e. probably we stabilize enough for civilization to survive as long as feasible. If the two periods were uncorrelated, then this reduces the value of preventing an existential catastrophe this century by between 1/6 and 1/3 compared to preventing one after this century’s risk is passed. That’s not negligible, but also not first-order, and the risk of catastrophe would also cut the returns of saving for the future (your investments and institution/movement-building for x-risk 2 are destroyed if x-risk 1 wipes out humanity).
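A minimal sketch of the uncorrelated case, using the 1/6 figures above (illustrative only):

```python
# If later risk is 1/6 and uncorrelated with this century's risk, the
# value of preventing a catastrophe now is proportional to the chance
# civilization also survives the rest of history.
later_risk = 1 / 6
value_multiplier = 1 - later_risk
print(f"value multiplied by {value_multiplier:.3f} (a {later_risk:.0%} cut)")

# Symmetrically, this century's 1/6 risk cuts the returns to "saving
# for later": resources built up for future x-risk work are lost if an
# earlier catastrophe wipes out humanity first.
this_century_risk = 1 / 6
savings_multiplier = 1 - this_century_risk
print(f"returns to saving multiplied by {savings_multiplier:.3f}")
```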
[For the Precipice estimates, it’s also worth noting that part of the reason for risk being after this century is credence on critical tech developments like AGI happening after this century, so if we make it through that this century, then risk in the later periods is lower since we’ve already passed through the dangerous transition and likely developed the means for stabilization at minimal risk.]
Scenarios where we are 99%+ likely to go prematurely extinct, from a sequence of separate risks that would each drive the probability of survival low, are going to have a very low NPV of the future population. But we should not be near-certain that we are in such a scenario, and with uncertainty over reasonable parameter values the dominant cases wind up being those with substantial risk followed by a substantial likelihood of safe stabilization, so late x-risk reduction work is not favored over reduction soon.
The problem with this is similar to the problem with not modeling uncertainty about discount rates, discussed by Weitzman. If you project forward 100 years, scenarios with high discount rates drop out of your calculation, while the low-discount-rate scenarios dominate at that point. Likewise, the longtermist value of the long-term future is all about the plausible scenarios where hazard rates give a limited cumulative x-risk probability over future history.
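A toy version of the Weitzman point (the discount rates and 50/50 weights are my own illustrative assumptions):

```python
# With uncertainty over discount rates, the expected discount factor is
# dominated at long horizons by the low-rate scenario, so the
# certainty-equivalent discount rate declines with the horizon.
from math import exp, log

rates = [0.01, 0.07]   # hypothetical low and high discount rates
weights = [0.5, 0.5]   # assumed 50/50 credence over the two scenarios

for t in (1, 100, 1000):
    expected_factor = sum(w * exp(-r * t) for w, r in zip(weights, rates))
    ce_rate = -log(expected_factor) / t  # certainty-equivalent rate at t
    print(f"horizon {t:>4} yrs: certainty-equivalent rate = {ce_rate:.4f}")
# The rate falls from ~0.04 at t=1 toward the 0.01 floor by t=1000 --
# analogously, long-run x-risk value is dominated by low-hazard scenarios.
```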
This result might not hold up if:
In future centuries, civilization will reduce x-risk to such a low rate that it will become too difficult to reduce any further.
It’s not required that it *will* do so, merely that it may plausibly go low enough that the total fraction of the future lost to such hazard rates doesn’t become overwhelmingly high.
“The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save.”
That was explicitly discussed at the time. I cited the blog post as a historical reference illustrating that such considerations were in mind, not as a comprehensive publication of everything people discussed at the time, when in fact there wasn’t one. That’s one reason, in addition to your novel contributions, I’m so happy about your work! GPI also has a big hopper of projects adding a lot of value by further developing and explicating ideas that are not radically novel so that they have more impact and get more improvement and critical feedback.
If you would like to do further recorded discussions about your research, I’m happy to do so anytime.
The Stern discussion.
Hanson’s If Uploads Come First is from 1994, his paper on economic growth given machine intelligence is from 2001, and uploads were much discussed in transhumanist circles in the 1990s and 2000s, with substantial earlier discussion (e.g. by Moravec in his 1988 book Mind Children). Age of Em added more detail and makes a number of interesting smaller points, but the biggest ideas (Malthusian population growth by copying, and the economic impacts of brain emulations) are definitely present in 1994. The general idea of uploads as a technology goes back even further.
Age of Em should be understood like Superintelligence, as a polished presentation and elaboration of a set of ideas already locally known.
My recollection is that back in 2008-12 discussions would often cite the Stern Review, which reduced pure time preference to 0.1% per year, and thus concluded massive climate investments would pay off, the critiques of it noting that it would by the same token call for immense savings rates (97.5% according to Dasgupta 2006), and the defenses by Stern and various philosophers that pure time preference of 0 was philosophically appropriate.
In private discussions and correspondence it was used to make the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. I cited it for this in this 2012 blog post. People also discussed how this would go away if sufficient investment was applied patiently (whether for altruistic or other reasons), ending the era of dreamtime finance by driving pure time preference towards zero.
Trammell also argued that most people use too high a discount rate, so patient philanthropists should compensate by investing rather than donating now; as far as I know, this is a novel argument.
This has been much discussed from before the beginning of EA, Robin Hanson being a particularly devoted proponent.
My biggest issue is that I don’t think returns to increased donations are flat: the highest returns come from entering neglected areas where EA funds are already, or would be after investment, large relative to existing funds, and I see returns declining closer to logarithmically than staying flat as EA resources increase.
This is not correctly modeled in your Guesstimate, despite it doing a Monte Carlo draw over different rates of diminishing returns, because it ignores the correlation between diminishing returns and the impact of existing spending: if EA makes truly outsized altruistic returns, it will be by doing things that are much better than typical, and so the accounts on which more neglected activities are the best thing to do now also have higher current philanthropic returns as well as faster diminishing returns.
Likewise, high investment returns are associated with moving along the diminishing returns curve in the future, as diminishing marginal returns are not exogenous when EA is a large share of activity in an area; by drawing investment returns and diminishing returns as separate variables, your results wind up dominated by cases where explosive growth in EA funds is accompanied by flat marginal returns that are extremely implausible because of the missing correlations.
These reflect a general problem with Guesstimate models: it’s easy to create independent draws of variables that are not actually independent of each other, and get answers exponentially off as one considers longer time frames or more variables.
Regarding prognostications of future equity returns, I think it’s worthwhile to follow other fundamental projections in breaking down equity returns into components such as P/E, economic growth, growth in corporate profits as a share of the economy, etc. In particular, this reveals that some past sources of equity returns can’t be extrapolated indefinitely: e.g. 100%+ corporate profit shares are not possible, huge profit shares would likely be accompanied by higher corporate or investment taxes, and early stock returns involved low rates of stock ownership and high transaction costs.
When there are diminishing returns to spending in a given year, being forced to spend assets too quickly in response to a surprise does lower the efficiency of spending, so regulatory changes requiring increased disbursement rates can be harmful.
Mission hedging and tying funding to epistemic claims can be very important for altruistic investing; e.g. if scenarios where AI risk is higher are correlated with excess returns for AI firms, then an allocation to address that risk might overweight AI securities.
GiveWell top charities are relatively extreme in the flatness of their returns curves among areas EA is active in, which is related to their being part of a vast funding pool of global health/foreign aid spending, which EA contributions don’t proportionately increase much.
In other areas like animal welfare and AI risk EA is a very large proportional source of funding. So this would seem to require an important bet that areas with relatively flat marginal returns curves are and will be the best place to spend.
I agree risks of expropriation and costs of market impact rise as a fund gets large relative to reference classes like foundation assets (eliciting regulatory reaction) let alone global market capitalization. However, each year a fund gets to reassess conditions and adjust its behavior in light of those changing parameters, i.e. growing fast while this is all things considered attractive, and upping spending/reducing exposure as the threat of expropriation rises. And there is room for funds to grow manyfold over a long time before even becoming as large as the Bill and Melinda Gates Foundation, let alone being a significant portion of global markets. A pool of $100B, far larger than current EA financial assets, invested in broad indexes and borrowing with margin loans or foundation bonds would not importantly change global equity valuations or interest rates.
Regarding extreme drawdowns, they are the flipside of increased gains, so they are a question of whether investors have the courage of their convictions regarding the altruistic returns curve for funds to set risk-aversion. Historically, Kelly criterion leverage on a high-Sharpe portfolio could have provided some reassurance by being ahead of a standard portfolio over very long time periods, even with great local swings.
Thanks for the post. One concern I have about the use of ‘power’ is that it tends to be used for the fairly flexible ability to pursue varied goals (good or bad, wisely or foolishly). But many resources are disproportionately helpful for particular goals or levels of competence. E.g. practices of rigorous, reproducible science will give more power and prestige to scientists working on real topics, or who achieve real results, but they also constrain what those scientists can do with that power (the norms make it harder for a scientist who wins stature thereby to push p-hacked pseudoscience for some agenda). Similarly, democracy increases the power of those who are likely to be elected, while constraining their actions towards popular approval. A charity evaluator like GiveWell may gain substantial influence within the domain of effective giving, but won’t be able to direct most of its audience to charities that have failed in well-powered randomized controlled trials.
This kind of change, which directs power differentially towards truth, or better solutions, should be of relatively greater interest to those seeking altruistic effectiveness (whereas more flexible power is of more interest to selfish actors, or those with aims that hold up less well under those circumstances). So it makes sense to place special weight on asymmetric tools favoring correct views, like science, debate, and betting.
Wayne, the case for leverage with altruistic investment is in no way based on the assumption that arithmetic returns equal median or log returns. I have belatedly added links above to several documents that go into the issues at length.
The question is whether leverage increases the expected impact of your donations, taking into account issues such as diminishing marginal returns. Up to a point (the Kelly criterion level), increasing leverage drives up long-run median returns and growth rates at the expense of greater risk (much less than the increase in arithmetic returns).
The expected dollars donated do grow with the increased arithmetic returns (multiplied by leverage, less borrowing costs, etc.), but they become increasingly concentrated in outcomes of heavy losses or a shrinking minority of increasingly extreme gains. In personal retirement, you value money less as you have more of it, at a quite rapid rate, which means the optimal amount of risk to take for returns is less than the level that maximizes long-run growth (the Kelly criterion), and vastly less than maximizing arithmetic returns.
In altruism when you are a small portion of funding for the causes you support you have much less reason to be risk-averse, as the marginal value of a dollar donated won’t change a lot if it goes from $30M to $30M+$100k in a given year. At the level of the whole cause, something closer to Kelly looks sensible.
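A rough sketch of the leverage tradeoff under the standard lognormal model (the return and volatility figures are my own hypothetical inputs, not recommendations):

```python
# Under lognormal returns, expected (arithmetic) return rises linearly
# with leverage f, but the long-run growth rate
#   g(f) = r + f*(mu - r) - 0.5*(f*sigma)**2
# peaks at the Kelly fraction f* = (mu - r) / sigma**2.
mu, r, sigma = 0.07, 0.01, 0.16   # hypothetical return, risk-free rate, vol

kelly = (mu - r) / sigma ** 2     # ~2.34x with these inputs

for f in (1.0, kelly, 2 * kelly):
    arithmetic = r + f * (mu - r)
    growth = r + f * (mu - r) - 0.5 * (f * sigma) ** 2
    print(f"leverage {f:.2f}x: arithmetic {arithmetic:.3f}, growth {growth:.3f}")
# At twice the Kelly fraction the growth rate falls all the way back to
# the risk-free rate, even though arithmetic returns keep climbing.
```

This is why personal retirement calls for less risk than Kelly, while an altruist who is a small share of a cause’s funding can justify operating closer to it, as argued above.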
E.g. the VIX, a measure of stock market volatility (which plays a role in formulae for optimal leverage), is above 30 right now, close to twice the typical level. That’s a quantitative matter, though, and considering future donation streams (which are not yet invested) pushes towards more leverage (see the book Lifecycle Investing). But people shouldn’t do anything involving leverage before understanding it thoroughly.
This is a brief response, so please don’t rush intemperately into things before understanding what you’re doing on the basis of any of this. For general finance information, especially about low-fee index investing, I recommend Bogleheads (the wiki and the forum):
For altruistic investment, the biggest differentiating factors are 1) maximizing tax benefits of donation; 2) greater willingness to take risks than with personal retirement, suggesting some leverage.
Some tax benefits worth noting in the US:
1) If you purchase multiple securities you can donate those which increase in value, avoiding capital gains tax, and sell those that decline (tax-loss harvesting), allowing you to cancel out other capital gains tax and deduct up to $3,000/yr against ordinary income.
2) You can get a deduction for donating to charity (this is independent of and combines with avoiding capital gains on donations of appreciated securities). But this is only if you itemize deductions (so giving up the standard deduction), and thus is best to do only once in a few years, concentrating your donations to make itemizing worthwhile. There is a cap of 60% of income (100% this year because of the CARES act) for deductible cash contributions, 30% for donations of appreciated securities (although there can be carryover).
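A hypothetical illustration of the bunching point (the numbers are my assumptions, not tax advice, and it assumes donations are the only itemizable deduction):

```python
# Compare itemizing a $10k donation every year vs bunching two years'
# donations into one year and taking the standard deduction the other.
standard_deduction = 12_400   # 2020 single-filer figure
annual_donation = 10_000      # hypothetical yearly giving
marginal_rate = 0.35          # hypothetical marginal tax rate

# Donating yearly: $10k never exceeds the standard deduction, so
# itemizing adds nothing over the two years.
yearly_benefit = 2 * max(annual_donation - standard_deduction, 0) * marginal_rate

# Bunching: itemize $20k one year, take the standard deduction the next.
bunched_benefit = max(2 * annual_donation - standard_deduction, 0) * marginal_rate

print(f"{yearly_benefit:.2f} vs {bunched_benefit:.2f}")  # 0.00 vs 2660.00
```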
3) You can donate initially to a donor advised fund to collect the tax deduction early and have investments grow inside tax-free, saving you from taxes on dividends, interest and any sales of securities that you aren’t transferring directly to a charity. However, DAFs charge fees that take back some of these gains, and have restrictions on available investment options (specifically most DAFs won’t permit leverage).
Re leverage, this increases the likelihood of the investment going very high or very low, with the optimal level depending on a number of factors. Here are some discussions of the considerations:
My own preference would be to make a leveraged investment that can’t go to a negative value so you don’t need to monitor it constantly, e.g. a leveraged index ETF (such as UPRO, TQQQ, or SOXL), or a few. If it collapses you can liquidate and tax-loss harvest. If it appreciates substantially, then donate the appreciated ETF in chunks to maximize your tax deduction (e.g. bunching it up when your marginal tax rate will be high, to give up to the 30% maximum deduction limit).