Thanks! Looks like I have access through my university.
Diverging utilities can be an issue. You can also get infinite output in finite time. The larger issue is that the economy has no steady state. In economic growth models, a steady state (or balanced growth path (BGP)) represents a long-run equilibrium where key economic variables per capita (like capital per worker, output per worker, consumption per worker) grow at constant rates. This greatly simplifies the analysis.
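To make the idea concrete, here is a minimal sketch of a steady state in a textbook Solow model (my own toy illustration with made-up parameters, not the model from my paper): capital per worker converges to a fixed point of its law of motion, and that fixed point has a closed form.

```python
# Toy Solow model, illustrative parameters only.
# Capital per worker evolves as k' = s*A*k^alpha + (1 - delta)*k
# (no population growth). In steady state s*A*k^alpha = delta*k,
# so k* = (s*A/delta)^(1/(1-alpha)).

alpha = 0.33   # capital share
s = 0.2        # savings rate
delta = 0.05   # depreciation rate
A = 1.0        # TFP level

# Closed-form steady state
k_star = (s * A / delta) ** (1 / (1 - alpha))

# Check by iterating the law of motion until convergence
k = 1.0
for _ in range(10_000):
    k = s * A * k ** alpha + (1 - delta) * k

print(round(k_star, 4), round(k, 4))  # the two should agree
```

Once the economy is at (or converging to) such a point, per-capita variables grow at constant rates, which is what makes transition-path calculations tractable.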
For example, I have a paper in which I analyze how households would behave if they expected TAI to transform the economy. To do this, I calculated the steady state the economy was in prior to households learning about the potential of TAI as well as the post-TAI steady state. I could then calculate the transition path between these two steady states. I thought of using the same production function you used here, but then there wouldn’t have been a post-TAI steady state, which is necessary to be able to find the transition path.
A production function that I have mused about is one like yours, but with land. This should solve the issue, as the post-TAI economy will no longer be AK. It also addresses another issue I have with growth models: it’s really tricky to get wages to decrease in the long run. For the production function you use, if A_{old} were also growing at a constant rate, then wages would eventually rise. This is because the new production technology doesn’t harm the old technology other than by taking capital away; each production technology could work next to the other without interfering. In reality, there is a limited amount of space on earth, and once the new technology is more efficient, it isn’t profit maximizing to ‘waste space’ on the old style of production. I don’t know if it is worth modeling for what you are doing (it might be too many bells and whistles), but it is something I’ve been thinking about.
Thanks for this. If I understand correctly, the result is primarily driven by the elastic labor supply, which is a function of W and not of R, and the constant supply of capital. This seems most relevant for very fast takeoff scenarios.
My intuition is that as people realize that their jobs are being automated away, they will want to work more to bolster their savings before we move into the new regime where their labor is worthless and capital is all that matters. This would require fully modeling the household’s intertemporal utility and endogenizing the capital supply. This might be tricky, however, with your production function, because if A_{auto} is increasing at a constant rate and households are allowed to save, you will get superexponential growth.
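As a toy illustration of that last point (my own made-up numbers, not the post's model): in an AK-style automation sector with a fixed savings rate, the growth rate of capital is s*A_auto − δ, so if A_auto grows at a constant rate, the growth rate itself rises without bound, i.e. growth is superexponential.

```python
# Toy illustration with made-up parameters: AK-style automation sector
# with A_auto growing at a constant rate g and a fixed savings rate s.

s, g, delta = 0.3, 0.02, 0.05
A, K = 1.0, 1.0

growth_rates = []
for t in range(100):
    Y = A * K                        # AK production: output linear in capital
    K_next = s * Y + (1 - delta) * K  # capital accumulation out of savings
    growth_rates.append(K_next / K - 1)
    K = K_next
    A *= 1 + g                       # A_auto grows at constant rate g

# The growth rate of capital itself keeps rising: superexponential growth.
print(growth_rates[0], growth_rates[-1])
```

With a constant A the growth rate would be constant (ordinary exponential growth); it is the interaction of a growing A with accumulable K that makes the growth rate explode.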
Thanks for this excellent primer and case study! I learned a lot about causal analysis from your explanation. The section on using three waves to control for confounders while avoiding controlling for potential mediators was particularly helpful. I would be interested in hearing more about how the sensitivity analysis for unmeasured confounders works.
The positive effect of activism on meat consumption that you found is especially concerning and important. I hope that we can gain more insight into this soon. If this finding replicates, then a lot of organizations might have to reevaluate their methods.
Hi Matthew,
Thank you for your comment. I think this is a reasonable criticism! There is definitely an endogenous link between investment and AI timelines that this model misses. I think that this might be hard to model in a realistic way, but I encourage people to try!
On the other hand, I think the strategic motivation is important as well. For example, here is Satya Nadella on the Dwarkesh Podcast:

> And by the way, one of the things is that there will be overbuild. To your point about what happened in the dotcom era, the memo has gone out that, hey, you know, you need more energy, and you need more compute. Thank God for it. So, everybody’s going to race.
In reality, both mechanisms are probably in play. My paper is intended to focus on the race mechanism.
Two more notes: higher savings imply lower consumption in the short term. However, even if TAI isn’t invented, consumption will rise higher than in the stationary equilibrium purely from capital accumulation.
Lastly, the main thrust of the paper is on the implications for interest rates, I do not intend to make strong claims about social welfare.
I don’t think that the possible outcomes of AGI/superintelligence are necessarily so binary. For example, I am concerned that AI could displace almost all human labor, making traditional capital more important as human capital becomes almost worthless. This could exacerbate wealth inequality and significantly decrease economic mobility, making post-AGI wealth mostly a function of how much wealth you had pre-AGI.
In this scenario, saving more now would enable you to have more capital while returns to capital are increasing. At the same time, there could be billions of people out of work without significant savings and in need of assistance.
I also think even if AGI goes well for humans, that doesn’t necessarily translate into going well for animals. Animal welfare could still be a significant cause area in a post-AGI future and by saving more now, you would have more to donate then (potentially a lot more if returns to capital are high).
Why would Knightian uncertainty be an argument against AI as an existential risk? If anything, our deep uncertainty about the possible outcomes of AI should lead us to be even more careful.
Similar campaigns have worked really well for animal advocacy, so I’m excited to see what you can accomplish.
I’m wondering, what kinds of tasks can volunteers help with? If I have no social media accounts or experience trying to promote causes on social media, is there anything I can do?
> However, if they believe in near-term TAI, savvy investors won’t value future profits (since they’ll be dead or super rich anyways)
My future profits aren’t very relevant if I’m dead, but I might still care about them even if I’m super rich. Sure, my marginal utility will be very low, but on the other hand the profit from my investments will be very large. Even if everyone is stupendously rich by today’s standards, there might be a tangible difference between having a trillion dollars in your bank account and having a quadrillion dollars. Maybe I want my own galaxy in which I alone have the rights to build Dyson spheres, and that is out of the price range of your average Joe with a trillion-dollar net worth. Maybe (and this might be more salient to your typical investor who isn’t actively thinking about far-out sci-fi scenarios) I want the prestige, political control, etc., that come with being wealthy compared to everyone else.
A bet that interest rates will rise is not a bet on short AI timelines. Rather, it is a bet that:
- Most consumers will correctly perceive that AI timelines are short, and
- Most consumers will realize this long enough before TAI that there is enough time to benefit from profitable bets made now, and
- Most consumers will believe that transformative AI will significantly reduce the marginal utility they get from their savings—and not, say, increase the marginal value of saving, because they could lose their jobs without taking part in the newfound prosperity from AI
I believe that this is almost correct. My objection is to the second bullet point, “interest rates can rise before we get TAI”. This is possible, but we no longer have a reason to believe that it will happen—unless very many people decide to reduce their savings rates. In that case, this is no longer a bet on short AI timelines, but rather a bet on whether the typical consumer will realize that AI timelines are short sufficiently long before TAI that you have time to enjoy your profits.
If being even richer after TAI still confers benefits, interest rates could rise even before consumers begin adjusting their savings rates in response to TAI, through inductive reasoning. If I know that consumers will adjust their savings rate one day before TAI (assuming, for simplicity’s sake, a deterministic timeline where TAI occurs in one discontinuous jump and very unrealistic timescales for consumers changing their savings rate), then I should place a bet on the interest rate rising (e.g. shorting government bonds) two days before TAI. If enough investors take this action, then interest rates will rise two days before TAI. Knowing this, I should short government bonds three days before TAI, and so on. This is similar to how, if the government promises to print a lot of money in one month, inflation begins to rise immediately.
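The same unraveling logic shows up in simple bond pricing. Here is a toy no-arbitrage sketch (my own illustrative numbers): if the short rate is known to jump at date T, long-bond yields rise today rather than at T, because the anticipated jump is priced in immediately.

```python
# Toy no-arbitrage illustration with made-up numbers: the short rate is
# known to jump from r_low to r_high at date T (TAI). A long bond's
# yield reflects that jump today, not at T.

r_low, r_high = 0.02, 0.10
T, maturity = 10, 20          # rate jump at t=10, bond pays 1 at t=20

# No-arbitrage price today: discount the payoff through the known rate path.
price = 1.0
for t in range(maturity):
    r = r_low if t < T else r_high
    price /= 1 + r

yield_today = (1 / price) ** (1 / maturity) - 1
print(round(yield_today, 4))  # strictly between r_low and r_high
```

The long yield lands between the pre- and post-jump short rates from day one, which is the bond-market analogue of the "short three days before TAI" argument above.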
I am not aware of any international treaties which sanction the use of force against a non-signatory nation except for those circumstances under which one of the signatory nations is first attacked by a non-signatory nation (e.g. collective defense agreements such as NATO). Your counterexample of the Israeli airstrike on the Osirak reactor is not a precedent as it was not a lawful use of force according to international law and was not sanctioned by any treaty. I agree that the Israeli government made the right decision in orchestrating the attack, but it is important to point out the differences between that and what you are suggesting.
Ultimately, quibbling about whether your suggestion is an “act of violence” or not misses the point. What you suggest would be an unprecedented sanctioning of force. I believe the introduction of such an agreement would be very incendiary and would set a bad precedent. Note that no such agreement was signed to prevent nuclear proliferation. Many experts worried that nuclear weapons would proliferate much further than they ultimately did. Force was sometimes used, but always with a lighter hand than “let’s sign a treaty to bomb anyone we think has a reactor.”
My argument doesn’t hang on whether an X-risk occurs during my PhD. If AGI is 10 years away, it’s questionable whether investing half of that remaining time into completing a PhD is optimal.
I think that when discussing career longtermism we should keep the possibility of short AGI timelines in consideration (or the possibility of some non-AI related existential catastrophe occurring in the short term). By the time we transition from learning and building career capital to trying to impact the world, it might be too late to make a difference. Maybe an existential catastrophe has already occurred, or AGI was successful and so outclasses us that all of that time building career capital was wasted.
For example, I am in my first year of an economics PhD. Social impact through academia is very slow. I worry that before I am able to create any impact through my research it might be too late. I chose this path because I believe it will give me valuable and broadly robust skills that I could apply to creating impactful research. But now I wonder if I should have pursued a more direct and urgent way of contributing to the long-term future.
Many EAs, like me, have chosen paths in academia, which has a particularly long impact trajectory and is thus especially exposed to the risk of short timelines.
PS: I recently switched to the Microsoft Edge web browser and was intrigued to see if the Bing AI could help me write this comment. The final product is a heavily edited version of the final output it gave after multiple prompt attempts. Was it faster/better than just writing the entire comment myself? Probably not.
I don’t have an answer to which countries would be more receptive to the idea, definitely don’t try here in Israel!
I am however interested in the claimed effectiveness of open borders. Do these estimates take into account potential backlash or political instability that a large number of immigrants could cause? I understand that theoretically, closed borders are economically inefficient and solidify inequality, but I fear that open borders could cause significant political problems and backlash. Even if we were to consider this backlash to be unjustified or immoral, we need to keep it in consideration when thinking of the effects of this policy. Am I unjustified in thinking that significant negative political effects are possible?
I agree that the urban/rural divide as opposed to clear cut boundaries is not a significant reason to discredit the possibility of civil war, however, there are other reasons to think that civil war is unlikely.
This highly cited article provides evidence that the main causal factors of civil wars are what the authors call conditions that favor insurgency, rather than ethnic factors, discrimination, and grievances (such as economic inequality). The argument is that even when grievances give people reason to start a civil war, the war cannot get off the ground unless the right conditions are in place. A huge caveat here is that political polarization is not measured in this article, so the article does not rule it out as a significant factor.
The conditions in America do not favor insurgency. America has huge military, intelligence, and surveillance resources that it can use to counter an insurgency, and there are few underdeveloped regions where insurgents could hide.
Thanks for your input. Option value struck me as a subject that is not only relevant to EA, but also has not disseminated effectively from the academic literature to a larger audience. It’s very hard to find concrete information on option value outside of the literature. For example, the Wikipedia article on the subject is a garbled mess.
Hi Viadehi, I’m part of the new research group at EA Israel. For me personal fit and building career capital are the main reasons why I want to take part. I don’t think that research I do now will save the world, but hopefully it will help me build relevant skills and knowledge and develop a passion for research.
I’m imagining something that is Cobb-Douglas between capital and land. Growth should be exponential (not superexponential) when A_auto is growing at a constant rate, the same as for a regular Cobb-Douglas production function between capital and labor. Specifically, I was thinking of something like this:
Y = X_old^{beta} (A_old K_old^{alpha} L^{1-alpha})^{1-beta} + X_auto^{beta} (A_auto K_auto)^{1-beta}

s.t. X_old + X_auto = X_total (allocating land between the two production technologies)
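A quick numerical sketch of this production function (illustrative parameters only, chosen by me): compute total output for a given land split, then grid-search for the output-maximizing allocation. With A_auto well above the old sector's productivity, most land gets reallocated to the automated sector, which is the "don't waste space on the old technology" effect.

```python
# Sketch of the proposed land-augmented production function,
# with illustrative parameter values (not calibrated):
#   Y = X_old^beta * (A_old * K_old^alpha * L^(1-alpha))^(1-beta)
#     + X_auto^beta * (A_auto * K_auto)^(1-beta)
#   s.t. X_old + X_auto = X_total

alpha, beta = 0.33, 0.1
A_old, A_auto = 1.0, 5.0
K_old, K_auto, L = 2.0, 2.0, 1.0
X_total = 1.0

def output(x_auto):
    """Total output when land x_auto goes to the automated sector."""
    x_old = X_total - x_auto
    old = x_old ** beta * (A_old * K_old ** alpha * L ** (1 - alpha)) ** (1 - beta)
    auto = x_auto ** beta * (A_auto * K_auto) ** (1 - beta)
    return old + auto

# Grid search for the land allocation that maximizes total output.
grid = [i / 1000 for i in range(1, 1000)]
x_star = max(grid, key=output)

print(round(x_star, 3))  # most land goes to the automated sector
```

Because beta < 1, land has diminishing returns in each sector, so the post-TAI economy is no longer AK and a steady state can exist.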
As to your second point, yes, you are correct, as long as A_old is constant wages would not increase.