I’m the Forecasting Program Coordinator at Metaculus. I’m a former bridge engineer, and I’ve also written some sci-fi.
https://www.ryanbeckauthor.com/
https://twitter.com/BeckRyooan
That makes sense, I agree it’s better to have more direct sources.
That’s a good question. I’ve thought about this some before, and while my thinking is a bit messy, the general gist is something like this (framed from a US perspective, though I think it generalizes to most countries):
Tariffs should be avoided or minimized wherever possible, because they likely cost US citizens much more than they benefit them. However, tariffs and sanctions can be important tools when another country does something egregious, particularly when punitive measures are applied in cooperation with allies. Tariffs and sanctions should be targeted at the offending behavior and should scale with the severity of the offense.
So my rough framework isn’t that we should always avoid tariffs and sanctions, but that they should be limited, targeted to serve a purpose, and imposed in conjunction with our allies where possible. I think sanctions on China over its treatment of the Uyghurs are justified, and from what I’ve heard those have been targeted at the Xinjiang region and at the Chinese entities involved.
Similarly, the Russian invasion warrants severe consequences, and sanctions are more effective there because they’ve been imposed in conjunction with allies. If China were to invade Taiwan, or threaten to do so, a similar response would be justified.
The big difference to me with the trade war was that it was based on a misguided attempt to fix our trade imbalance, which, as far as I can tell, most economists don’t really see as a problem. The idea also seemed to be to use tariffs as a bargaining chip to negotiate better trade practices, such as IP protection. But the tariffs were applied unilaterally, don’t appear to have been targeted at all, and never seemed likely to accomplish those goals. In the meantime they’ve made things more expensive for Americans and have probably damaged relations with China, with nothing to show for it.
This is a good point, and I completely agree that the trade war is of small importance relative to things like relations with Taiwan. My reason for focusing on the trade war, though, is that trade deescalation would have very few downsides and would probably be a substantial positive on its own, even before considering the potential positive effects on relations with China and possibly on nuclear risk.
To me the same can’t be said for the Taiwan issue. The optimal policy there is far from clear to me. Strategic ambiguity is our intentional policy, and I’m not sure clarifying our stance would be preferable to that. Committing to defend Taiwan could allow Taiwan to do more provocative things, which could lead to war; declaring we will not defend Taiwan could empower China to invade. I agree it’s a significant issue that should be carefully considered, but it’s also one that international relations experts have surely spilled huge amounts of ink over, so I’m not sure there are any clearly superior policy options available in this area.
To be clear, I’m not arguing that people shouldn’t think about it or try to solve it. I’m definitely in favor of more discussion on the topic, and I’d love to read some high-effort analysis from an EA perspective.
If I’m understanding correctly, the main point you’re making is that I probably shouldn’t have said this:
There is little room for improvement here...
In that case, that’s a fair critique. I’m not well-informed enough to know the options here and their advantages and risks in great detail, so my perception that there’s not much room for improvement could be way off base.
I’d summarize my position this way: the Taiwan issue is a hard question that I’m not equipped to solve, and I’m skeptical that significant improvements are available there, so I focused instead on a topic I view as low-hanging fruit. I was probably wrong to characterize the Taiwan issue as futile or unimprovable, though; I should have characterized it as a highly complex issue that I’m not equipped to do justice to, and one where I perceive substantial downsides to any shift in policy.
Yeah, definitely on the same page then! I agree with what you said, with the caveat that I’m skeptical about improvements on the Taiwan issue. If you find or know of any persuasive abyss-staring arguments on this topic (or write them yourself), I’d appreciate you sharing them with me; I’d be happy to be wrong in my skepticism and would like to learn more about any promising options.
Even if the ~300 new DF-41 silos discovered last year are each armed with only 3 warheads (the missile can carry ~10 max), and no other silos are built/discovered, that’s still 900 warheads on top of the ~400 already in service.
I’m not well-versed in this area, but reading through the Chinese Nuclear Notebook from November 2021, the authors seem somewhat skeptical of claims like this and point out that China could also intend the silos as a “shell game”. Quoting from the notebook:
And in November 2021, the Pentagon’s annual report to Congress projects that China might have 700 deliverable warheads by 2027, and possibly as many as 1,000 by 2030 (US Defense Department 2021, 90).
Such increases would require the deployment of a significant number of additional launchers, including MIRV-equipped missiles. It seems likely that the new projection assumes that China plans to deploy large numbers of MIRV’ed missiles in the new missile silo fields that are currently under construction. But there are several unknown factors. First, how many of the new silos will be loaded? China might build more silos than missiles to create a “shell game” that would make it harder for an adversary to target the missiles. Second, how many of the missiles will be MIRV’ed, and with how many warheads? Many non-official sources attribute very high numbers of warheads to MIRVed missiles (for example, 10 warheads per DF-41), but the actual number will likely be lower to maximize the range of the missile (perhaps three to five each, perhaps less). This is because we believe that the main purpose of the massive silo construction program is to safeguard China’s retaliatory capability against a surprise first-strike. And the main purpose of the MIRV program is probably to ensure penetration of US missile defenses, rather than to maximize the warhead loading of the Chinese missile force. As the United States strengthens its offensive forces and missile defenses, China will likely further modify its nuclear posture to ensure the credibility of its retaliatory strike force, including deploying hypersonic glide vehicles.
Would you disagree with that assessment?
I know a lot of ways to reduce China-US nuclear risk even without non-starters to the pro-democracy crowd (e.g. giving up defence commitments to certain US allies). There seems to be some major civilizational inadequacy in this area; i.e. obvious ways to have a major reduction on the risk that just nobody’s bothered to implement. I don’t think economic tensions/trade wars are very relevant to nuclear risk compared to more important factors in the grand scheme of things to be frank.
I agree that the trade war issue is probably low impact, but I focused on it because it has few downsides and potential upsides for nuclear risk. What ways to reduce China-US nuclear risk do you suggest? From what I’ve seen so far (which is admittedly very little), there seem to be very few feasible options for reducing nuclear risk with China, and most of the available options involve a lot of unknowns about implementation and effectiveness, along with potentially significant downsides.
That’s really interesting, thanks! I wonder why India is so supportive of it in comparison to other countries.
This was a cool contest, thanks for running it! In my view there’s a lot of value in doing this. Doing a deep dive into polygenic selection for IQ was something I had wanted to do for quite a while and your contest motivated me to finally sit down and actually do it and to write it up in a way that would be potentially useful to others.
I think your initial criterion of how much a writeup changed your minds may have played a role in the lower-than-expected number of entries as well. Your forecasts on the set of questions seemed very reasonable, and my own forecasts were pretty similar on the ones I had forecasted, so I didn’t feel I had much to contribute in terms of the contest criteria for most of them.
Hopefully that’s helpful feedback to you or anyone else looking to run a contest like this in the future!
No problem!
Also, if you’re interested in elaborating on why my scenarios were unintuitive, I’d appreciate the feedback; but if not, no worries!
These are good points and helpful, thanks! I agree I wasn’t clear in the initial comment about whether the scenarios should be viewed exclusively; I think I made that a little clearer in the follow-up.
when I read 80% to reach saturation at 40% predictive power I read this as “capping out at around 40%” which would only leave a maximum of 20% for scenarios with much greater than 40%?
Ah, I think I see how that’s confusing; my use of the term “saturation” probably muddies things. My understanding is that saturation is the likely maximum that could be explained with current approaches. So my forecast was an 80% chance we get to the 40% “saturation” level, but I think there’s a decent chance our technology and understanding advance so that more than the saturation level can be explained, and I gave a 30% chance that we reach 80% predictive power.
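Put in symbols, reading the scenarios as cumulative thresholds rather than exclusive buckets (which is how I intended them): if $R$ is the predictive power eventually reached, my forecasts were

$$\Pr(R \ge 40\%) = 0.80, \qquad \Pr(R \ge 80\%) = 0.30.$$

Since reaching 80% implies reaching 40%, consistency only requires $\Pr(R \ge 80\%) \le \Pr(R \ge 40\%)$, which $0.30 \le 0.80$ satisfies; the 30% is a subset of the 80%, not on top of it.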
That’s a good point about iterated embryo selection, I totally neglected that. My initial thought is it would probably overlap a lot with the scenarios I used, but I should have given that more thought and discussed it in my comment.
It still seems like prefixing with “not” runs into defining the group by disagreement, where I would guess people who lean that way would rather be named for what they’re prioritizing than for what they aren’t. I came up with a few (probably bad) ideas along those lines:
Immediatists (apparently not a made up word according to Merriam-Webster)
Contemporary altruists
Effective immediately
I’m relatively new so take my opinion with a big grain of salt. Maybe “not longtermist” is fine with most.
That’s a good point, I agree. None of my suggestions really fit very well, it’s hard to think of a descriptive name that could be easily used conversationally.
It’s a common misconception that those who want to mitigate AI risk think there’s a high chance AI wipes out humanity this century. But opinions vary, and proponents of mitigating AI risk may still think the likelihood is low. Crowd forecasts have placed the probability of a catastrophe caused by AI at around 5% this century, and of extinction caused by AI at around 2.5% this century. But even these low probabilities are worth trying to reduce when what’s at stake is millions or billions of lives. How willing would you be to take a pill at random from a pile of 100 if you knew 5 were poisoned? And the risk is higher over timeframes beyond this century.
I think the above could be improved with forecasts of extinction risk from prominent AI safety proponents like Yudkowsky and Christiano, if they’ve made any, but I’m not aware of whether they have.
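To make the stakes concrete, here’s a rough expected-value illustration (the 2.5% is the crowd forecast above; the ~8 billion world population figure is just for illustration):

$$\mathbb{E}[\text{lives lost}] \approx 0.025 \times 8 \times 10^{9} = 2 \times 10^{8},$$

i.e., roughly 200 million lives in expectation from the extinction scenario alone, before counting catastrophes short of extinction.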
Did your outcomes 2 and 3 get mixed up at some point? I feel like the evaluations don’t align with the initial descriptions of those, but maybe I’m misunderstanding.
Thanks for writing this though; it’s something I’ve been thinking about a little as I try to understand longtermism better. It makes sense to be risk-averse with existential risk, but at the same time I have a hard time understanding some of the more extreme takes. My wild guess would be that AI has a significantly higher chance of improving humanity’s well-being than of causing extinction. As I said, care is warranted with existential risk, but slowing AI development also delays your positive outcomes 2 and 3, and I haven’t seen much discussion of the downsides of delaying.
Also, I’m not sure about outcome 1 having zero utility. Maybe that’s standard notation, but it seems unintuitive to me; it kind of buries the downsides of extinction risk. To me it would seem more natural as a negative utility, relative to the positive utility currently existing in the world.
I’m not sure how your first point relates to what I was saying in this post; but, I’ll take a guess.
Sorry, what I said wasn’t very clear. Attempting to rephrase: I was thinking more along the lines of what the possible future of AI might look like if there were no EA interventions in the AI space. I haven’t seen much discussion of the possible downsides there (for example, prioritizing alignment slowing down AI research, resulting in delays to AI advancement and to the good things it would bring about). But this was a less-than-half-baked idea; thinking about it some more, I’m having trouble coming up with scenarios where that would produce a lower expected utility.
It doesn’t matter what outcome you assign zero value to as long as the relative values are the same since if a utility function is an affine function of another utility function then they produce equivalent decisions.
Thanks, I follow this now and see what you mean.
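For anyone else following along, a minimal sketch of that affine-invariance point in symbols: if $U'(x) = aU(x) + b$ with $a > 0$, then for any lottery $p$,

$$\mathbb{E}_p[U'] = a\,\mathbb{E}_p[U] + b,$$

so $\mathbb{E}_p[U'] \ge \mathbb{E}_q[U']$ exactly when $\mathbb{E}_p[U] \ge \mathbb{E}_q[U]$: the ranking of options is unchanged, and the choice of which outcome sits at zero (a choice of $b$) can’t affect any decision.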
Is there a way to sort answers by newest? I’m not seeing that option. It would be useful for finding new answers I haven’t seen yet.
Very cool, thank you!
It’s possible I missed it, but I didn’t see anything stating whether multiple submissions from one author are allowed; I assume they are, though?
Makes sense, thanks!
I didn’t have access to your link but I found another version of it here.
To be honest, I’m not familiar with the direct evidence either, so I’m mostly relying on secondhand impressions and general descriptions of tariff burdens falling on consumers. I searched around briefly just now and found this paper (also cited in the paper you linked, as Amiti et al. (2020b)), which reports:
However, it’s not clear to me what the relationship is between the tariff burden and the welfare-loss estimates you mentioned in your comment. It seems to me they could be measuring different things.