Thanks for the great analysis!
Your first post said, “My current best guess is that, between now and 2100, we face a ~35% chance of a serious, direct conflict between Great Powers.” This seems to be the estimate used in your Guesstimate model for “probability of a major great power war breaking out before 2100”.
But in this post you say your best guess for the chance of at least one great power war breaking out this century is 45%. I’m not sure why there is this discrepancy; am I missing something?
“I have regularly seen proposals in the community to stop and regulate AI development”—Are there any public ones you can signpost to or are these all private proposals?
A couple have already mentioned it but I’ll repeat the request for nutrition advice. In particular: As a vegan, what supplements (aside from B12) should I be taking? Are there any that may be harmful? Are there any evidence-backed dietary interventions that improve cognitive performance?
Re your second point, a counter would be that implementing recommendations arising from ERS will often have impacts on the population alive at the time of implementation, and the larger those impacts are, the less feasible specialization seems. E.g. if total utilitarians/longtermists were seriously considering pursuing the implementation of global governance/ubiquitous surveillance, this might risk such a significant loss of value to non-utilitarian non-longtermists that it’s not clear total utilitarians/longtermists should be left to dominate the debate.
Thank you for the detailed response, very helpful!
Thanks for the thorough reply; I’ve now read the second post, which suggested more potential for direct impact than I had initially thought. On (2), I agree value drift wasn’t a great term for what I had in mind. Thanks for bringing out the nuance there.
Thanks for this write-up! A few questions, some of which you may already be planning to address in future posts:
How long do you think it takes to pick up the vast majority (say >80%) of the transferable skills mentioned? I.e. might an optimal strategy be to go into a top consulting firm for 1–2 years and then use the skills, brand and connections to do something more directly impactful? Roughly what proportion of the consultants you interviewed/in the EA and Consulting Network are pursuing this strategy, versus those who think they can maximise the impact of their career by staying in consulting longer term?
Do you have a sense of what proportion of EAs end up staying in consulting longer than planned/longer than would be optimal from an impact perspective, due to value drift? More concretely, I’m thinking of how the temptation to maintain a high salary or pursue the next promotion, or the lack of time to spend considering other options, might delay people’s exit longer than is optimal.
Any successful case studies to support this: “You could potentially shift the project portfolio of your consulting firm towards more impactful opportunities (especially in a senior role like partner), and influence existing projects to be more effective”?
You wrote, “You can develop expertise in a specific area or industry early in your career, even when you don’t have a background in the topic, through on-the-job learning, training opportunities, direct exposure to experts”. You also said that “Management consulting firms expect new consultants to spend time working across various industries before specializing, which could delay the start date for you to focus on your areas of interest by a few years.” Are you saying (1) it is possible to develop expertise in a specific area, but only after a few years; (2) even in the first few years you can develop some level of expertise on each of your projects, but it’s more broad than deep; or (3) something else?
Challenge prize(s) to incentivise the development of innovative solutions in priority areas. These could be prizes for goals already suggested by people in this thread (e.g. producing resilient food sources, drastic changes to diagnostic testing, meat alternatives underinvested in by the market) or others.
Quotes from a Nesta report on challenge prizes (caveat that I haven’t spent any time looking up opposing evidence/perspectives):
By guiding and incentivising the smartest minds, prizes create more diverse solutions. Because prizes only pay out when a problem has been solved, you can support long shots, radical ideas and unusual suspects while minimising risk...
The high profile of a prize can raise public awareness and shape the future development of markets and technologies. Prizes can help identify best practice, shift regulation and drive policy change...
For the Ansari XPRIZE, 26 teams spent $100 million chasing the $10 million prize, jump starting the commercial space industry.
See also Musk’s $100m prize for carbon capture tech.
Thanks for this! I’d love to hear your views on the potential for impact in this career path. For example: (1) What are some positive examples of impact that you/colleagues have been able to have on cause areas that EAs typically care about such as reducing existential risk? How rare is this kind of impact? (2) To what extent are staff in overseas embassies influencing the policy of the UK government vs just communicating it?
(3) How much has the Foreign Office’s chance of making a difference on important global issues been diminished by Brexit?
No worries if you don’t feel comfortable answering all/any of these questions on here.
Thanks very much for this write-up, I learned a lot from it!
I’m a bit confused by your position on the counterfactual impact of Pugwash on political leaders’ views.
On the one hand:
You say in your summary table that there is ‘weak positive evidence’ for Claim 4, the full version of which is that “Soviet Pugwash scientists made a difference to their government’s perspective on anti-ballistic missiles.” (as an aside, I think it would be clearer to change the shorthand for Claim 4 from ‘Relaying ideas to governments’ to something that highlights the necessity for counterfactual change, e.g. ‘Counterfactually changing government position’)
You write in the section on Claim 4, “Since there is a good case that Pugwash participants had a counterfactual influence on the policy outcome, I think it is right to treat the effect of Pugwash on the Anti-Ballistic Missile Treaty as an example of a big win for Track II diplomacy.”
Your conclusion states, “This is a proof of concept indicating that Track II diplomacy can have very large effects [my emphasis] on policies that are important for great power relations.”
On the other hand:
In the section on Claim 4 you say you think it’s equally likely that “political leaders would have agreed to limit ABMs regardless of whether they were convinced that the technology could be destabilising” and that “hearing scientists’ concerns would have been important for swaying political leaders to agree to ABM limitations.”
You later write that “If Pugwash participants ultimately influenced the policy outcome in the case of anti-ballistic missiles, it was because there was a window of opportunity for influence. The advocacy of respected scientists gave leaders additional reason to support a policy which they would already have been strongly disposed to favour because of other considerations.” This suggests to me something like, “Leaders were already >50% likely to support the policy, and Pugwash increased that likelihood but did not have a decisive impact on the policy being approved.”
More generally, it might be helpful to use probability ranges to clarify what you mean by phrases like ‘a good case that’ and ‘very large effects’, and to use quantitative modelling to try to reach a more precise estimate of Pugwash’s counterfactual impact.
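To illustrate the kind of quantitative modelling I have in mind, here is a toy Monte Carlo sketch in Python. Every parameter range here is a purely illustrative placeholder I made up, not an estimate of the actual probabilities:

```python
import numpy as np

# Toy model of Pugwash's counterfactual impact on the ABM Treaty.
# All parameter ranges below are illustrative placeholders only.
rng = np.random.default_rng(0)
n = 100_000

# P(treaty agreed | no Pugwash advocacy): leaders were already
# strongly disposed to favour the policy for other reasons.
p_baseline = rng.uniform(0.4, 0.8, n)

# Additional probability of agreement attributable to the
# advocacy of respected scientists.
uplift = rng.uniform(0.0, 0.2, n)

# Counterfactual impact = P(treaty | Pugwash) - P(treaty | no Pugwash)
impact = np.minimum(p_baseline + uplift, 1.0) - p_baseline

print(f"Mean counterfactual impact: {impact.mean():.3f}")
print(f"90% interval: {np.percentile(impact, [5, 95]).round(3)}")
```

Even a toy model like this forces the key quantities (the baseline probability that leaders would have agreed anyway, and the uplift from scientists’ advocacy) into the open, which I think would make phrases like ‘a good case that’ and ‘very large effects’ much easier to evaluate.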