CEO of Convergence.
David_Kristoffersson
Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research
The ‘far future’ is not just the far future
What I appreciate the most about this post is simply the understanding it shows for people in this situation.
It’s not easy. Everyone has their own struggles. Hang in there. Take some breaks. You can learn, you can try something slightly different, or something very different. Make sure you have a balanced life, and somewhere to go. Make sure you have good plan Bs (e.g., I can always go back to the software industry myself). In the for-profit and wider world, there are many skills you can learn better than you would at an EA org.
State Space of X-Risk Trajectories
Variant of Korthon’s comment:
I never look at the “forum favorites” section. It seems like it’s looked the same forever and it takes up a lot of screen real estate without any use for me!
Happy to see the new institute take form! Thanks for doing this, Maxime and Konrad. International long-term governance appears very high-leverage to me. Good luck, and I’m looking forward to seeing more of your work!
BERI is doing an awesome service for university-affiliated groups, I hope more will take advantage of it!
Thanks for posting this. I find it quite useful to get an overview of how the EA community is being managed and developed.
I think this is an excellent initiative, thank you, Michael! (Disclaimer: Michael and I work together on Convergence.)
An assortment of thoughts:
More, and more studious, estimates of x-risks seem clearly very high-value to me, given how much the likelihood of risks and events affects priorities, and how much the quality of the estimates affects our communication about these matters.
More estimates should generally increase our common knowledge of the risks; and individually, if people think through how to make these estimates, they will reach a deeper understanding of the questions.
Breaking down the causes of one’s estimates is generally valuable. It allows one to improve one’s estimates and one’s understanding of causation, and to discuss them in more detail.
More estimates can be bad if low-quality estimates somehow swamp out better ones.
Estimates building on new (compared to earlier estimates) sources of information are especially interesting. Independent data sources increase our overall knowledge.
I see space for someone to write an intro post on how to do estimates of this type better. (Scott Alexander’s old posts here might be interesting.)
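As a sketch of the kind of method such an intro post might cover: one simple way to combine independent estimates is to average them in log-odds space rather than averaging the raw probabilities. The numbers below are purely illustrative, not estimates from any actual source.

```python
import math

def to_log_odds(p):
    """Convert a probability in (0, 1) to log-odds."""
    return math.log(p / (1 - p))

def from_log_odds(l):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-l))

def pool(estimates):
    """Pool independent probability estimates by averaging in log-odds space."""
    mean = sum(to_log_odds(p) for p in estimates) / len(estimates)
    return from_log_odds(mean)

# Three hypothetical independent estimates of the same risk.
pooled = pool([0.05, 0.10, 0.20])
```

Log-odds averaging is less sensitive to extreme inputs near 0 or 1 than a plain arithmetic mean, which is one reason it is often preferred for pooling forecasts.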
This kind of complexity tells me that we should talk more often of risk percentages in terms of the different scenarios they are associated with. E.g., the current trajectory that Ord is using, and also possibly better trajectories (if society acts more wisely) and possibly worse ones (if society makes major mistakes), and what the probabilities are under each.
We can’t entirely disentangle talking about future risks and possibilities from the different possible choices of society, since these choices are what shape the future. What we do affects these choices.
(Also, maybe you should edit the original post to include the quote you included here or parts of it.)
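The scenario framing above amounts to a simple expected-value calculation: an overall risk figure is the sum of each scenario’s probability times the risk conditional on that scenario. All numbers below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical scenario-conditional x-risk estimates (illustrative numbers only).
# Each entry: scenario name -> (probability of scenario, risk conditional on it).
scenarios = {
    "society acts more wisely": (0.2, 0.02),
    "current trajectory":       (0.6, 0.15),
    "major mistakes":           (0.2, 0.40),
}

# Overall risk is the probability-weighted sum over scenarios.
overall_risk = sum(p * r for p, r in scenarios.values())
```

One benefit of writing the estimate this way is that disagreements can be localized: two people can see whether they differ about the scenario probabilities or about the conditional risks.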
AI and X-risk Strategy Unconference at EA Hotel in November
Great idea and excellent work, thanks for doing this!
This gets me wondering what other kinds of data sources could be integrated (on some other platform, perhaps). And, I guess you could fairly easily run statistics to see big-picture differences between the data on the different sites.
Metaculus: Will quantum computing “supremacy” be achieved by 2025? [prediction closed on Jun 1, 2018.]
While I find it plausible that it will happen, I’m not personally convinced that quantum computers will be practically very useful, due to the difficulties in scaling them up.
Vision of Earth fellows Kyle Laskowski and Ben Harack had a poster session on this topic at EA Global San Francisco 2019: https://www.visionofearth.org/wp-content/uploads/2019/07/Vision-of-Earth-Asteroid-Manipulation-Poster.pdf
They were also working on a paper on the topic.
Thanks for your detailed comment, Max!
Relative to my own intuitions, I feel like you underestimate the extent to which your “spine” ideally would be a back-and-forth between its different levels.
I agree, the “spine” glosses over a lot of the important dynamics.
I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you’re planning to do strategy research going forward (or what methods you’d recommend to others).
Very good points. Both would indeed be highly valuable to the argument. As follow-up posts, I’m considering writing up (1) concrete projects in strategy research that seem valuable, and (2) a research agenda.
While I agree that we face substantial strategic uncertainty, I think I’m significantly less optimistic about the marginal tractability of strategy research than you seem to be.
Yeah, we’re more optimistic than you here. I don’t think it’s possible to do useful completely “tactics and data free” strategy research. But I do think there is highly valuable strategy research to do that can be grounded with a smaller amount of tactics and data gathering.
What tactics research and data gathering is key? I think this is a strategic question, and I think we’re currently just scratching the surface.

For example, while I tend to be excited about work that, say, immediately helps Open Phil to determine their funding allocation, I tend to be quite pessimistic about external researchers sitting at their desks and considering questions such as “how to best allocate resources between reducing various existential risks” in the abstract.
I agree that it could easily be a bad use of time for “external researchers” to do that. I’m somewhat optimistic about these researchers examining sub-questions that would inform how to do the allocation.
Very loosely, I expect marginal activities that effectively reduce strategic uncertainty to look more like executives debating their company’s strategy in a meeting rather than, say, Newton coming up with his theory of mechanics. I’m therefore reluctant to call them “research”.
I think the idea cluster of existential risk reduction was formed through something I’d call “research”. I think, in a certain way, we need more work of this type. But it also needs to be different in some important way in order to create new valuable knowledge. We hope to do work of this nature.
The long-term future is especially popular among EAs living in Oxford, which is not surprising given the Global Priorities Institute’s focus on longtermism.
Even more than that, the Future of Humanity Institute has been in Oxford since 2005!
I’m sympathetic to many of the points, but I’m somewhat puzzled by the framing that you chose in this letter.
Why AI risk might be solved without additional intervention from longtermists
This title sends me the message that longtermists should care less about AI risk.
That said, the people in the “conversations” all support AI safety research. And, in Rohin’s own words:
Overall, it feels like there’s around 90% chance that AI would not cause x-risk without additional intervention by longtermists.
10% chance of existential risk from AI sounds like a problem of catastrophic proportions to me. It implies that we need many more resources spent on existential risk reduction. Though perhaps not strictly on technical AI safety. Perhaps more marginal resources should be directed to strategy-oriented research instead.
Excellent points, Carl. (And Stefan’s as well.) We would love to see follow-up posts exploring nuances like these, and I’ve put them onto the Convergence list of topics worth elaborating on.
Sounds like you got some pretty great engagement out of this experiment! Great work! This exact kind of project, and the space of related ideas, seems well worth exploring further.
The five people that we decided to reject were given feedback about their translations as well as their motivation letters. We also provided them with two simple calls to action: (1) read our blog and join our newsletter, and (2) follow our FB page and attend our public events. To our awareness, none of these five people have so far taken these actions.
Semi-general comment regarding rejections: I think, overall, rejection is a sensitive matter. And if we do want rejected applicants (to stipends, jobs, projects, …) to try again or to maintain their interest in the specific project and in EA overall, we need to take a lot of care. I’m, for example, concerned that the difficulty of getting jobs at EA orgs, and the experience of being rejected from them, discourages many people from engaging more closely with EA. Perhaps just being sympathetic and encouraging enough will do a lot of good. Perhaps there’s more we could do.
Would you really call Jakub’s response “hostile”?