CEO of Convergence.
David_Kristoffersson
X-risk: yes. The idea of fast AI development: yes. Knowing the phrase “takeoff speed”? No. For sure, this also depends a bit on the type of role and seniority. “Moral patienthood” strikes me as one of those terms where if someone is interested in one of our jobs, they will likely get the idea, but they might not know the term “moral patienthood”. So let’s note here that I wrote “language”, and you wrote “concepts”, and these are not the same. One of the distinctions I care about is that people understand, or can easily come to understand the ideas/concepts. I care less what specific words they use.
Digressing slightly: note that using specific language is a marker of group belonging, and people seem to find pleasure in using in-group language because it signals group membership, even when standard terms for the concepts exist. Oxytocin fosters in-group bonding and, at the same time, exclusion of outsiders. Language can do some of the same.
So yes, it's important to me that people understand certain core concepts. But again, don't over-index on me. I should maybe have clarified the following better in my first comment: I've personally thought that EA/AI safety groups have done a bit too much in-group hiring, so I set out to figure out how to hire people more widely while retaining the same mission focus.
Speaking as a hiring manager at a small group in AI safety/governance who made an effort not to just hire insiders (it's possible I'm in a minority; don't take my view as gospel if you're looking for a job), it's not important to me that people know a lot of in-group language, people, or events around AI safety. It is very important to me that people agree with foundational ideas: that they are genuinely impact-focused, take short-ish AI timelines and AI risk seriously, and have thought about these issues carefully.
Thanks Holly. I agree that fixating on just trying to answer the "AI timelines" question won't be productive for most people, though we all need to come to terms with it somehow. I like your call for "timeline-robust interventions"; I think that's a very important point. I'm not sure it implies calling your representatives, though.
I disagree that "we know what we need to know". To me, the proper conversation about timelines isn't just "when AGI?", but rather "at what times will a number of things happen?", including various stages of post-AGI technology and AI's dynamics with the world as a whole. It incorporates questions like "what kinds of AIs will be present?". This allows us to make more prudent interventions: what technical AI safety and AI governance work you need depends on the nature of the AI that will be built. The important AI to address isn't just orthogonality-thesis-driven paperclip maximizers.
Seeing the way AI is emerging, I think it's clear that some classic AI safety challenges are not as relevant anymore. For example, it seems to me that "value learning" is looking much easier than classic AI safety advocates thought. But versions of many classic AI safety challenges are still relevant. The same core issue remains: if we can't verify that something vastly more intelligent than us is acting in our interests, then we are in peril.
I don't think it would be right for everyone to be occupied with such AI timeline and AI scenario questions, but I do think they deserve very strong efforts. If you are trying to solve a problem, the most important thing to get right is what problem you're trying to solve. And what is the problem of AI safety? That depends on what kind of AI will be present in the world and what humans will be doing with it.
Thank you, Zachary and team! I’m happy to see CEA take an ambitious stance. Your goals make perfect sense to me. The EA community is a very important one and your stewardship is needed.
I like this principles-first approach! I think it's really valuable to have a live discussion that starts from "How do we do the most good?", even if I am kind of all-in on one cause. (Kind of: I think most causes tie together in making the future turn out well.) I think it'd be a valuable use of your team's time to try to clarify and refine your approach, philosophy, and incentives further, using the comments here as one input.
I have this fresh in my mind as we've had some internal discussion on the topic at Convergence. My personal take is that "consciousness" is a bit of a trap subject because it bakes in a set of distinct complex questions, people talk about it differently, it's hard to peer inside the brain, and there's slight mystification because consciousness feels a bit magical from the inside. Sub-topics include, but are not limited to: 1. Higher-order thought. 2. Subjective experience. 3. Sensory integration. 4. Self-awareness. 5. Moral patienthood.
My recommendation is to try to talk in terms of these sub-topics as much as possible, rather than the fuzzy, differently understood, and massive concept of "consciousness".
Is contributing to this work useful/effective? I think it will be more useful if, when working in this domain (or these domains), one has specific goals (more in the direction of "understand self-awareness" or "understand moral patienthood" than "understand consciousness") and pursues them for specific purposes.
My personal take is that the current “direct AI risk reduction work” that has the highest value is AI strategy and AI governance. And hence, I would reckon that “consciousness”-work that has clear bearing on AI strategy and AI governance can be impactful.
BERI is doing an awesome service for university-affiliated groups; I hope more will take advantage of it!
Would you really call Jakub’s response “hostile”?
Thanks for posting this. I find it quite useful to get an overview of how the EA community is being managed and developed.
Happy to see the new institute take form! Thanks for doing this, Maxime and Konrad. International long-term governance appears very high-leverage to me. Good luck, and I'm looking forward to seeing more of your work!
Some “criticisms” are actually self-fulfilling prophecies
EAs are far too inclined to abandon high-EV ideas that are <50% likely to succeed
Over-relying on outside views over inside views.
Picking the wrong outside view / reference class, or not even considering the different reference classes on offer.
Strong upvote for these.
What I appreciate most about this post is simply the understanding it shows for people in this situation.
It's not easy. Everyone has their own struggles. Hang in there. Take some breaks. You can learn, you can try something slightly different, or something very different. Make sure you have a balanced life and somewhere to go. Make sure you have good plan Bs (for example, I can always go back to the software industry). In the for-profit and wider world, there are many skills you can learn better than you would working at an EA org.
Great idea and excellent work, thanks for doing this!
This gets me wondering what other kinds of data sources could be integrated (on some other platform, perhaps). And I guess you could fairly easily do statistics to see big-picture differences between the data on the different sites; a rough sketch of what I mean is below.
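To make the "do statistics across sites" idea slightly more concrete, here's a minimal sketch. The platforms, questions, probabilities, and column names are all made up for illustration; they don't reflect any real export format.

```python
import pandas as pd

# Hypothetical example data: a few questions appearing on two sites,
# with made-up community forecasts. Column names are assumptions.
data = pd.DataFrame({
    "platform": ["Metaculus", "Metaculus", "Forum", "Forum"],
    "question": ["AGI by 2030?", "QC supremacy by 2025?",
                 "AGI by 2030?", "QC supremacy by 2025?"],
    "probability": [0.35, 0.60, 0.45, 0.50],
})

# Big-picture comparison: average forecast and question count per platform.
summary = data.groupby("platform")["probability"].agg(["mean", "count"])
print(summary)

# Question-by-question comparison across the two sites.
print(data.pivot(index="question", columns="platform", values="probability"))
```

Something this simple would already show whether one site's forecasts run systematically higher or lower than another's on overlapping questions.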
Thanks Linch; I actually missed that the prediction had closed!
Metaculus: Will quantum computing “supremacy” be achieved by 2025? [prediction closed on Jun 1, 2018.]
While I find it plausible that it will happen, I'm not personally convinced that quantum computers will be practically very useful, due to the difficulties in scaling them up.
Excellent points, Carl. (And Stefan's as well.) We would love to see follow-up posts exploring nuances like these, and I've put them into the Convergence list of topics worth elaborating on.
Sounds like you got some pretty great engagement out of this experiment! Great work! This exact kind of project, and the space of related ideas, seems well worth exploring further.
The five people that we decided to reject were given feedback on both their translations and their motivation letters. We also gave them two simple calls to action: (1) read our blog and join our newsletter, and (2) follow our FB page and attend our public events. To our knowledge, none of these five people has so far done either.
Semi-general comment regarding rejections: I think rejection is, overall, a sensitive matter. And if we want rejected applicants (to stipends, jobs, projects, …) to keep trying or to maintain their interest in the specific project and in EA overall, we need to take a lot of care. I'm concerned, for example, that the difficulty of getting jobs at EA orgs, and the experience of being rejected from them, discourages many people from engaging more closely with EA. Perhaps just being sympathetic and encouraging enough will do a lot of good. Perhaps there's more we could do.
Variant of Korthon’s comment:
I never look at the "forum favorites" section. It seems like it's looked the same forever, and it takes up a lot of screen real estate without being of any use to me!
Vision of Earth fellows Kyle Laskowski and Ben Harack had a poster session on this topic at EA Global San Francisco 2019: https://www.visionofearth.org/wp-content/uploads/2019/07/Vision-of-Earth-Asteroid-Manipulation-Poster.pdf
They were also working on a paper on the topic.
Making the writing easy for myself: what's your response to Carl Shulman's take, namely that pushing for a pause too early might spoil the chance of getting people to agree to a pause when it would matter most, at pivotal points where AI improvement is happening tremendously quickly? (See Carl Shulman on the 80k podcast.)
You may have responded to this before. Feel free to provide a link.
This page is giving me a 404 right now: https://pauseai.info/mitigating-pause-failures