Explore Risks from Emerging Technology with Peers Outside of (or New to) the AI Alignment Community—Express Interest by August 8
Summary
We are organizing a small, virtual discussion group to explore how emerging technology might shape the future. This group is aimed especially at graduate students and early-career professionals. We hope to include people with a wide range of backgrounds—especially those with some depth of knowledge or experience in a field other than AI.
To express interest in joining the group, please fill out this form by August 8; we expect it will take less than 5 minutes to complete.
Background
We are two graduate students in the U.S., one pursuing a PhD in computer science and the other studying law. After spending several months engaging off and on with writing by members of the EA community about risks related to emerging technology, we want to try something a bit different.
Instead of continuing to read independently, we hope to convene a small reading and discussion group to explore these risks together. Our goal is to approach these problems from first principles, while trying not to anchor too much on the existing discourse around AI alignment and progress (though we do expect AI to be a significant part of our discussions).
We’re particularly excited about exploring a broader range of risks than a “sharp left turn” in AI capabilities and a broader range of solutions than technical AI alignment. If we do converge back on those topics, we would like to do so after having considered many alternatives.
A primary goal of these conversations is to help ourselves and other participants form relatively independent “inside views” about the topics we will be discussing. The motivation for this project is similar to that of “minimal-trust investigations” (though we will not be investing the amount of time necessary for true minimal-trust investigations of our discussion topics).
We are interested in exploring questions like the following:
What are some plausible scenarios for humanity’s technical capabilities in the next year? 10 years? 100 years?
How might those capabilities reshape the way people live and work?
What outcomes can we be confident will eventually occur, barring some preempting event?
What challenges remain, even assuming that humans retain complete control over powerful new technologies?
What are some potential stable, positive outcomes? Stable, negative outcomes?
Scope
Our initial guess is that we will hold 6 to 8 virtual sessions over 2 to 3 months, with each session lasting 1 to 2 hours. We hope participants will spend 1 to 2 hours reading and reflecting before each discussion. We plan to provide some suggested reading material, but we anticipate that much of the content will come from participants’ suggestions.
We plan to revisit the scope of the discussion series during the first session, where we will review and revise the intended discussion topics.
Target Audience
We are looking for participants who are interested in thinking deeply about the risks associated with emerging technology but who have spent between 0 and ~1,000 hours (about 6 months of full-time work) actively working on the topics we will be discussing.
We would be particularly excited to receive expressions of interest from graduate students and early-career professionals. We expect that many, but not all, members of the group will have one of those backgrounds.
To improve our odds of forming original insights and perspectives together, we hope to include people with a breadth of subject-matter backgrounds and experiences. We would be excited to receive expressions of interest from people with backgrounds in technical fields, law and policy, social science (e.g. institutional design), or other backgrounds entirely (e.g. visual art).
We particularly hope to include people with backgrounds that are different from those best-represented in the existing discourse on AI alignment.
Unfortunately, we expect that we won’t be able to include everyone who expresses interest in this first iteration of the group. If you opt in on the interest form, however, we would be happy to connect you with others who have expressed interest.
Request for Feedback and Suggestions
We’d welcome any suggested reading material on any of the topics listed above, as well as any advice about facilitating an effective virtual discussion group. Please leave a comment below or feel free to send a direct message.
Interest Form
To express interest in joining the group, please fill out this form by August 8; we expect it will take less than 5 minutes to complete.