[Question] Please Share Your Perspectives on the Degree of Societal Impact from Transformative AI Outcomes
This is a companion to my research on high-level machine intelligence (HLMI) risk, for which I recently posted a request for help with likelihood values; the second half of the ranking asks for subjective judgments on the overall impact on international stability and security. My collection window for the project is closing soon, so any contribution to either survey would be greatly appreciated.
Please share your perspectives on whether each listed AI scenario condition, if it were to occur, would greatly increase, greatly decrease, or have no effect on societal stability and security.
This form lists several potential paths to high-level machine intelligence (HLMI). Each question presents a dimension (e.g., takeoff speed) with three or four conditions (e.g., fast takeoff) listed on the left, and asks the participant to:
Please rank the degree to which each condition could impact societal stability and security (from greatly increase to greatly decrease) over the long term. For conditions (e.g., technologies) that you don’t believe would cause an increase or a decrease, choose the option that best reflects your view or leave it as “no effect.”
Impact Survey
The survey is more of a ranking exercise than a questionnaire, and if the topic is familiar to you, the detailed write-ups are likely unnecessary. The goal is to classify the degree of impact we could expect from each condition (e.g., fast takeoff, deep learning scaling to HLMI, concentrated control of HLMI).
I’d appreciate any help you can provide! These ratings are subjective, and some conditions will likely have no effect at all, but the values will be very helpful for categorizing each dimension by its degree of overall risk to civilization.
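As a rough illustration of how these ratings might later be aggregated per dimension, here is a minimal sketch in Python. The rating labels, numeric mapping, and sample responses below are hypothetical placeholders, not the study's actual coding scheme.

```python
# Minimal sketch: convert Likert-style impact ratings to numeric scores
# and average them per condition. All labels, weights, and sample data
# are illustrative assumptions, not the study's actual scheme.
from statistics import mean

SCALE = {
    "greatly decrease": -2,
    "decrease": -1,
    "no effect": 0,
    "increase": 1,
    "greatly increase": 2,
}

# Hypothetical responses: dimension -> condition -> participant ratings
responses = {
    "takeoff speed": {
        "fast": ["greatly increase", "increase", "greatly increase"],
        "moderate": ["increase", "no effect"],
        "slow": ["no effect", "decrease"],
    },
}

for dimension, conditions in responses.items():
    for condition, ratings in conditions.items():
        score = mean(SCALE[r] for r in ratings)
        print(f"{dimension} / {condition}: mean impact = {score:+.2f}")
```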
This project aims to develop a futures-modeling framework for advanced AI scenario development: covering the full spectrum of AI development paths, identifying interesting combinations or, ideally, entirely new AI scenarios, and highlighting risks and paths that receive less consideration (e.g., structural risks, decision/value erosion, global failure cascades).
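For a sense of what covering the full spectrum means in practice, the sketch below enumerates every combination of conditions across dimensions (a morphological-analysis-style cross product). The dimensions and conditions shown are illustrative examples drawn loosely from this post, not the study's final set.

```python
# Illustrative sketch: enumerate all scenario combinations across
# dimensions via a cross product. Dimensions and conditions are
# examples, not the study's actual framework.
from itertools import product

dimensions = {
    "takeoff speed": ["fast", "moderate", "slow"],
    "path to HLMI": ["deep learning scaling", "new paradigm"],
    "control of HLMI": ["concentrated", "distributed"],
}

names = list(dimensions)
for combo in product(*dimensions.values()):
    scenario = dict(zip(names, combo))
    print(scenario)  # each dict is one candidate future scenario
```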
For further details on the methodology, purpose, and overall study, please see the original post here.
Thank you, I really appreciate any help you can provide.