Not a comprehensive answer, but a few ideas. I don’t know of any existing documentation or organisation dedicated to this.
I think talking to people currently heavily involved in funding x-risk mitigation efforts is a good start. People with a proven track record of taking x-risks seriously are more likely to consider the relevant concerns carefully and to help by moving the discussion forward and coming up with meaningful mitigation strategies. For example, you could email Nick Bostrom or someone at Open Philanthropy. I’ve heard Kevin Esvelt is someone with a track record of taking info-hazards seriously too.
Maybe don’t go directly to the most critical people in existing efforts. It may be better to qualify your ideas first by talking to other experts (whom you trust) in whichever domain is most relevant to those risks (although of course you’d want to avoid losing control of the narrative, such as by someone you tell overzealously raising the alarm and damaging your credibility).
There’s probably a lot of risk-specific reasoning required, depending on the relevant risk (for example, if it’s tied up with specific economic activity the way AI capabilities development is).
I endorse the suggestion to talk to someone senior at Open Phil. EA doesn’t have a centralized decision-maker, but Open Phil might be the closest thing: a generally trusted group that is used to handling these issues.
Ok, and any advice for reaching out to trusted-but-less-prestigious experts? It seems unlikely that reaching out to e.g. Kevin Esvelt will generate a response!
I think someone like Esvelt (and also Greg, who personally answered in the affirmative) will probably respond. Even if they are too busy to do a call, they’ll know the appropriate junior-level people to triage things to.
To build on Linch’s response here: I work on the biosecurity & pandemic preparedness team at Open Philanthropy. Info hazard disclosure questions are often gnarly. I’m very happy to help troubleshoot these sorts of issues, including both general questions and more specific concerns. The best way to contact me, anonymously or non-anonymously, is through this short form. (Alternatively, you could reach my colleague Andrew Snyder-Beattie here.) Importantly, if you’re reaching out, please do not include potentially sensitive details of info hazards in form submissions – if necessary, we can arrange more secure means of follow-up communication, anonymous or otherwise (e.g., a phone call).