AI Safety Audience Dialog Initiative: Call for Alpha Testers
AISADI is a prospective online program that aims to teach AI Safety workers discussion techniques for handling disagreement effectively.
The program is currently at an alpha stage and needs testers, both to measure the program's length (estimated at 90 minutes) and to determine whether its effects are significant. The test consists of following a presentation and completing exercises on various conversational methods. If you are interested, please consider emailing camille.berger@psl.eu. Your help will be greatly appreciated!
FAQ:
1- What is AISADI, exactly?
AISADI aims to teach conversational methods that improve epistemic rationality and rapport, drawing on approaches such as Street Epistemology, Deep Canvassing, Cooling Conversations, and Principled Negotiation. This teaching will be delivered through a Deliberate Practice framework, with timely, feedback-rich exercises.
2- How developed is the program?
The program consists of an introduction to the general phases of an effective dialog, together with fast-feedback exercises. By the beta stage, the exercises will have been selected for their effectiveness and discussed with scientific experts on each of the techniques.
3- Is it manipulative?
No. The program will eventually be open to all sides of the AI Safety debate; its goal is to maximize epistemic rationality for the duration of the discussion, on the topic of the discussion. I believe this requires good handling of rapport, which in turn requires that the techniques not be manipulative.
4- Why do you want to do this?
With AI Safety becoming mainstream, I believe that the skills needed to hold rational discussions with non-rationalist, non-EA people will soon be required of a very wide proportion of AI Safety workers (rather than just a few communicators), and that the community currently lacks those skills.
5- Why “potential”?
The program is subject to funding and will be evaluated empirically. If the empirical results are not convincing, or if funders identify a core issue with the program, it will be abandoned.
6- Is this mass outreach?
No. The program is aimed at AI Safety workers and teaches them to respond appropriately to live, in-person criticism (as opposed to media coverage); it is not an outreach effort to raise public awareness of AI Safety.