Thanks for this! I liked it and found it helpful for understanding the key arguments for AI risk.
It also felt more engaging than other presentations of those arguments because it is interactive and comparative.
I think the user experience could be improved a little, but it's probably not worth making those improvements until you have more users.
One change you could make now is to mention the number of people who have completed the tool (maybe on the first page) and to change the outputs on the conclusion page to percentages.
How do you imagine using this tool in the future? What are some user stories (e.g., person X wants to do Y, so they use this)?
Here are some quick (possibly bad) ideas I have for potential uses (ideally after more testing):
- As something that advocates like Robert Miles can refer relevant people to.
- As part of a longitudinal study in which a panel of, say, 100 randomly selected AI safety researchers completes the tool annually, and you report on changes in their responses over time.
- Using a similar approach/structure, with new sections and arguments, to assess levels of agreement and disagreement with different AI safety research agendas within the AI safety community, and to identify the cruxes.
- As a program that new AI safety researchers, engineers, and movement builders complete to understand the relevant arguments and counterarguments.
I also like the idea of people making something like this for other cause areas and appreciate the effort invested to make that easy to do.