A few quick ideas:
1. On the methods side, I find the potential use of LLMs/AI as research participants in psychology studies interesting (not necessarily related to safety). This may sound ridiculous at first, but the studies themselves are genuinely interesting (see the sketch after this list for what the basic setup can look like). From my post on studying AI-nuclear integration with methods from psychology:
[Using] LLMs as participants in a survey experiment, something that is seeing growing interest in the social sciences (see Manning, Zhu, & Horton, 2024; Argyle et al., 2023; Dillion et al., 2023; Grossmann et al., 2023).
2. You may be interested in, or get good ideas from, the Large Language Model Psychology research agenda (safety-focused). I haven't gone into it in depth, so this is not an endorsement.
3. Then there are comparative analyses of human and LLM behavior. E.g., the Human vs. Machine paper (Lamparth, 2024) compares human and LLM decision-making in a wargame. I do something similar with a nuclear decision-making simulation, but it's not in paper/preprint form yet.
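To make item 1 concrete, here is a minimal sketch of what an "LLM as survey participant" setup can look like. It assumes the `openai` Python client (v1+) with an API key in the environment; the persona, survey item, and model name are all illustrative placeholders, not taken from the cited papers:

```python
# Minimal sketch: presenting a survey item to an LLM "participant".
# Assumes the openai Python client (>=1.0) and OPENAI_API_KEY in the
# environment. Persona, item, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

PERSONA = "You are a 34-year-old survey respondent. Answer in character."
ITEM = (
    "On a scale of 1 (strongly disagree) to 7 (strongly agree), how much "
    "do you agree with: 'New technologies generally make life better.'? "
    "Reply with the number only."
)

def ask_participants(persona: str, item: str, n: int = 5) -> list[str]:
    """Collect n independent responses, treating each sample as one 'participant'."""
    responses = []
    for _ in range(n):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": item},
            ],
            # Sampling variation stands in for between-subject variance.
            temperature=1.0,
        )
        responses.append(reply.choices[0].message.content.strip())
    return responses

if __name__ == "__main__":
    print(ask_participants(PERSONA, ITEM))
```

Each independent sample is treated as one simulated respondent; roughly speaking, varying the persona across samples is how this line of work builds simulated respondent pools to compare against human survey data.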
Helpful suggestions, thank you! Will check them out.