HPS FOR AI SAFETY
Eleni_A · Jul 13, 2022, 4:37 PM

A collection of AI safety posts from the history and philosophy of science (HPS) point of view.

- An Epistemological Account of Intuitions in Science (Eleni_A, Sep 3, 2022, 11:21 PM) · 5 points · 0 comments · 17 min read · EA link
- Alignment is hard. Communicating that, might be harder (Eleni_A, Sep 1, 2022, 11:45 AM) · 17 points · 1 comment · 3 min read · EA link
- “Normal accidents” and AI systems (Eleni_A, Aug 8, 2022, 6:43 PM) · 5 points · 1 comment · 1 min read · EA link (www.achan.ca)
- It’s (not) how you use it (Eleni_A, Sep 7, 2022, 1:28 PM) · 6 points · 3 comments · 2 min read · EA link
- Alignment’s phlogiston (Eleni_A, Aug 18, 2022, 1:41 AM) · 18 points · 1 comment · 2 min read · EA link
- Who ordered alignment’s apple? (Eleni_A, Aug 28, 2022, 2:24 PM) · 5 points · 0 comments · 3 min read · EA link
- There is no royal road to alignment (Eleni_A, Sep 17, 2022, 1:24 PM) · 18 points · 2 comments · 3 min read · EA link
- Against the weirdness heuristic (Eleni_A, Oct 5, 2022, 2:13 PM) · 5 points · 0 comments · 2 min read · EA link
- Cognitive science and failed AI forecasts (Eleni_A, Nov 18, 2022, 2:25 PM) · 13 points · 0 comments · 2 min read · EA link
- Emerging Paradigms: The Case of Artificial Intelligence Safety (Eleni_A, Jan 18, 2023, 5:59 AM) · 16 points · 0 comments · 19 min read · EA link