HPS FOR AI SAFETY
Eleni_A, 13 Jul 2022

A collection of AI safety posts from the history and philosophy of science (HPS) point of view.

An Epistemological Account of Intuitions in Science (Eleni_A, 3 Sep 2022, 5 points, 0 comments, 17 min read, EA link)
Alignment is hard. Communicating that, might be harder (Eleni_A, 1 Sep 2022, 17 points, 1 comment, 3 min read, EA link)
“Normal accidents” and AI systems (www.achan.ca) (Eleni_A, 8 Aug 2022, 5 points, 1 comment, 1 min read, EA link)
It’s (not) how you use it (Eleni_A, 7 Sep 2022, 6 points, 3 comments, 2 min read, EA link)
Alignment’s phlogiston (Eleni_A, 18 Aug 2022, 18 points, 1 comment, 2 min read, EA link)
Who ordered alignment’s apple? (Eleni_A, 28 Aug 2022, 5 points, 0 comments, 3 min read, EA link)
There is no royal road to alignment (Eleni_A, 17 Sep 2022, 18 points, 2 comments, 3 min read, EA link)
Against the weirdness heuristic (Eleni_A, 5 Oct 2022, 5 points, 0 comments, 2 min read, EA link)
Cognitive science and failed AI forecasts (Eleni_A, 18 Nov 2022, 13 points, 0 comments, 2 min read, EA link)
Emerging Paradigms: The Case of Artificial Intelligence Safety (Eleni_A, 18 Jan 2023, 16 points, 0 comments, 19 min read, EA link)