HPS FOR AI SAFETY

A collection of AI safety posts written from a history and philosophy of science (HPS) perspective.

An Epistemological Account of Intuitions in Science

Alignment is hard. Communicating that, might be harder

“Normal accidents” and AI systems

It’s (not) how you use it

Alignment’s phlogiston

Who ordered alignment’s apple?

There is no royal road to alignment

Against the weirdness heuristic

Cognitive science and failed AI forecasts

Emerging Paradigms: The Case of Artificial Intelligence Safety