
Ihor Ivliev

Karma: 1

Thinking to think thinkability thinkier if/​when/​… thinkable

A Doctrine of Strategic Persistence: A Diagnostic and Operational Framework for Navigating Systemic Risk

Ihor Ivliev · 31 Jul 2025 15:05 UTC
1 point
0 comments · 58 min read · EA link

A Cognitive Instrument on the Terminal Contest

Ihor Ivliev · 23 Jul 2025 23:30 UTC
0 points
1 comment · 8 min read · EA link

The Operator’s Gamble: A Pivot to Material Consequence in AI Safety

Ihor Ivliev · 21 Jul 2025 19:33 UTC
−1 points
0 comments · 4 min read · EA link

An Executive Briefing on the Architecture of a Systemic Crisis

Ihor Ivliev · 10 Jul 2025 0:46 UTC
0 points
0 comments · 4 min read · EA link

The Engine of Foreclosure

Ihor Ivliev · 5 Jul 2025 15:26 UTC
0 points
0 comments · 25 min read · EA link

Wetware’s Default: A Diagnosis of Systemic Myopia under AI-Driven Autonomy

Ihor Ivliev · 3 Jul 2025 23:21 UTC
1 point
0 comments · 7 min read · EA link

Fundamental Risk

Ihor Ivliev · 26 Jun 2025 0:25 UTC
−5 points
0 comments · 1 min read · EA link

The Verification Gap: A Scientific Warning on the Limits of AI Safety

Ihor Ivliev · 24 Jun 2025 19:08 UTC
3 points
0 comments · 2 min read · EA link

An Analysis of Systemic Risk and Architectural Requirements for the Containment of Recursively Self-Improving AI

Ihor Ivliev · 17 Jun 2025 0:16 UTC
2 points
5 comments · 4 min read · EA link

AI Self-Modification Amplifies Risks

Ihor Ivliev · 3 Jun 2025 20:27 UTC
0 points
0 comments · 2 min read · EA link

Eighteen Open Research Questions for Governing Advanced AI Systems

Ihor Ivliev · 3 May 2025 19:00 UTC
2 points
0 comments · 6 min read · EA link

Architecting Trust: A Conceptual Blueprint for Verifiable AI Governance

Ihor Ivliev · 31 Mar 2025 18:48 UTC
3 points
0 comments · 8 min read · EA link

How to build AI you can actually Trust—Like a Medical Team, Not a Black Box

Ihor Ivliev · 22 Mar 2025 21:27 UTC
2 points
1 comment · 4 min read · EA link