Matthew_Barnett

Karma: 3,302

Why I think it’s important to work on AI forecasting

Matthew_Barnett · 27 Feb 2023 21:24 UTC
179 points
10 comments · 10 min read · EA link

Concerning the Recent 2019-Novel Coronavirus Outbreak

Matthew_Barnett · 27 Jan 2020 5:47 UTC
144 points
142 comments · 3 min read · EA link

[Question] What is the current most representative EA AI x-risk argument?

Matthew_Barnett · 15 Dec 2023 22:04 UTC
116 points
48 comments · 3 min read · EA link

My thoughts on the social response to AI risk

Matthew_Barnett · 1 Nov 2023 21:27 UTC
116 points
17 comments · 1 min read · EA link

AI alignment shouldn’t be conflated with AI moral achievement

Matthew_Barnett · 30 Dec 2023 3:08 UTC
110 points
15 comments · 5 min read · EA link

A compute-based framework for thinking about the future of AI

Matthew_Barnett · 31 May 2023 22:00 UTC
96 points
36 comments · 19 min read · EA link

The possibility of an indefinite AI pause

Matthew_Barnett · 19 Sep 2023 12:28 UTC
90 points
73 comments · 15 min read · EA link

Slightly against aligning with neo-luddites

Matthew_Barnett · 26 Dec 2022 23:27 UTC
71 points
17 comments · 4 min read · EA link

AI values will be shaped by a variety of forces, not just the values of AI developers

Matthew_Barnett · 11 Jan 2024 0:48 UTC
69 points
3 comments · 3 min read · EA link

A proposal for a small inducement prize platform

Matthew_Barnett · 5 Jun 2021 19:06 UTC
66 points
10 comments · 3 min read · EA link

Preventing a US-China war as a policy priority

Matthew_Barnett · 22 Jun 2022 18:07 UTC
64 points
22 comments · 8 min read · EA link

Effects of anti-aging research on the long-term future

Matthew_Barnett · 27 Feb 2020 22:42 UTC
61 points
33 comments · 4 min read · EA link

Updating Drexler’s CAIS model

Matthew_Barnett · 17 Jun 2023 1:57 UTC
59 points
0 comments · 1 min read · EA link

Analyzing the moral value of unaligned AIs

Matthew_Barnett · 8 Apr 2024 16:06 UTC
59 points
36 comments · 19 min read · EA link

My current thoughts on the risks from SETI

Matthew_Barnett · 15 Mar 2022 17:17 UTC
47 points
9 comments · 10 min read · EA link