
Greg_Colbourn ⏸️

Karma: 5,964

Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org).

AI Risk timelines: 10% chance (by year X) should be the headline (and deadline), not 50%. And 10% is _this year_!

Greg_Colbourn ⏸️ 5 Jan 2026 11:57 UTC
16 points
19 comments · 1 min read · EA link

Which side of the AI safety community are you in?

Greg_Colbourn ⏸️ 23 Oct 2025 14:23 UTC
12 points
2 comments · 2 min read · EA link
(www.lesswrong.com)

Pause House, Blackpool

Greg_Colbourn ⏸️ 13 Oct 2025 11:36 UTC
84 points
0 comments · 1 min read · EA link
(gregcolbourn.substack.com)

How We Might All Die in A Year

Greg_Colbourn ⏸️ 28 Mar 2025 13:31 UTC
14 points
6 comments · 21 min read · EA link
(x.com)

Frontier AI systems have surpassed the self-replicating red line

Greg_Colbourn ⏸️ 10 Dec 2024 16:33 UTC
25 points
14 comments · 1 min read · EA link
(github.com)

“Near Midnight in Suicide City”

Greg_Colbourn ⏸️ 6 Dec 2024 19:54 UTC
5 points
0 comments · 1 min read · EA link
(www.youtube.com)

OpenAI’s o1 tried to avoid being shut down, and lied about it, in evals

Greg_Colbourn ⏸️ 6 Dec 2024 15:25 UTC
23 points
9 comments · 1 min read · EA link
(www.transformernews.ai)

Applications open: Support for talent working on independent learning, research or entrepreneurial projects focused on reducing global catastrophic risks

CEEALAR 9 Feb 2024 13:04 UTC
63 points
1 comment · 2 min read · EA link

Funding circle aimed at slowing down AI—looking for participants

Greg_Colbourn ⏸️ 25 Jan 2024 23:58 UTC
93 points
3 comments · 2 min read · EA link

Job Opportunity: Operations Manager at CEEALAR

Beth Anderson 21 Dec 2023 14:24 UTC
13 points
1 comment · 2 min read · EA link

Giving away copies of Uncontrollable by Darren McKee

Greg_Colbourn ⏸️ 14 Dec 2023 17:00 UTC
40 points
2 comments · 1 min read · EA link

Timelines are short, p(doom) is high: a global stop to frontier AI development until x-safety consensus is our only reasonable hope

Greg_Colbourn ⏸️ 12 Oct 2023 11:24 UTC
78 points
83 comments · 9 min read · EA link

Volunteering Opportunity: Trustee at CEEALAR

Beth Anderson 5 Oct 2023 14:55 UTC
16 points
0 comments · 3 min read · EA link

Apply to CEEALAR to do AGI moratorium work

Greg_Colbourn ⏸️ 26 Jul 2023 21:24 UTC
62 points
0 comments · 1 min read · EA link

Thoughts on yesterday’s UN Security Council meeting on AI

Greg_Colbourn ⏸️ 19 Jul 2023 16:46 UTC
31 points
2 comments · 1 min read · EA link

UN Secretary-General recognises existential threat from AI

Greg_Colbourn ⏸️ 15 Jun 2023 17:03 UTC
58 points
1 comment · 1 min read · EA link

Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!

Dawn Drescher 17 May 2023 16:51 UTC
58 points
2 comments · 2 min read · EA link
(impactmarkets.substack.com)

P(doom|AGI) is high: why the default outcome of AGI is doom

Greg_Colbourn ⏸️ 2 May 2023 10:40 UTC
15 points
28 comments · 3 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn ⏸️ 2 May 2023 10:17 UTC
70 points
35 comments · 13 min read · EA link

[Question] If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?

Greg_Colbourn ⏸️ 21 Apr 2023 11:15 UTC
65 points
55 comments · 1 min read · EA link