Greg_Colbourn ⏸️

Karma: 5,672

Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)

How We Might All Die in A Year

Greg_Colbourn ⏸️ Mar 28, 2025, 1:31 PM
6 points
2 comments · 1 min read · EA link
(x.com)

Frontier AI systems have surpassed the self-replicating red line

Greg_Colbourn ⏸️ Dec 10, 2024, 4:33 PM
25 points
14 comments · 1 min read · EA link
(github.com)

“Near Midnight in Suicide City”

Greg_Colbourn ⏸️ Dec 6, 2024, 7:54 PM
5 points
0 comments · 1 min read · EA link
(www.youtube.com)

OpenAI’s o1 tried to avoid being shut down, and lied about it, in evals

Greg_Colbourn ⏸️ Dec 6, 2024, 3:25 PM
23 points
9 comments · 1 min read · EA link
(www.transformernews.ai)

Applications open: Support for talent working on independent learning, research or entrepreneurial projects focused on reducing global catastrophic risks

CEEALAR · Feb 9, 2024, 1:04 PM
63 points
1 comment · 2 min read · EA link

Funding circle aimed at slowing down AI—looking for participants

Greg_Colbourn ⏸️ Jan 25, 2024, 11:58 PM
92 points
3 comments · 2 min read · EA link

Job Opportunity: Operations Manager at CEEALAR

Beth Anderson · Dec 21, 2023, 2:24 PM
13 points
1 comment · 2 min read · EA link

Giving away copies of Uncontrollable by Darren McKee

Greg_Colbourn ⏸️ Dec 14, 2023, 5:00 PM
39 points
2 comments · 1 min read · EA link

Timelines are short, p(doom) is high: a global stop to frontier AI development until x-safety consensus is our only reasonable hope

Greg_Colbourn ⏸️ Oct 12, 2023, 11:24 AM
73 points
85 comments · 9 min read · EA link

Volunteering Opportunity: Trustee at CEEALAR

Beth Anderson · Oct 5, 2023, 2:55 PM
16 points
0 comments · 3 min read · EA link

Apply to CEEALAR to do AGI moratorium work

Greg_Colbourn ⏸️ Jul 26, 2023, 9:24 PM
62 points
0 comments · 1 min read · EA link

Thoughts on yesterday’s UN Security Council meeting on AI

Greg_Colbourn ⏸️ Jul 19, 2023, 4:46 PM
31 points
2 comments · 1 min read · EA link

UN Secretary-General recognises existential threat from AI

Greg_Colbourn ⏸️ Jun 15, 2023, 5:03 PM
58 points
1 comment · 1 min read · EA link

Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!

Dawn Drescher · May 17, 2023, 4:51 PM
58 points
2 comments · 2 min read · EA link
(impactmarkets.substack.com)

P(doom|AGI) is high: why the default outcome of AGI is doom

Greg_Colbourn ⏸️ May 2, 2023, 10:40 AM
13 points
28 comments · 3 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn ⏸️ May 2, 2023, 10:17 AM
68 points
35 comments · 13 min read · EA link

[Question] If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?

Greg_Colbourn ⏸️ Apr 21, 2023, 11:15 AM
62 points
55 comments · 1 min read · EA link

Merger of DeepMind and Google Brain

Greg_Colbourn ⏸️ Apr 20, 2023, 8:16 PM
11 points
12 comments · 1 min read · EA link
(blog.google)

Recruit the World’s best for AGI Alignment

Greg_Colbourn ⏸️ Mar 30, 2023, 4:41 PM
34 points
8 comments · 22 min read · EA link

Adam Cochran on the FTX meltdown

Greg_Colbourn ⏸️ Nov 17, 2022, 11:54 AM
15 points
7 comments · 1 min read · EA link
(twitter.com)