
Greg_Colbourn

Karma: 5,520

Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)

Frontier AI systems have surpassed the self-replicating red line

Greg_Colbourn · 10 Dec 2024 16:33 UTC
30 points
14 comments · 1 min read · EA link
(github.com)

“Near Midnight in Suicide City”

Greg_Colbourn · 6 Dec 2024 19:54 UTC
5 points
0 comments · 1 min read · EA link
(www.youtube.com)

OpenAI’s o1 tried to avoid being shut down, and lied about it, in evals

Greg_Colbourn · 6 Dec 2024 15:25 UTC
23 points
9 comments · 1 min read · EA link
(www.transformernews.ai)

Applications open: Support for talent working on independent learning, research or entrepreneurial projects focused on reducing global catastrophic risks

CEEALAR · 9 Feb 2024 13:04 UTC
63 points
1 comment · 2 min read · EA link

Funding circle aimed at slowing down AI—looking for participants

Greg_Colbourn · 25 Jan 2024 23:58 UTC
92 points
3 comments · 2 min read · EA link

Job Opportunity: Operations Manager at CEEALAR

Beth Anderson · 21 Dec 2023 14:24 UTC
13 points
1 comment · 2 min read · EA link

Giving away copies of Uncontrollable by Darren McKee

Greg_Colbourn · 14 Dec 2023 17:00 UTC
39 points
2 comments · 1 min read · EA link

Timelines are short, p(doom) is high: a global stop to frontier AI development until x-safety consensus is our only reasonable hope

Greg_Colbourn · 12 Oct 2023 11:24 UTC
73 points
85 comments · 9 min read · EA link

Volunteering Opportunity: Trustee at CEEALAR

Beth Anderson · 5 Oct 2023 14:55 UTC
16 points
0 comments · 3 min read · EA link

Apply to CEEALAR to do AGI moratorium work

Greg_Colbourn · 26 Jul 2023 21:24 UTC
62 points
0 comments · 1 min read · EA link

Thoughts on yesterday’s UN Security Council meeting on AI

Greg_Colbourn · 19 Jul 2023 16:46 UTC
31 points
2 comments · 1 min read · EA link

UN Secretary-General recognises existential threat from AI

Greg_Colbourn · 15 Jun 2023 17:03 UTC
58 points
1 comment · 1 min read · EA link

Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!

Dawn Drescher · 17 May 2023 16:51 UTC
58 points
2 comments · 2 min read · EA link
(impactmarkets.substack.com)

P(doom|AGI) is high: why the default outcome of AGI is doom

Greg_Colbourn · 2 May 2023 10:40 UTC
13 points
28 comments · 3 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn · 2 May 2023 10:17 UTC
68 points
35 comments · 13 min read · EA link

[Question] If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?

Greg_Colbourn · 21 Apr 2023 11:15 UTC
62 points
55 comments · 1 min read · EA link

Merger of DeepMind and Google Brain

Greg_Colbourn · 20 Apr 2023 20:16 UTC
11 points
12 comments · 1 min read · EA link
(blog.google)

Recruit the World’s best for AGI Alignment

Greg_Colbourn · 30 Mar 2023 16:41 UTC
34 points
8 comments · 22 min read · EA link

Adam Cochran on the FTX meltdown

Greg_Colbourn · 17 Nov 2022 11:54 UTC
15 points
7 comments · 1 min read · EA link
(twitter.com)

Why didn’t the FTX Foundation secure its bag?

Greg_Colbourn · 15 Nov 2022 19:54 UTC
57 points
34 comments · 2 min read · EA link