
richard_ngo

Karma: 7,503

Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com

Defining alignment research

richard_ngo · 19 Aug 2024 22:49 UTC
48 points
1 comment · 1 min read · EA link

Twitter thread on open-source AI

richard_ngo · 31 Jul 2024 0:30 UTC
32 points
0 comments · 1 min read · EA link
(x.com)

Twitter thread on AI safety evals

richard_ngo · 31 Jul 2024 0:29 UTC
38 points
2 comments · 1 min read · EA link
(x.com)

Towards more cooperative AI safety strategies

richard_ngo · 16 Jul 2024 4:36 UTC
62 points
5 comments · 1 min read · EA link

You must not fool yourself, and you are the easiest person to fool

richard_ngo · 8 Jul 2023 14:05 UTC
25 points
0 comments · 1 min read · EA link

Agency begets agency

richard_ngo · 6 Jul 2023 13:09 UTC
28 points
1 comment · 1 min read · EA link

Cultivate an obsession with the object level

richard_ngo · 7 Jun 2023 1:39 UTC
24 points
0 comments · 1 min read · EA link

Coercion is an adaptation to scarcity; trust is an adaptation to abundance

richard_ngo · 23 May 2023 18:14 UTC
38 points
5 comments · 1 min read · EA link

Self-leadership and self-love dissolve anger and trauma

richard_ngo · 22 May 2023 22:30 UTC
29 points
0 comments · 1 min read · EA link

Trust develops gradually via making bids and setting boundaries

richard_ngo · 19 May 2023 22:16 UTC
25 points
0 comments · 1 min read · EA link

Resolving internal conflicts requires listening to what parts want

richard_ngo · 19 May 2023 0:04 UTC
23 points
0 comments · 1 min read · EA link

Conflicts between emotional schemas often involve internal coercion

richard_ngo · 17 May 2023 10:08 UTC
34 points
0 comments · 1 min read · EA link

We learn long-lasting strategies to protect ourselves from danger and rejection

richard_ngo · 16 May 2023 16:36 UTC
43 points
0 comments · 1 min read · EA link

Judgments often smuggle in implicit standards

richard_ngo · 15 May 2023 18:50 UTC
46 points
1 comment · 1 min read · EA link

From fear to excitement

richard_ngo · 15 May 2023 6:23 UTC
62 points
1 comment · 1 min read · EA link

Clarifying and predicting AGI

richard_ngo · 4 May 2023 15:56 UTC
69 points
2 comments · 1 min read · EA link

AGI safety career advice

richard_ngo · 2 May 2023 7:36 UTC
211 points
20 comments · 1 min read · EA link

AGISF adaptation for in-person groups

Sam Marks · 17 Jan 2023 18:33 UTC
30 points
0 comments · 3 min read · EA link
(www.lesswrong.com)

Applications open for AGI Safety Fundamentals: Alignment Course

Jamie B · 13 Dec 2022 10:50 UTC
75 points
0 comments · 2 min read · EA link

Brainstorming ways to make EA safer and more inclusive

richard_ngo · 15 Nov 2022 11:14 UTC
149 points
97 comments · 1 min read · EA link