richard_ngo

Karma: 6,787

Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com

You must not fool yourself, and you are the easiest person to fool

richard_ngo · 8 Jul 2023 14:05 UTC
18 points
0 comments · 1 min read · EA link

Agency begets agency

richard_ngo · 6 Jul 2023 13:09 UTC
28 points
1 comment · 1 min read · EA link

Cultivate an obsession with the object level

richard_ngo · 7 Jun 2023 1:39 UTC
24 points
0 comments · 1 min read · EA link

Coercion is an adaptation to scarcity; trust is an adaptation to abundance

richard_ngo · 23 May 2023 18:14 UTC
38 points
5 comments · 1 min read · EA link

Self-leadership and self-love dissolve anger and trauma

richard_ngo · 22 May 2023 22:30 UTC
27 points
0 comments · 1 min read · EA link

Trust develops gradually via making bids and setting boundaries

richard_ngo · 19 May 2023 22:16 UTC
25 points
0 comments · 1 min read · EA link

Resolving internal conflicts requires listening to what parts want

richard_ngo · 19 May 2023 0:04 UTC
23 points
0 comments · 1 min read · EA link

Conflicts between emotional schemas often involve internal coercion

richard_ngo · 17 May 2023 10:08 UTC
34 points
0 comments · 1 min read · EA link

We learn long-lasting strategies to protect ourselves from danger and rejection

richard_ngo · 16 May 2023 16:36 UTC
43 points
0 comments · 1 min read · EA link

Judgments often smuggle in implicit standards

richard_ngo · 15 May 2023 18:50 UTC
46 points
1 comment · 1 min read · EA link

From fear to excitement

richard_ngo · 15 May 2023 6:23 UTC
62 points
1 comment · 1 min read · EA link

Clarifying and predicting AGI

richard_ngo · 4 May 2023 15:56 UTC
69 points
2 comments · 1 min read · EA link

AGI safety career advice

richard_ngo · 2 May 2023 7:36 UTC
209 points
20 comments · 1 min read · EA link

AGISF adaptation for in-person groups

Sam Marks · 17 Jan 2023 18:33 UTC
30 points
0 comments · 3 min read · EA link
(www.lesswrong.com)

Applications open for AGI Safety Fundamentals: Alignment Course

Jamie B · 13 Dec 2022 10:50 UTC
75 points
0 comments · 2 min read · EA link

Brainstorming ways to make EA safer and more inclusive

richard_ngo · 15 Nov 2022 11:14 UTC
149 points
97 comments · 1 min read · EA link

Alignment 201 curriculum

richard_ngo · 12 Oct 2022 19:17 UTC
94 points
9 comments · 1 min read · EA link

The alignment problem from a deep learning perspective

richard_ngo · 11 Aug 2022 3:18 UTC
58 points
0 comments · 21 min read · EA link

Moral strategies at different capability levels

richard_ngo · 27 Jul 2022 20:20 UTC
24 points
1 comment · 5 min read · EA link
(thinkingcomplete.blogspot.com)

Making decisions using multiple worldviews

richard_ngo · 13 Jul 2022 19:15 UTC
43 points
0 comments · 11 min read · EA link