The philosophy of choosing between extending and improving lives

JohnW · 31 Mar 2023 23:44 UTC
29 points
1 comment · 5 min read · EA link
(observablehq.com)

Seeking advice on impactful career paths given my unique capabilities and interests

Grateful4PathTips · 31 Mar 2023 23:30 UTC
32 points
5 comments · 1 min read · EA link

Human Values and AGI Risk | William James

William James · 31 Mar 2023 22:30 UTC
1 point
0 comments · 12 min read · EA link

Book Review: Parfit—A Philosopher and His Mission to Save Morality

Tyler Johnston · 31 Mar 2023 22:15 UTC
29 points
2 comments · 4 min read · EA link
(www.goodreads.com)

We might get lucky with AGI warning shots. Let’s be ready!

tcelferact · 31 Mar 2023 21:37 UTC
22 points
2 comments · 2 min read · EA link

Newcomb’s Paradox Explained

Alex Vellins · 31 Mar 2023 21:26 UTC
2 points
11 comments · 2 min read · EA link

New Cause Area: Portrait Welfare (+introducing SPEWS)

Luke Freeman · 31 Mar 2023 21:22 UTC
74 points
17 comments · 15 min read · EA link

Keep Making AI Safety News

Gil · 31 Mar 2023 20:11 UTC
67 points
4 comments · 1 min read · EA link

My updates after FTX

Benjamin_Todd · 31 Mar 2023 19:22 UTC
272 points
78 comments · 20 min read · EA link

Manifund x AI Worldviews

Austin · 31 Mar 2023 15:32 UTC
32 points
2 comments · 2 min read · EA link
(manifund.org)

[Question] What are the biggest obstacles to an AI safety research career?

jackchang110 · 31 Mar 2023 14:53 UTC
2 points
1 comment · 1 min read · EA link

Introducing the Maternal Health Initiative

Ben Williamson · 31 Mar 2023 14:19 UTC
110 points
7 comments · 5 min read · EA link

SEA 2023

MargotUtrecht · 31 Mar 2023 12:36 UTC
1 point
0 comments · 1 min read · EA link

AI, Cybersecurity, and Malware: A Shallow Report [Technical]

Madhav Malhotra · 31 Mar 2023 12:03 UTC
4 points
0 comments · 9 min read · EA link

AI, Cybersecurity, and Malware: A Shallow Report [General]

Madhav Malhotra · 31 Mar 2023 12:01 UTC
5 points
0 comments · 8 min read · EA link

Widening Overton Window—Open Thread

Prometheus · 31 Mar 2023 10:06 UTC
12 points
5 comments · 1 min read · EA link
(www.lesswrong.com)

Critiques of prominent AI safety labs: Redwood Research

Omega · 31 Mar 2023 8:58 UTC
339 points
90 comments · 20 min read · EA link

GWWC’s 2020–2022 Impact evaluation (executive summary)

Michael Townsend🔸 · 31 Mar 2023 7:34 UTC
181 points
19 comments · 8 min read · EA link
(www.givingwhatwecan.org)

Longtermism and shorttermism can disagree on nuclear war to stop advanced AI

David Johnston · 30 Mar 2023 23:22 UTC
2 points
0 comments · 1 min read · EA link

Deference on AI timelines: survey results

Sam Clarke · 30 Mar 2023 23:03 UTC
68 points
3 comments · 2 min read · EA link

Nuclear brinksmanship is not a good AI x-risk strategy

titotal · 30 Mar 2023 22:07 UTC
19 points
8 comments · 5 min read · EA link

What’s surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals

Sam Anschell · 30 Mar 2023 21:48 UTC
107 points
4 comments · 12 min read · EA link

[Event] Join Metaculus Tomorrow, March 31st, for Forecast Friday!

christian · 30 Mar 2023 20:58 UTC
29 points
1 comment · 1 min read · EA link
(www.metaculus.com)

ChatGPT is capable of cognitive empathy!

Miquel Banchs-Piqué (prev. mikbp) · 30 Mar 2023 20:42 UTC
3 points
0 comments · 1 min read · EA link
(nonzero.substack.com)

Marginal Aid and Effective Altruism

TomDrake · 30 Mar 2023 20:31 UTC
39 points
14 comments · 2 min read · EA link

Leadership and Organizational Challenges Office Hour hosted by Scarlet Spark

Sharleen · 30 Mar 2023 17:51 UTC
1 point
0 comments · 1 min read · EA link

Recruit the World’s best for AGI Alignment

Greg_Colbourn ⏸️ · 30 Mar 2023 16:41 UTC
34 points
8 comments · 22 min read · EA link

“Dangers of AI and the End of Human Civilization” Yudkowsky on Lex Fridman

𝕮𝖎𝖓𝖊𝖗𝖆 · 30 Mar 2023 15:44 UTC
28 points
0 comments · 1 min read · EA link
(www.youtube.com)

The fundamental human value is power.

Linyphia · 30 Mar 2023 15:15 UTC
−1 points
5 comments · 1 min read · EA link

AI and Evolution

Dan H · 30 Mar 2023 13:09 UTC
41 points
1 comment · 2 min read · EA link
(arxiv.org)

Upside Bargains: How To Be More Ambitious

Alex Vellins · 30 Mar 2023 12:13 UTC
7 points
2 comments · 4 min read · EA link

Stop Using Discord as an Archive

Nicholas Kross · 30 Mar 2023 2:15 UTC
9 points
4 comments · 1 min read · EA link
(www.reddit.com)

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs · 29 Mar 2023 23:30 UTC
212 points
75 comments · 3 min read · EA link
(time.com)

Some estimation work in the horizon

NunoSempere · 29 Mar 2023 22:18 UTC
25 points
0 comments · 4 min read · EA link
(nunosempere.com)

A Happier World survey

Jeroen Willems🔸 · 29 Mar 2023 21:23 UTC
9 points
0 comments · 1 min read · EA link

Vote for GWWC to present at SXSW Sydney!

Giving What We Can🔸 · 29 Mar 2023 21:14 UTC
54 points
3 comments · 1 min read · EA link

[Question] What are the arguments that support China building AGI+ if Western companies delay/pause AI development?

DMMF · 29 Mar 2023 18:53 UTC
32 points
9 comments · 1 min read · EA link

Linkpost: Italy introduces bill to ban lab-grown meat

Matt Goodman · 29 Mar 2023 16:53 UTC
47 points
5 comments · 1 min read · EA link

Giving Coupons—Project Proposal

Wayne · 29 Mar 2023 16:48 UTC
6 points
4 comments · 2 min read · EA link

Nathan A. Sears (1987–2023)

HaydnBelfield · 29 Mar 2023 16:07 UTC
297 points
8 comments · 4 min read · EA link

Some updates to my thinking in light of the FTX collapse by Owen Cotton-Barratt [Link Post]

Nathan Young · 29 Mar 2023 15:23 UTC
133 points
16 comments · 1 min read · EA link
(docs.google.com)

Want to win the AGI race? Solve alignment.

leopold · 29 Mar 2023 15:19 UTC
56 points
5 comments · 5 min read · EA link
(www.forourposterity.com)

Tentative practical tips for using chatbots in research

Erich_Grunewald 🔸 · 29 Mar 2023 15:01 UTC
48 points
7 comments · 5 min read · EA link

Nobody’s on the ball on AGI alignment

leopold · 29 Mar 2023 14:26 UTC
328 points
66 comments · 9 min read · EA link
(www.forourposterity.com)

[Linkpost] Vox: “To make the present feel more meaningful, think beyond it”

Ubuntu · 29 Mar 2023 13:39 UTC
22 points
0 comments · 1 min read · EA link
(www.vox.com)

Should we prioritize cognitive science in EA?

jackchang110 · 29 Mar 2023 10:11 UTC
7 points
1 comment · 1 min read · EA link

Resolving moral uncertainty with randomization

Bob Jacobs · 29 Mar 2023 10:10 UTC
29 points
3 comments · 10 min read · EA link

FLI open letter: Pause giant AI experiments

Zach Stein-Perlman · 29 Mar 2023 4:04 UTC
220 points
38 comments · 2 min read · EA link
(futureoflife.org)

Run Posts By Orgs

Jeff Kaufman 🔸 · 29 Mar 2023 2:40 UTC
281 points
13 comments · 3 min read · EA link

Desensitizing Deepfakes

Phib · 29 Mar 2023 1:20 UTC
22 points
11 comments · 1 min read · EA link