Advice column: How big a deal are work visas?

EA Lifestyles · 13 Jul 2023 21:23 UTC
20 points
9 comments · 2 min read · EA link
(ealifestyles.substack.com)

What should Founders Pledge Climate consider funding? A request for ideas

jackva · 13 Jul 2023 19:40 UTC
21 points
0 comments · 1 min read · EA link
(forms.gle)

The Emergence of Cyborgs and the Unrest of Transition: Anticipating the Future of Human Rights

George_A (Digital Intelligence Rights Initiative) · 13 Jul 2023 17:35 UTC
8 points
0 comments · 3 min read · EA link

The Evolution of Humans: Traversing from Biological Origins to a Technologically Augmented Future

George_A (Digital Intelligence Rights Initiative) · 13 Jul 2023 16:57 UTC
1 point
0 comments · 3 min read · EA link

[Question] What new psychology research could best promote AI safety & alignment research?

Geoffrey Miller · 13 Jul 2023 16:30 UTC
29 points
13 comments · 1 min read · EA link

The Goddess of Everything Else—The Animation

Writer · 13 Jul 2023 16:26 UTC
64 points
1 comment · 1 min read · EA link

Winners of AI Alignment Awards Research Contest

Akash · 13 Jul 2023 16:14 UTC
48 points
1 comment · 1 min read · EA link

Reflection: Four Years of Collecting Recommendations

morgansimps24 · 13 Jul 2023 14:37 UTC
10 points
0 comments · 1 min read · EA link

Tetlock on low AI xrisk

TeddyW · 13 Jul 2023 14:19 UTC
10 points
15 comments · 1 min read · EA link

Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge]

Vadim Albinsky · 13 Jul 2023 13:35 UTC
145 points
13 comments · 33 min read · EA link

In Conversation with Will MacAskill

brook · 13 Jul 2023 13:06 UTC
7 points
0 comments · 1 min read · EA link

Why We Need a Taxonomy of Moral Concern

Maryam Ali Khan · 13 Jul 2023 12:26 UTC
17 points
0 comments · 3 min read · EA link

[Question] AI Safety and Censorship

Kuiyaki · 13 Jul 2023 11:34 UTC
−9 points
0 comments · 1 min read · EA link

Metaculus’s Series ‘Shared Vision: Pro Forecaster Essays on Predicting the Future Better’

christian · 13 Jul 2023 1:24 UTC
16 points
0 comments · 1 min read · EA link
(www.metaculus.com)

Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity

MHR · 13 Jul 2023 0:39 UTC
155 points
22 comments · 10 min read · EA link

Alignment Megaprojects: You’re Not Even Trying to Have Ideas

NicholasKross · 12 Jul 2023 23:39 UTC
7 points
1 comment · 1 min read · EA link

Claude 2 on Art and EA

Jeffrey Kursonis · 12 Jul 2023 23:31 UTC
2 points
0 comments · 1 min read · EA link

[Question] What does the launch of x.ai mean for AI Safety?

Chris Leong · 12 Jul 2023 19:42 UTC
20 points
1 comment · 1 min read · EA link

[Linkpost] NY Times Feature on Anthropic

Garrison · 12 Jul 2023 19:30 UTC
34 points
3 comments · 5 min read · EA link
(www.nytimes.com)

AISN#14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use

Center for AI Safety · 12 Jul 2023 16:58 UTC
26 points
0 comments · 4 min read · EA link
(newsletter.safe.ai)

An Overview of the AI Safety Funding Situation

Stephen McAleese · 12 Jul 2023 14:54 UTC
128 points
12 comments · 15 min read · EA link

An expert survey on social movements and protest

James Özden · 12 Jul 2023 14:08 UTC
92 points
3 comments · 6 min read · EA link

Could unions be an underrated driver for AI safety policy?

Dunning K. · 12 Jul 2023 13:21 UTC
23 points
6 comments · 1 min read · EA link

A transcript of the TED talk by Eliezer Yudkowsky

MikhailSamin · 12 Jul 2023 12:12 UTC
39 points
0 comments · 1 min read · EA link

Free Social Anxiety Treatment for 50 EAs (Sign-up here)

John Salter · 12 Jul 2023 10:11 UTC
29 points
1 comment · 2 min read · EA link

Announcing the AI Fables Writing Contest!

Daystar Eld · 12 Jul 2023 3:04 UTC
76 points
52 comments · 3 min read · EA link

[Question] What is the most convincing article, video, etc. making the case that AI is an X-Risk

Jordan Arel · 11 Jul 2023 20:32 UTC
4 points
7 comments · 1 min read · EA link

How the Jains built a culture of respect for nonhuman animals

siwani agrawal · 11 Jul 2023 18:51 UTC
29 points
1 comment · 4 min read · EA link

Please wonder about the hard parts of the alignment problem

MikhailSamin · 11 Jul 2023 17:02 UTC
7 points
0 comments · 1 min read · EA link

Fatebook: the fastest way to make and track predictions

Adam Binks · 11 Jul 2023 15:13 UTC
135 points
15 comments · 2 min read · EA link
(fatebook.io)

How to regulate cutting-edge AI models (Markus Anderljung on The 80,000 Hours Podcast)

80000_Hours · 11 Jul 2023 12:36 UTC
25 points
0 comments · 14 min read · EA link

Giving What We Can social picnic London

Chris Rouse · 11 Jul 2023 10:19 UTC
6 points
1 comment · 1 min read · EA link

(How) Is technical AI Safety research being evaluated?

JohnSnow · 11 Jul 2023 9:37 UTC
27 points
1 comment · 1 min read · EA link

[Question] Career guidance needed for a career researching psychology and helping build the EA movement further

Riya Gupta · 11 Jul 2023 8:59 UTC
2 points
1 comment · 1 min read · EA link

Additional Scrutiny for Indian Applicants

Rahul Gupta · 11 Jul 2023 8:34 UTC
3 points
6 comments · 1 min read · EA link

Have your say on the Australian Government’s AI Policy

Nathan Sherburn · 11 Jul 2023 1:12 UTC
3 points
0 comments · 1 min read · EA link

Have your say on the Australian Government’s AI Policy [Online #1]

Nathan Sherburn · 11 Jul 2023 0:35 UTC
3 points
0 comments · 1 min read · EA link

AI Wellbeing

Simon · 11 Jul 2023 0:34 UTC
11 points
0 comments · 9 min read · EA link

The M Team: How savvy facilitators can delegate tasks

Joe Rogero · 10 Jul 2023 22:53 UTC
5 points
0 comments · 4 min read · EA link

Mass Media, Propaganda, and Social Influence: Evidence of Effectiveness from Courchesne et al. (2021)

Janet Pauketat · 10 Jul 2023 20:10 UTC
13 points
0 comments · 30 min read · EA link
(www.sentienceinstitute.org)

Giving effectively: Contemporary lessons from Dead Aid by Dambisa Moyo

Cherish · 10 Jul 2023 20:09 UTC
1 point
0 comments · 1 min read · EA link

[Question] Urgent Need for Refinancing

Tobias W. Kaiser · 10 Jul 2023 19:35 UTC
2 points
2 comments · 1 min read · EA link

Cost-effectiveness of professional field-building programs for AI safety research

Center for AI Safety · 10 Jul 2023 17:26 UTC
36 points
2 comments · 18 min read · EA link

Cost-effectiveness of student programs for AI safety research

Center for AI Safety · 10 Jul 2023 17:23 UTC
53 points
6 comments · 15 min read · EA link

Modeling the impact of AI safety field-building programs

Center for AI Safety · 10 Jul 2023 17:22 UTC
81 points
0 comments · 7 min read · EA link

Announcing “Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament”

Forecasting Research Institute · 10 Jul 2023 17:04 UTC
160 points
31 comments · 2 min read · EA link

[Question] Do you think the probability of future AI sentience (suffering) is >0.1%? Why?

jackchang110 · 10 Jul 2023 16:41 UTC
4 points
0 comments · 1 min read · EA link

Infographics report risk management of Artificial Intelligence in Spain

JorgeTorresC · 10 Jul 2023 14:44 UTC
16 points
0 comments · 1 min read · EA link
(riesgoscatastroficosglobales.com)

Frontier AI Regulation

Zach Stein-Perlman · 10 Jul 2023 14:30 UTC
56 points
0 comments · 1 min read · EA link

Why we may expect our successors not to care about suffering

Jim Buhler · 10 Jul 2023 13:54 UTC
62 points
31 comments · 8 min read · EA link