Highlights from last week
We’re sharing some posts from the past week that were highlighted in the most recent Digest.
The Digest is a weekly email I send to around 8,000 subscribers. You can look at some recent editions or subscribe here.
(This post is (still) an experiment. Let us know what you think!)
We recently shared EA organization updates for March, which highlight relevant opportunities, including upcoming conferences, fellowships, and courses.
We recommend:
How oral rehydration therapy was developed (article by Matt Reynolds, link-posted by Kelsey Piper)
Interview with Tyler Johnston on helping farmed animals, consciousness, and being conventionally good (Amber Dawn, Tyler Johnston, 19 min)
Shallow Investigation: Stillbirths (Joseph Pusey, 17 min) and Exposure to Lead Paint in Low- and Middle-Income Countries (several authors, 3 min)
Paper summary: Are we living at the hinge of history? (Global Priorities Institute, Riley Harris, 6-min summary of a paper by William MacAskill) (see also: Paper summary: Longtermist institutional reform)
Continued discussion on Rethink Priorities’ Welfare Range Estimates (Michael St Jules, comment)
80k podcast episode on sentience in AI systems (rgb, 16 min)
Two directions for research on forecasting and decision making (Paal Fredrik Skjørten Kvarberg, 25 min)
AI risk
Scott Alexander reacts to OpenAI’s Planning for AGI and beyond (Akash, link-post with highlights)
How bad a future do ML researchers expect? (Katja Grace, 3 min)
Anthropic: Core Views on AI Safety: When, Why, What, and How (jonmenaster, 27 min)
GPT-4 is out (Lizka — me, thread)
A Windfall Clause for CAIS could worsen AI race dynamics (Larks, 8 min)
Success without dignity: a nearcasting story of avoiding catastrophe by luck (Holden Karnofsky, 18 min)
Effectiveness of AI Existential Risk Communication (Otto, 5 min) (see also a link-post for a NYT Opinion column on risk from AI and an animation about intelligence)
About the EA community and related topics
FTX Community Response Survey Results (Willem Sleegers, David Moss, 9 min)
Time Article Discussion—“Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed” (Nathan Young, thread)
Share the burden (2ndRichter, 10 min) and How my community successfully reduced sexual misconduct (titotal, 6 min)
Racial and gender demographics at EA Global in 2022 (Amy Labenz, Angelina Li, Eli Nathan, 5 min)
Opportunities and announcements:
Announcing the Open Philanthropy AI Worldviews Contest (Jason Schukraft, Peter Favaloro)
Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship (Joris P, Jessica McCurdy, Jake McKinnon)
Shutting Down the Lightcone Offices (Habryka, Ben Pace)
Classic Forum post:
Three intuitions about EA: responsibility, scale, self-improvement (Richard Ngo)