EA & LW Forum Weekly Summary (16th – 22nd Jan ’23)

Supported by Rethink Priorities

This is part of a weekly series summarizing the top posts on the EA and LW forums—you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

If you’d like to receive these summaries via email, you can subscribe here.

Podcast version: Subscribe on your favorite podcast app by searching for ‘EA Forum Podcast (Summaries)’. A big thanks to Coleman Snell for producing these!

Philosophy and Methodologies

Don’t Balk at Animal-friendly Results

by Bob Fischer

The moral weight project assumes a) hedonism and b) we can use various capacities as proxies for a species’ range of possible welfare states. This post warns against knee-jerk skepticism if research leads to the conclusion that chickens and humans can realize roughly the same amount of welfare at a given time (the ‘Equality Result’). It doesn’t argue for the Equality Result itself.

Three arguments and counter-arguments for skepticism are discussed:

  1. The implications are too huge—under utilitarianism, it means we should massively shift resources toward animals.

    1. Utilitarianism is the reason for the radical implications—the Equality Result is just the messenger.

  2. Maybe the hedonism assumption is wrong.

    1. Fair, though per previous posts, relaxing the hedonism assumption doesn’t change the bottom line much—even if hedonic goods / bads aren’t all of welfare, they’re surely a large part.

  3. Even accepting the assumptions, the Equality Result feels wrong.

    1. This intuition is uncalibrated and affected by many biases.

Conditional on hedonism, the Equality Result wouldn’t be surprising, as it fits with popular theories of valence.

Object Level Interventions / Reviews

Existential Risks (including AI)

How many people are working (directly) on reducing existential risk from AI?

by Benjamin Hilton, 80000_Hours

The author estimates ~400 FTE working directly on reducing existential risk from AI (90% CI: 200–1,000). Roughly three quarters work on technical AI safety research, with the rest spread across strategy, governance, and advocacy.

Can GPT-3 produce new ideas? Partially automating Robin Hanson and others

by NunoSempere

The author experimented with asking GPT-3 to replicate patterns of insight, eg. variations on ‘if you never miss a plane, you’ve been spending too much time at the airport’. It generated somewhat original outputs that matched these patterns, but no great insights. The author noted that both davinci-003 and ChatGPT tended to steer toward politically correct outputs.

How we could stumble into AI catastrophe

by Holden Karnofsky

Two stylized stories about how transformative AI, if developed relatively soon, could result in global catastrophe.

  1. Misaligned AI: AIs are gradually used for bigger tasks, warning signs mount, and training approaches become less able to weed out bad behavior (eg. due to deception). Eventually they are mass-deployed in risky domains and end up controlling everything important.

  2. Aligned AI: maybe AI being aligned is the default. Even so, there are pitfalls, eg. one government getting there first and seizing worldwide control, AIs being used to create weapons we can’t defend against, digital humans being created without proper rights, or general chaos and disruption.

Previous posts in the series have pointed at potential solutions eg. strong alignment research, standards and monitoring, successful & careful AI projects, and strong security.

Heretical Thoughts on AI | Eli Dourado

by 𝕮𝖎𝖓𝖊𝖗𝖆

Linkpost for this blog post, which presents the case for skepticism that AI will be economically transformative in the near term. The key argument is that the biggest bottlenecks to productivity gains in key industries are often more regulatory / legal / social than technological.

Transcript of Sam Altman’s interview touching on AI safety

by Andy_McKenzie

Sam Altman, CEO of OpenAI, was interviewed by Connie Loizos—video here. This post calls out some AI-safety-relevant parts of the transcript. Key thoughts from Sam are:

About OpenAI

  • OpenAI uses external auditors, red teamers, other labs, and safety organisations to consider impacts.

  • The goal is AGI—all the products, partnerships are in service of that.

  • Microsoft partnership allows access to great supercomputers and infrastructure.

  • Releases will be slower than people would like, and GPT-4 won’t match the hype.

About AI in general

  • Short timelines with slow takeoff are best—incrementally putting out better AIs.

  • The biggest risk is accidental misuse, and it could be ‘kill all of us’ bad.

  • Competition and multiple AGIs are positive, as they distribute power and reflect multiple worldviews. We should have some hard limits around them, with flexibility within those limits.

  • People talk about alignment and capabilities as orthogonal, but they’re all basically the same—deep learning progress will solve both.

  • By starting with language models, we can just tell AI the values we want it to act on.

Global Health and Development

Livelihood interventions: overview, evaluation, and cost-effectiveness

by Rethink Priorities, Ruby Dickson, Greer Gosnell, JamesHu, Melanie Basnak

Research report on the effectiveness of three initiatives (bolstering agricultural productivity, technical / vocational / job search training, and entrepreneurial support) aiming to increase the incomes of adults in poverty. The project was supported by the Livelihood Impact Fund. It’s intended to be a first step in understanding the promise and cost-effectiveness of interventions in this area, and isn’t comprehensive.

Key takeaways include:

  • Agricultural interventions have the highest potential for impact on consumption due to the centrality of subsistence agriculture among the world’s poorest people.

  • Subsidized training can improve income by as much as 55% over the following years. The best effects come from formal certification, partially because it aids job mobility.

  • Women’s entrepreneurship programs (with grants or mentorship), in-kind grants, and access to financial services all seriously impact business performance and the bottom line.

  • The authors roughly estimate cost-effectiveness for two organisations in the space:

    • Spark Microgrants (gives $8K grants and training to impoverished rural villages to identify and implement a village-level investment): $28 in income generation per dollar spent

    • AkiraChix (trains women for careers in tech): $11 in income generation per dollar spent

Introducing Lafiya Nigeria

by Klau Chmielowska

Lafiya Nigeria is a non-profit working toward ending maternal mortality in Nigeria by increasing access to contraceptives and recruiting local nurses and midwives to provide education and distribution.

Their pilot reached 2.4K women, at a cost of $3 per 3-month contraceptive delivered / $12 per DALY averted / $30 per pregnancy averted. Within 6 months, they distributed 342% of what the government distributed over the same period. They need volunteers for impact analysis and technical advice from family planning experts, and have a funding gap of $50K in 2023 to reach their scaling goals.

Evaluating StrongMinds: how strong is the evidence?

by JoelMcGuire

The author conducted a 2021 evaluation of StrongMinds for the Happier Lives Institute (HLI). This post provides a summary of their methodology, in response to a recent post claiming StrongMinds shouldn’t be a top-rated charity (yet).

Overall, they agree the evidence on StrongMinds specifically is weak and implies implausibly large results. However, 39 RCTs on similar interventions made up 79% of HLI’s effectiveness estimate, which they believe is strong enough evidence to class StrongMinds as ‘top-rated’. HLI is an independent research institute, and plans to publish research on other interventions (eg. reducing lead exposure) soon. In future, they plan to call out more explicitly in their research when they have concerns or uncertainties about certain data sources.

Opportunities

Announcing aisafety.training

by JJ Hepburn

aisafety.training is a well-maintained living document of AI safety programs, conferences, and other events. Post in the comments if you’re interested in helping maintain it.

Rationality & Life Advice

Confusing the ideal for the necessary

by adamShimi

It’s easy to accidentally turn ideal conditions into necessary conditions. The ideal condition for research is long, uninterrupted sessions of deep work, but there are times when research is still the best use of the 15-minute slots peppered between meetings. Noticing and intentionally choosing whether to do something—even when conditions aren’t ideal—both helps you focus on what’s important and can train skills like the ability to context switch quickly.

Vegan Nutrition Testing Project: Interim Report

by Elizabeth

The author gave nutrition tests to 6 people in the Lightcone office, 4 of whom were vegan or near-vegan. They found that 3 of the 4 ~vegans had severe iron deficiencies. There were no B12 deficiencies, possibly due to supplementation. Low iron levels are a big deal—they can cost potentially half a standard deviation on multiple cognitive metrics. If you’re restricting meat products or feeling fatigued, it could be worth getting tested.

Community & Media

FLI FAQ on the rejected grant proposal controversy

by Tegmark

Link to an FAQ which clarifies, among other things, that the Future of Life Institute (FLI) has given the Nya Dagbladet Foundation (NDF) zero funding, will not fund them in the future, and rejected their application back in November 2022 after due diligence found NDF was not aligned with FLI’s values or charitable purposes.

The EA community does not own its donors’ money

by Nick Whitaker

Several proposals have been made for EA funding decisions to become more democratized (eg. made collectively, with big funders like Open Philanthropy disaggregated). However, this money isn’t owned by ‘EA’: it’s owned by people or organisations who listen to / are interested in EA, eg. Dustin Moskovitz, Cari Tuna, and Good Ventures. Demanding control of these funds is likely to a) be ineffective and b) make EA unattractive to other donors.

The author suggests those making proposals should consider who they’re asking to change their behavior, and what actions they’d be willing to take if that behavior wasn’t changed. For instance, a more feasible version of the above proposal might be democratizing the EA Funds program within Effective Ventures (which is more of an EA community project than a third-party foundation).

Doing EA Better

by ConcernedEAs

Concerns about and suggestions for the EA movement, written by 10 EAs, primarily before the FTX collapse. Includes a large list of proposals.

Authors’ summary (lightly edited):

  • EA has grown rapidly, and has a responsibility to live up to its goals

  • EA is too homogenous, hierarchical, and intellectually insular, with a hard core of “orthodox” thought and powerful barriers to “deep” critiques

  • Many beliefs accepted in EA are surprisingly poorly supported, and we ignore disciplines with relevant and valuable insights

  • Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest

  • EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation

Suggested reforms include:

  • Put lots of funding, time, and events into encouraging and engaging with critiques (both of object-level research and of the EA community / methods broadly).

  • Be more epistemically modest and avoid EA becoming highly associated with specific beliefs or cause areas, focusing instead on EA as a question.

  • Increase diversity both of EAs and of academic disciplines and worldviews we interact with.

  • Make ‘being EA’ less central to people’s identities and job or grant prospects.

  • Distribute EA funds more democratically, and reduce EA organizations’ reliance on these funds and on money from tech.

  • Reduce the power of community leaders in decision-making and media.

  • Be super transparent (on funding, hiring, decision-making, openness to journalists etc.)

Posts from 2022 you thought were valuable (or underrated)

by Lizka

The recent ‘Forum Wrapped’ initiative allowed people to see what they had upvoted in 2022 and mark some posts as ‘most valuable’. This post shows which posts were most often marked as such, as well as which were underrated by karma relative to these votes. See the post for the full list, but the top two most-voted were:

Concrete Biosecurity Projects (some of which could be big) by ASB, eca

Power dynamics between people in EA by Julia Wise

And the top two underrated were:

The great energy descent (short version) - An important thing EA might have missed by Corentin Biteau (discusses the decline in available energy in coming decades)

What you prioritise is mostly moral intuition by James Ozden

2022 EA conference talks are now live

by Eli_Nathan, darnell

Check them out on this YouTube channel.

Available talent after major layoffs at tech giants

by nicolenohemi

Job cuts in the tech sector were up 649% in 2022 vs. 2021, and included great engineers, ML researchers, and executives. The author chatted with some of Meta’s probability team (all 50 employees on the team were laid off). Some were already thinking about AI alignment, thought it was important, and were keen to start working in the area. The author suggests making use of this opportunity by contacting recently laid-off employees with the talents we need.

Some intuitions about fellowship programs

by Joel Becker

Author’s tl;dr: Making sure >30 participants have regular opportunities to spontaneously gather, active programming, basic food and medical amenities, and common knowledge about visit dates hugely increases the benefit of residential fellowship programs.

Be wary of enacting norms you think are unethical

by RobBensinger

Responses to crises help shape norms. If you want the culture to be a certain way, flesh out the details of that personal vision, anchored to what you think is ethical and important. It’s easy to underestimate the impact you can have by doing this.

The writing style here is bad

by Michał Zabłocki

The author finds that posts and comments on the EA Forum feel like academic texts and are taxing to read. They suggest writing more simply to increase accessibility and therefore inclusivity.

The ones that walk away

by Karthik Tadepalli

Explores the reasoning of someone who might want to walk away from the EA community and identity (but not from EA moral commitments or organisations). Written in the style of a conversation between two people.

Book critique of Effective Altruism

by Manuel Del Río Rodríguez

A book called The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism will be released a month from now.

Special Mentions

A selection of posts that don’t meet the karma threshold, but seem important or undervalued.

Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia

by Jack_S, jahying

Good Growth is an EA-aligned organization focusing on animal welfare and food systems in Asia. Asia contains >45% of farmed land animals and produces over 89% of farmed fish, but receives only 7% of global animal advocacy funding—partially due to difficulties finding the right projects to fund. In this video, Good Growth highlights five organisations’ strategies in the space, and the projects they need funding for. They also cover their own strategy, which includes providing open-source information on the state of the alternative protein market in various Asian countries.

80,000 Hours career review: Information security in high-impact areas

by 80000_Hours, Cody_Fenwick

Organisations working on major global problems are sometimes special targets for cyberattacks. Information security can help them be secure and successful, and in extreme cases, prevent the distribution of dangerous knowledge like harmful genetic sequences.

This new profile covers why you might want to follow this path, what it looks like, how to assess fit, how to enter the field, and where to work.

Announcing Cavendish Labs

by dyusha, Derik K

New research institute in Vermont focused on AI safety and pandemic prevention. They’re actively searching for collaborators in these areas—get in touch if interested.

What’s going on with ‘crunch time’?

by rosehadshar

If you imagine the distribution of decisions that might impact TAI (transformative AI) outcomes, ‘crunch time’ is any period in which those decisions are highly concentrated.

The author notes it may be a misleading concept: it’s possible that decisions are distributed and there is no single crunch time, that we won’t know when we’re in crunch time, or that crunch time takes an unusual form (eg. being very extended). However, work geared towards identifying the important decisions and scenario planning for them seems valuable regardless.

Didn’t Summarize

Neural networks generalize because of this one weird trick by jhoogland (highly technical post)
