EA & LW Forum Weekly Summary (16th – 22nd Jan ’23)
Supported by Rethink Priorities
This is part of a weekly series summarizing the top posts on the EA and LW forums—you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
If you’d like to receive these summaries via email, you can subscribe here.
Podcast version: Subscribe on your favorite podcast app by searching for ‘EA Forum Podcast (Summaries)’. A big thanks to Coleman Snell for producing these!
Philosophy and Methodologies
by Bob Fischer
The moral weight project assumes a) hedonism and b) we can use various capacities as proxies for a species’ range of possible welfare states. This post warns against knee-jerk skepticism if research leads to the conclusion that chickens and humans can realize roughly the same amount of welfare at a given time (the ‘Equality Result’). It doesn’t argue for the Equality Result itself.
Three arguments for skepticism, with counter-arguments, are discussed:
The implications are too huge—under utilitarianism, it means we should massively shift resources toward animals.
Counter: utilitarianism is the reason for the radical implications—the Equality Result is just the messenger.
Maybe the hedonism assumption is wrong.
Counter: fair, though per previous posts dropping hedonism doesn’t change the bottom line much—even if hedonic goods / bads aren’t all of welfare, they’re surely a large part.
Even accepting the assumptions, the Equality Result feels wrong.
Counter: this intuition is uncalibrated and affected by many biases. Conditional on hedonism, the Equality Result wouldn’t be surprising, as it fits with popular theories of valence.
Object Level Interventions / Reviews
Existential Risks (including AI)
by Benjamin Hilton, 80000_Hours
The author estimates ~400 FTE working on reducing existential risk from AI (90% CI: 200–1000). Roughly three quarters work on technical AI safety research, with the rest distributed between strategy, governance, and advocacy.
The author experimented with asking GPT-3 to replicate patterns of insight, eg. variations on ‘if you never miss a plane, you’ve been spending too much time at the airport’. It generated somewhat original outputs that matched these patterns, but no great insights. The author noted that both davinci-003 and ChatGPT tended to steer toward politically correct outputs.
by Holden Karnofsky
Two stylized stories about how, if transformative AI is developed relatively soon, this could result in global catastrophe.
Misaligned AI: AIs are gradually used for bigger tasks, warning signs grow, and training approaches become less able to weed out bad behavior (eg. due to deception). Eventually AIs are mass-deployed in risky domains and come to control everything important.
Aligned AI: maybe AI being aligned is the default. Even so, there are pitfalls, eg. one government gets there first and seizes worldwide control, AIs are used to create weapons we can’t defend against, digital humans are created without proper rights, or there is general chaos and disruption.
Previous posts in the series have pointed at potential solutions eg. strong alignment research, standards and monitoring, successful & careful AI projects, and strong security.
Linkpost for this blog post, which presents the case for skepticism that AI will be economically transformative in the near term. The key argument is that the biggest bottlenecks to productivity gains in key industries are often regulatory, legal, or social rather than technological.
Sam Altman, CEO of OpenAI, was interviewed by Connie Loizos—video here. This post calls out some AI-safety relevant parts of the transcript. Key thoughts from Sam are:
OpenAI uses external auditors, red teamers, other labs, and safety organisations to consider impacts.
The goal is AGI—all the products and partnerships are in service of that.
Microsoft partnership allows access to great supercomputers and infrastructure.
Releases will be slower than people would like, and GPT-4 won’t match the hype.
About AI in general
Short timelines with slow takeoff are best—incrementally putting out better AIs.
The biggest risk is accidental misuse, and it could be ‘kill all of us’ bad.
Competition and multiple AGIs are positive, to distribute power and reflect multiple worldviews. We should have some hard limits around them, with flexibility within those limits.
People talk about alignment and capabilities as orthogonal, but they’re all basically the same—deep learning progress will solve both.
By starting with language models, we can simply tell the AI the values we want it to act on.
Global Health and Development
by Rethink Priorities, Ruby Dickson, Greer Gosnell, JamesHu, Melanie Basnak
Research report on the effectiveness of three initiatives (bolstering agricultural productivity, technical / vocational / job search training, and entrepreneurial support) aiming to increase the incomes of adults in poverty. The project was supported by the Livelihood Impact Fund. It’s intended to be a first step in understanding the promise and cost-effectiveness of interventions in this area, and isn’t comprehensive.
Key takeaways include:
Agricultural interventions have the highest potential for impact on consumption due to the centrality of subsistence agriculture among the world’s poorest people.
Subsidized training can improve income by as much as 55% over the following years. The best effects come from formal certification, partly because it aids job mobility.
Women’s entrepreneurship programs (with grants or mentorship), in-kind grants, and access to financial services all substantially improve business performance and the bottom line.
The authors roughly estimate cost-effectiveness for two organisations in the space:
Spark Microgrants (gives $8K grants and training to impoverished rural villages to identify and implement a village-level investment): $28 in income generation per dollar spent
AkiraChix (trains women for careers in tech): $11 in income generation per dollar spent
by Klau Chmielowska
Lafiya Nigeria is a non-profit that works toward ending maternal mortality in Nigeria, via increasing access to contraceptives and recruiting local nurses and midwives to provide education and distribution.
Their pilot reached 2.4K women, at a cost of $3 per 3-month contraceptive delivered / $12 per DALY averted / $30 per pregnancy averted. Within 6 months, they reached 342% of the government’s distribution over the same period. They need volunteers for impact analysis and technical advice from family planning experts, and have a funding gap of $50K in 2023 to reach their scaling goals.
The author conducted a 2021 evaluation of StrongMinds for the Happier Lives Institute (HLI). This post provides a summary of their methodology, in response to a recent post claiming StrongMinds shouldn’t be a top-rated charity (yet).
Overall, they agree the evidence on StrongMinds specifically is weak and implies implausibly large results. However, 39 RCTs of similar interventions made up 79% of HLI’s effectiveness estimate, which they believe is strong enough evidence to class StrongMinds as ‘top-rated’. HLI is an independent research institute, and plans to publish research on other interventions (eg. reducing lead exposure) soon. In future, they plan to call out more explicitly in their research when they have concerns or uncertainties about certain data sources.
by JJ Hepburn
aisafety.training is a well-maintained living document of AI safety programs, conferences, and other events. Post in the comments if you’re interested in helping maintain it.
Rationality & Life Advice
It’s easy to accidentally turn ideal conditions into necessary conditions. The ideal condition for research is long, uninterrupted sessions of deep work, but research might still be the best use of the 15-minute slots peppered between meetings. Noticing and intentionally choosing whether to do something—even if conditions aren’t ideal—both helps you focus on what’s important and can train skills like the ability to context-switch quickly.
The author gave nutrition tests to 6 people in the Lightcone office, 4 of whom were vegan or near-vegan. 3 of the 4 ~vegans had severe iron deficiencies. There were no B12 deficiencies, possibly due to supplementation. Low iron levels are a big deal—they can cost potentially half a standard deviation on multiple cognitive metrics. If you’re restricting meat products or feeling fatigued, it could be worth getting tested.
Community & Media
Link to an FAQ, which clarifies among other things that Future of Life Institute (FLI) has given Nya Dagbladet Foundation (NDF) zero funding, will not fund them in the future, and rejected their application back in November 2022 after due diligence found NDF was not aligned with FLI’s values or charitable purposes.
by Nick Whitaker
Several proposals have been made for EA funding decisions to become more democratized (eg. made collectively, with big funders like Open Philanthropy disaggregated). However, this money isn’t owned by ‘EA’: it’s owned by people or organisations who listen to / are interested in EA, eg. Dustin Moskovitz, Cari Tuna, and Good Ventures. Demanding control of these funds is likely to a) be ineffective and b) make EA unattractive to other donors.
The author suggests those making proposals should consider who they’re asking to change their behavior, and what actions they’d be willing to take if that behavior wasn’t changed. For instance, a more feasible version of the above proposal might be democratizing the EA Funds program within Effective Ventures (which is more of an EA community project vs. a third party foundation).
Concerns and suggestions about the EA movement, written by 10 EAs, primarily before the FTX collapse. Includes a large list of proposals.
Authors’ summary (lightly edited):
EA has grown rapidly, and has a responsibility to live up to its goals
EA is too homogenous, hierarchical, and intellectually insular, with a hard core of “orthodox” thought and powerful barriers to “deep” critiques
Many beliefs accepted in EA are surprisingly poorly supported, and we ignore disciplines with relevant and valuable insights
Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest
EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation
Suggested reforms included:
Put lots of funding, time, and events into encouraging and engaging with critiques (both on object-level research and on EA community / methods broadly).
Be more epistemically modest and avoid EA becoming highly associated with specific beliefs or cause areas, focusing instead on EA as a question.
Increase diversity both of EAs and of academic disciplines and worldviews we interact with.
Make ‘being EA’ less central to people’s identities and job or grant prospects.
Distribute EA funds more democratically, and reduce EA organizations’ reliance on funds from these donors and from the tech sector.
Reduce power of community leaders in decision-making and media.
Be super transparent (on funding, hiring, decision-making, openness to journalists etc.)
The recent ‘Forum Wrapped’ initiative allowed people to see what they had upvoted in 2022, and mark some posts as ‘most valuable’. This post shows which posts were most often marked, as well as which were underrated by karma relative to these votes. See the post for the full list; the most-marked post was:
Power dynamics between people in EA by Julia Wise
And the top two underrated were:
The great energy descent (short version) - An important thing EA might have missed by Corentin Biteau (discusses the decline in available energy in coming decades)
What you prioritise is mostly moral intuition by James Ozden
by Eli_Nathan, darnell
Check them out on this YouTube channel.
Job cuts in the tech sector were up 649% in 2022 vs. 2021, and include great engineers, ML researchers, and executives. The author chatted with some of Meta’s probability team (all 50 employees on the team were laid off). Some were already thinking about AI alignment, thought it was important, and were keen to start working in the area. The author suggests making use of this opportunity by contacting recently laid-off employees with the talents we need.
by Joel Becker
Author’s tl;dr: Making sure >30 participants have regular opportunities to spontaneously gather, active programming, basic food and medical amenities, and common knowledge about visit dates hugely increases the benefit of residential fellowship programs.
Responses to crises help shape norms. If you want the culture to be a certain way, flesh out the details of that personal vision, anchored to what you think is ethical and important. It’s easy to underestimate the impact you can have by doing this.
by Michał Zabłocki
The author finds posts and comments on the EA forum feel like academic texts, and are taxing to read. They suggest writing more simply to increase accessibility and therefore inclusivity.
by Karthik Tadepalli
Explores the reasoning of someone who might want to walk away from the EA community and identity (but not EA moral commitments or organisations). Uses the style of a conversation between two people.
by Manuel Del Río Rodríguez
A book called The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism will be released a month from now.
A selection of posts that don’t meet the karma threshold, but seem important or undervalued.
by Jack_S, jahying
Good Growth is an EA-aligned organization focusing on animal welfare and food systems in Asia. Asia contains >45% of farmed land animals and produces over 89% of farmed fish, but receives only 7% of global animal advocacy funding—partially due to difficulties finding the right projects to fund. In this video, Good Growth highlights five organisations’ strategies in the space, and the projects they need funding for. They also cover their own strategy, which includes providing open-source information on the state of the alternative protein market in various Asian countries.
by 80000_Hours, Cody_Fenwick
Organisations working on major global problems are sometimes special targets for cyberattacks. Information security can help them be secure and successful, and in extreme cases, prevent the distribution of dangerous knowledge like harmful genetic sequences.
This new profile covers why you might want to follow this path, what it looks like, how to assess fit, how to enter the field, and where to work.
by dyusha, Derik K
New research institute in Vermont focused on AI safety and pandemic prevention. They’re actively searching for collaborators in these areas—get in touch if interested.
If you imagine the distribution of decisions that might impact TAI (transformative AI) outcomes, “crunch time” is any period in which those decisions are highly concentrated.
The author notes it may be a misleading concept: it’s possible that decisions are distributed (so there is no single crunch time), that we won’t know when we’re in crunch time, or that crunch time takes an unusual form (eg. very extended). However, work geared toward identifying the important decisions and scenario-planning for them seems valuable regardless.
Neural networks generalize because of this one weird trick by jhoogland (highly technical post)