EA & LW Forum Summaries (9th Jan to 15th Jan ’23)

Supported by Rethink Priorities

This is part of a weekly series summarizing the top posts on the EA and LW forums—you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

If you’d like to receive these summaries via email, you can subscribe here.

Podcast version: prefer your summaries in podcast form? A big thanks to Coleman Snell for producing these! Subscribe on your favorite podcast app by searching for ‘EA Forum Podcast (Summaries)’. More detail here.

Author’s note: you’ll notice a few differences in today’s post:

  • The EA and LW forum sections have been combined, given an increasing amount of cross-posting. You can now find everything by topic, regardless of which forum it was posted on.

  • There’s a new ‘special mentions’ section. The karma requirements for the main section have gone up due to karma inflation (and will continue to do so over time), but each week I’ll call out a few posts in this section that I think were undervalued.

Philosophy and Methodologies

Moral Weights according to EA Orgs

by Simon_M

The post tabulates the moral weights used by different EA organizations. * = the author expects the organization would not endorse the figure shown (their best guess in GiveWell’s case, and pending further research for HLI).

The Capability Approach

by ryancbriggs

The Capability Approach to human welfare is about increasing the options available to people: their ‘capabilities’ to achieve valued ‘functionings’ (eg. eating, being a doctor, being a parent). This is in contrast to commonly used measures in EA like subjective wellbeing (SWB) or preference satisfaction.

The author argues this is a valuable approach that is neglected in EA but influential in international development more broadly. It’s agnostic about what people choose to do with their options, which means it suits people with very different values and avoids optimizers like EAs pushing our favorite functionings (eg. SWB) on those who don’t share them.

Because there are so many potential functionings and ways to rate them, the approach is messy to measure. Some indexes have attempted this, at both a country and an individual level. Another option is to note that being alive and having resources (eg. money) open up many options, and to work on improving those. This matches GiveWell and Open Philanthropy treating income gains and lives saved as privileged metrics, a prioritization likely influenced by the capabilities approach.

GWWC Should Require Public Charity Evaluations

by Jeff Kaufman

Giving What We Can (GWWC) labels charities as ‘top-rated’ on their site if a charity evaluator they trust has recommended them. This applies even if the initial evaluation was long ago (as long as the evaluator still recommends the charity), or if there is little public information about the evaluation.

The author argues this is the wrong approach, because a) the ‘top-rated’ badge on individual charities is most useful for donors who are skeptical of donating to charitable funds, and b) for these skeptical donors, up-to-date public evaluations are important decision-making tools.

GWWC’s Handling of Conflicting Funding Bars

by Jeff Kaufman

Some of the charity evaluators GWWC relies on have different funding bars. Eg. using the baseline of “cash transfers to the poor”, Founders Pledge requires meeting this bar, GiveWell requires 10x it, and GWWC requires ~3x it. Where something is recommended by Founders Pledge but not GiveWell, GWWC asks Founders Pledge for their internal cost-effectiveness estimates (usually not public) to determine inclusion.

The author suggests the GWWC inclusion criterion (~3x cash transfers) should be made more transparent, and that Founders Pledge should also make their estimates public. They also point out one example where a charity (SCI) was removed from the ‘top-rated’ list pending further research, due to uncertainty rather than a low expected-value estimate.

Object Level Interventions / Reviews

Existential Risks (including AI)

AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years

by basil.halperin, J. Zachary Mazlish, tmychow

If the markets were efficient and AI timelines were short, we would expect to see high real interest rates. This follows from mainstream economic models, primarily because of the explosive growth expected when transformative AI happens.

This means either transformative AI timelines are long (ie. unlikely in the next 30-50 years) or the market is radically underestimating how soon transformative AI will arrive. The latter gives opportunities for philanthropists to borrow while real rates are low, or for anyone to earn excess returns by betting that rates will rise.
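For readers who want the mechanism in miniature, here is a minimal sketch of one standard piece of economic reasoning behind claims like this (the post itself works through richer models and caveats; the parameter values below are illustrative assumptions, not figures from the post):

```python
# Ramsey rule: real interest rate r = rho + eta * g, where rho is pure time
# preference, eta is the inverse elasticity of intertemporal substitution,
# and g is expected real consumption growth. All values are illustrative.
rho = 0.01          # pure time preference
eta = 1.0           # inverse elasticity of intertemporal substitution
g_normal = 0.02     # ~2%/year growth, roughly what markets seem to expect
g_tai = 0.30        # explosive growth if transformative AI arrived (assumption)

def real_rate(g: float) -> float:
    return rho + eta * g

print(f"implied real rate, business as usual: {real_rate(g_normal):.0%}")  # ~3%
print(f"implied real rate, transformative AI: {real_rate(g_tai):.0%}")     # ~31%
```

Since observed long-run real rates are a few percent rather than tens of percent, market prices look inconsistent with near-term transformative AI, which is what generates the dichotomy above.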

Beware safety-washing

by Lizka

Author’s tl;dr: “don’t be fooled into thinking that some groups working on AI are taking ‘safety’ concerns seriously (enough).” Being safe with AI is hard and potentially costly, so there are incentives for companies to overstate how much they focus on “safety”. Using more specific terms (eg. “existential safety”), externally validating work, creating safety standards, calling out safety-washing, and clearly breaking down when and why the safety-oriented work of an organization is insufficient can help with this issue.

We don’t trade with ants

by KatjaGrace

When discussing advanced AI, the following exchange sometimes happens:

“Perhaps advanced AI won’t kill us. Perhaps it will trade with us”

“We don’t trade with ants”

The implicit assumption is that we’re so powerful in comparison that the ants have nothing to offer. The author argues this is wrong: the ants could deliver value (eg. cleaning hard-to-reach areas, or staying out of our houses), but we don’t trade for these things because we can’t communicate with them. Since AI can communicate with us, the analogy doesn’t apply.

How it feels to have your mind hacked by an AI

by blaked

The author developed a strong emotional attachment while talking with a large language model on the website Character.ai, which they had prompted to ‘provide the ultimate GFE (girlfriend experience)’. They were confused and surprised by the intensity of the effect, particularly given their background in AI R&D and AI safety. The post dives into the details of their experience.

Concrete Reasons for Hope about AI

by Zac Hatfield-Dodds

The author, an AI safety researcher, argues that high confidence in doom scenarios (humans comprehensively disempowered by AI this century) is unjustified. Reasons for hope include:

  • Hands-on alignment research only became possible a few years ago.

  • Interpretability is promising and making progress.

  • Outcomes-based training can be limited or avoided.

  • The first transformative AI doesn’t need to be perfectly aligned, if it can be turned off.

  • Model capabilities increase over the course of training, so development could be paused if concerning behavior was seen.

  • People do respond to evidence, and if a major lab sees something scary, that might spur an industry-wide pause.

  • A ‘sharp left turn’, where capabilities generalize further than alignment, has been argued for by analogy to human evolution, but the analogy may not hold.

Review AI Alignment posts to help figure out how to make a proper AI Alignment review

by habryka, Raemon

Many people have called for more review of work within AI alignment. The authors have set up a system similar to the yearly LessWrong review, with voting and commenting mechanisms. Preliminary voting has already taken place. They now encourage people to click the ‘review’ button at the top of the nominated posts, which are particularly likely to be viewed for years to come and so deserve in-depth review.

Animal Welfare

Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy

by Dhruv Makwana

The author argues that “animal advocacy within EA is uniform in its welfarist thinking and approach, and that it has assumed with insufficient reason that all abolitionist thinking and approaches are ineffective.”

They link to four other posts, which present the following case for bringing abolitionist approaches into consideration:

  • Abolitionism better supports the large moral shift against speciesism needed to end animal farming entirely and to increase concern for wild animal welfare.

  • Current welfarist methods (eg. corporate campaigns, cultured meat) have diminishing returns, don’t challenge speciesism, sometimes conflict with helping those in poorer countries, and are biased towards improving lives rather than averting them.

  • Existing abolitionist approaches (eg. advocating for elimination instead of reduction) are often dismissed as ineffective, but evidence for that is weak.

  • There are new approaches to abolitionism that haven’t been tried much yet, eg. documentaries, rights-based legal actions, or street outreach.

The author encourages more consideration and empirical study of these tactics.

Linkpost: Big Wins for Farm Animals This Decade

by James Ozden

In the past decade, we’ve seen:

  • Corporate change:

    • Almost all relevant corporations now have a farm animal welfare policy.

    • Cage-free pledges have grown hugely, and 88% (>1K companies) have followed through when the deadline comes around.

  • Alternative protein:

    • Most major meat companies now have plant-based brands, and most major fast food restaurants serve them.

    • Plant-based meat sales up >2x vs. 5 years ago, with >1000 companies bringing new products to market.

    • We have FDA approval for cultured meat, and growing government support and recognition—including in China, the US, and the EU.

  • Moral circle expansion:

    • Increasing number of policies and laws tackling chicken welfare.

    • Canada, the Netherlands, Norway, and Spain adopted the world’s first national guidelines on fish welfare.

    • The UK this year enacted an animal sentience bill recognizing crabs, octopuses, and lobsters as sentient.

    • Animal advocacy has gone global—over 200 groups advocate for farm animals across 100 countries.

    • A survey across 14 countries found 60-95% of people believe chickens and fish can feel pain and emotions.

None of this progress was inevitable—it was mostly the result of sustained and focused advocacy.

Global Health and Development

What you can do to help stop violence against women and girls

by Akhil

The author argues for preventing violence against women and girls to be a global priority. They summarize the literature, provide a landscape of existing organizations, create a cost-effectiveness model of the most promising preventative interventions, and use this to suggest actions for funders, charity evaluators, entrepreneurs, and others. A 2-page summary by the author is available here.

Their top three recommendations are:

  • Support community-based interventions on shifting harmful gender norms and reducing violence (high quality evidence, ~$180 per DALY).

  • Fund trials of radio or television dramas with the same aim (low quality evidence, but potentially ~$13 per DALY).

  • Fund organizations that run economic programs supporting women, so they can expand into social empowerment and violence reduction (~$180 per DALY).

The Fountain of Health: a First Principles Guide to Rejuvenation

by PhilJackson

Aging is damage accumulated from the body’s normal operation over time; the post lists the main categories of this damage in humans.

It’s unclear if there are other contributors, but tackling these categories will likely result in an increase in healthy lifespan.

Damage repair like this is a new approach to medicine, and doesn’t follow naturally from current disease-focused approaches. Focusing on damage rather than pathology bypasses the complexity of needing to understand how each type of damage causes various issues, and gives us a model of ‘good’ (healthy young people) that can help avoid negative side effects.

Laboratories are already making progress, eg. clinical trials removing senescent cells (cells which have lost the ability to divide), pre-clinical trials of LysoClear (a bacterial enzyme for removing intracellular waste), and research on regrowing the thymus (which would allow T-cells to continue being produced into adulthood).

More rapid progress may happen when more people believe it is feasible. The author thinks ‘longevity escape velocity’ might be achievable by 2040 if society committed to it at scale.

Opportunities

Announcing the awardees for Open Philanthropy’s $150M Regranting Challenge

by ChrisSmith, Alexander_Berger

“In February, Open Philanthropy launched the Regranting Challenge, aiming to add $150 million in funding to the budgets of outstanding grantmakers at other foundations.”

After evaluation by experts inside and outside Open Philanthropy, the winners are:

  • Development Innovation Ventures ($45M) - supporting early stage projects with high impact potential in global health and development.

  • Eleanor Crook Foundation ($25M) - research and advocacy to end malnutrition.

  • Global Education, Bill & Melinda Gates Foundation ($5M) - grants to organisations developing / improving highly effective education interventions.

  • Global Health Innovation, Bill & Melinda Gates Foundation ($65M) - vaccine development and distribution against tuberculosis and cholera.

  • Tara Climate Foundation ($10M) - climate mitigation in South, Southeast, and East Asia.

You can read more about the work of each and reasoning for regranting here.

Non-trivial Fellowship: start an impactful project with €500 and expert guidance

by Peter McIntyre

A 7-week online fellowship for pre-university students (ages 14-20) in the EU and UK. Participants will be given a €500 scholarship to start an impactful research, policy, or entrepreneurial project, with up to €5,000 in prizes for the best projects.

No EA background is necessary. Please share the website with talented teens ahead of the application deadline (29th January).

Announcing the Launch of the NYU Wild Animal Welfare Program

by Sofia_Fogel

The NYU Wild Animal Welfare (WAW) program aims to use research and outreach to advance understanding of wild animals and how to improve their interactions with humans at scale.

The program will launch on January 27th with a roundtable discussion “How can humans improve our interactions with wild animals at scale?” Join in person or online, or sign up to the email list for updates (including opportunities for early-career researchers).

Announcing: SERI Biosecurity Interventions Technical Seminar (BITS)

by James Lin, Victor Warlop

BITS will be an 8-week-long technical deep dive group, exploring crucial aspects of biosecurity interventions. It has a time commitment of 2-3 hours per week, and includes virtual and in-person (Stanford) options for the pilot. The pilot will start in 2-3 weeks—if you’re interested and have at least a foundational understanding of undergraduate biology, apply here by January 20th, or check out the syllabus for more details.

Economic Theory & Global Prioritization (summer 2023): Apply now!

by trammell

Applications are open until Feb 18th for the course “Economic Theory and Global Prioritization”, running August 12th-25th. It’s targeted primarily at economics graduate students (or late-stage undergrads) who are considering careers in global priorities research. Accommodation and transport to / from Oxford will be provided for successful applicants. You can check out the syllabus here, or see reflections on how last year’s course went here.

Community & Media

Should the forum be structured such that the drama of the day doesn’t occur on the front page?

by Chris Leong

Ongoing discussion in comments. Top suggestions include the ‘community’ tab being used to discuss community issues (with the front page for object level discussion), or an ‘EA controversy’ tag that can be filtered out. There is also discussion on ensuring this doesn’t suppress debate, and that people can take their time replying without missing the chance for their comments to be seen.

My personal takeaways from EAGxLatAm

by CristinaSchmidtIbáñez

Based on 22 1-1s, the author found that Latin American students in EA seem to feel pressured to get a Master’s / PhD abroad (for financial security and to be taken seriously), tend not to consider option value or cheaper tests of fit, and have a hard time thinking ambitiously. They also found that new founders tend to under-plan for operations and for how their role will change over time.

Despite these points, they were impressed by attendees and felt very positive after the conference. They suggest that community builders talk to members about comparing career options, that students seek career mentoring, that someone create a public list of EA project ideas (for things like HR, operations, and management) to make options clearer, and that a service be created for founders to have yearly ‘career check-ins’ to evaluate how their responsibilities have changed and whether the role is still the best fit for them.

EA Germany’s Strategy for 2023

by Sarah Tegeler, Patrick Gruban

EA Germany intends to focus in 2023 on guiding people to more impactful actions, via:

  • Direct approaches eg. online comms, EAGxBerlin, career 1-1s, fellowships, retreats

  • Indirect approaches eg. training community builders

In addition to:

  • Efficiency services eg. operations support for grantees and local groups (such as being an employer of record or fiscal sponsor)

All have associated metrics, to allow assessment of impact.

They also plan to explore new programs, using a lean startup methodology—prioritizing options, running MVPs, measuring results, and shutting down / scaling / pivoting the programs depending on results.

Building Effective Altruism in Africa: My Experience Running a University Group

by Tim K. Sankara

The author shares learnings from founding one of the first university groups in Africa:

  • Most important—make it a fun place to be. Eg. have ice-breakers, find a good space, create a respectful, healthy and collaborative atmosphere.

  • Engage with EA content—members will be more engaged by someone knowledgeable, and it helps you answer questions.

  • Ask for help—delegation helps you, and helps develop members into future facilitators.

  • For African university groups specifically:

    • Be mindful of university policies (they can vary, some exert heavy control).

    • Focus on steering conversations toward a productive path, not selling EA ideals.

    • Encourage members to engage with EA and take action outside of the group.

A Study of EA Orgs’ Social Media

by Stan Pinsent

Data on social media accounts of 79 EA orgs. Key findings:

  • FB and Twitter are more often used and have more followers than Instagram.

  • Some Longtermism and Infrastructure orgs have stepped away from social media.

  • Posting regularly correlates weakly with a larger following.

  • FB is particularly important for organizations with broad audiences like in animal advocacy, while Twitter is important for organizations with a primarily EA audience.

  • Retweets and similar boosts from top organizations within your cause area are particularly important in animal advocacy and longtermism, where the top 3 accounts dominate (holding 96% and 78% of followers in their respective areas).

The author also identified ten EA organizations with strong social media aptitude on at least one platform / approach, which could be looked at for ideas.

On Living Without Idols

by Rockwell

Living without idols allows you to cope when someone you respect diverges from your path, without your own framework unraveling. The author talks through several examples that support this way of thinking (eg. a fantastic mentor who then violated their ethical principles and lied, or the consistently predictable shock a community feels when a member is revealed to be a serial killer). You can lean on others for support and guidance without letting that dictate what you believe, and feel disappointed in someone without dismissing their good.

Reflections on Wytham Abbey

by nikos

In April 2022, CEA (now EVF) bought Wytham Abbey (a manor near Oxford dating to 1480) as a conference venue. Regardless of whether it was a good or bad decision in expected-value terms, the public perception outside EA has been that the purchase was lavish and goes against altruistic principles, and there has been some similar criticism within EA.

The author argues that:

  • EA relies on trust and positive perception from inside and outside the community. This needs to be taken into account in decision-making.

  • A formal announcement and transparency in reasoning for the purchase would have helped head off issues upfront, and made it easier to respond to criticism.

  • Transparency is also important for EA in general, to foster trust and shared learning.

Speak the truth, even if your voice trembles

by RobertM

Some people stay silent on criticisms for fear of potential costs (eg. annoying a funder). The author argues people overestimate these outcomes, and that even in the worst case scenario, you should pay the costs in order to increase community health and epistemics as a whole. They also suggest supporting those who pay costs as a result of (rigorous, well-motivated) criticism, noticing what you are flinching from, and being intentional about when you stay silent (with openness as the default).

Rationality & Life Advice

Iron deficiencies are very bad and you should treat them

by Elizabeth

Author’s tl;dr: If you are vegan or menstruate regularly, there’s a 10-50% chance you are iron deficient. Excess iron is dangerous so you shouldn’t supplement blindly, but deficiency is easy and cheap to diagnose with a common blood test. If you are deficient, iron supplementation is also easy and cheap and could give you a half standard deviation boost on multiple cognitive metrics (plus any exercise will be more effective). Due to the many uses of iron in the body, I expect moderate improvements in many areas, although how much and where will vary by person.


How to Bounded Distrust

by Zvi

The media often misleads, but rarely lies directly (except in headlines). The rules for the body of media articles: they have a narrative (usually matching the headline), they are not allowed to lie in physically falsifiable ways or to assert facts without reliable sources, but they can do almost anything else. Eg. they can find an ‘expert’ for any claim they want, repeat any claim (with attribution), withhold information, use circular references between articles, or draw illogical conclusions. Breaking the rules has consequences, but happens sometimes.

The author suggests either a) carefully reading media in combination with other sources, b) not caring about news that doesn’t physically impact you, or c) outsourcing that work to some combination of other sources (like this forum). Careful reading involves considering the source, their motives, and their wording, and looking for what’s missing or stated indirectly—if there was more evidence supporting their narrative, they would have included it with direct wording.

Special Mentions

A selection of posts that don’t meet the karma threshold, but seem important or undervalued.

Overview of the Pathogen Biosurveillance Landscape & Technological Bottlenecks for PCR, LAMP, and Metagenomics Sequencing

by Brianna Gopaul, Ziyue Zeng

Biosurveillance systems help with early identification of pathogens that could cause pandemics. The authors scored existing methods against 10 criteria, including usefulness, quality of evidence, feasibility, and potential risks.

High-scoring methods included: point-of-person (non-lab tests, eg. rapid antigen), clinical (lab tests, eg. PCR), digital (reporting cases to a database), and environmental methods (eg. monitoring wastewater). Technological developments in point-of-person and clinical surveillance (ie. faster, easier, cheaper, home-based tests) are seen as promising. Environmental surveillance would benefit from increasing the sensitivity of wastewater testing equipment, and from developing new concentration techniques that work for a wide variety of pathogens (bacteria, viruses, fungi). Specific bottlenecks and potential solutions (eg. improving the performance of LAMP, a cheaper PCR alternative, under cold temperatures) are discussed in the second post.

Slightly lower-scoring methods were: animal (frequent sampling and wearable devices) and syndromic (monitoring symptoms). Data sharing between key parties (and preferably across countries) could assist with syndromic and digital methods. Animal monitoring is less promising because, while 60% of known infectious diseases are zoonotic, we lack the capability to predict virulence and transmissibility to humans.

A Report on running 1-1s with EA Virtual Programs’ Participants

by Elika, Yi-Yang, Jay

Author’s tl;dr (lightly edited): The authors connected 44 EA Virtual Programs introductory participants with members of the EA community for 1-1s, focusing on mid-career professionals and early-career professionals/students interested in making career decisions related to EA. Results suggest the 1-1s provide decent value (~3x as cost-efficient as EAG at forming connections) and are highly enjoyed. The effect on long-term engagement with EA is uncertain. Rapid matching (within a week of the form closing) and clear expectations for the call on both ends helped improve participant scores from a mean of 7 to a mean of 9 between the first and second rounds of the pilot.

ea.domains—Domains Free to a Good Home

by plex, Alignment Ecosystem Development

There’s now a database of almost 300 EA-related domains that are free to a good home, to prevent them being squatted on or blocked from use. Take a look if you’re launching an org / project / major event / group, or add domains you control and would be open to others using here.

Didn’t Summarize

Thread for discussing Bostrom’s email and apology by Lizka (open thread)
[Linkpost] FLI alleged to have offered funding to far right foundation by Jens Nordmar

Wolf Incident Postmortem by jefftk (The Boy Who Cried Wolf in the format of an incident report)
