Looks like YouGov had the same concern and ran a second poll where they split respondents into three groups for that question (one with no framing, one with the support framing from the post above, and one with a support + oppose framing):
https://today.yougov.com/topics/technology/articles-reports/2023/04/15/ai-nuclear-weapons-world-war-humanity-poll
The simple / no context framing (“Would you support or oppose a six-month pause on some kinds of AI development?”) got the lowest support, but still pretty high at 58%.
Zoe Williams
Post summary (feel free to suggest edits!):
In November 2022, Open Philanthropy (OP) announced a soft pause on new longtermist funding commitments while they re-evaluated their bar for funding. This has now been lifted and a new bar set. The process for setting the new bar was:
Rank past grants by both OP and now-defunct FTX-associated funders, and divide these into tiers.
Under the assumption of 30-50% of OP’s funding going to longtermist causes, estimate the annual spending needed to exhaust these funds in 20-50 years.
Play around with which grants would have made the cut at different budget levels, and, with a heavy dose of intuition, arrive at an all-things-considered new bar.
They landed on funding everything that was ‘tier 4’ or above, and some ‘tier 5’ under certain conditions (eg. low time cost to evaluate, potentially stopping funding in future). In practice this means ~55% of OP longtermist grants over the past 18 months would have been funded under the new bar.
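The budget-estimation step (step 2 above) reduces to a simple division. A toy sketch, with entirely made-up numbers (the post doesn't state OP's actual fund size):

```python
def annual_budget(total_funds, longtermist_share, years):
    """Annual spending that exhausts the longtermist share of a fund
    over the given horizon, assuming constant spending."""
    return total_funds * longtermist_share / years

# Hypothetical $10B fund, bracketing the post's 30-50% share and 20-50 year horizon:
slowest = annual_budget(10e9, 0.3, 50)   # lowest share, longest horizon: ~$60M/yr
fastest = annual_budget(10e9, 0.5, 20)   # highest share, shortest horizon: ~$250M/yr
```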
(This will appear in this week’s forum summary. If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
The author’s observations from talking to / offering advice to several EA orgs:
Many orgs skew heavily junior, and most managers and managers-of-managers are in that role for the first time.
Many leaders are isolated (no peers to check in with) and / or reluctant (would prefer not to do people management).
They suggest solutions of:
Creating an EA manager’s slack (let them know if you’re interested!)
Non-EA management/leadership coaches (they haven’t found that most questions coming up in their coaching are EA-specific).
More orgs hiring a COO to take over people management from whoever handles vision / strategy / fundraising.
More orgs considering splitting management roles into separate people-management and technical-leadership roles.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Differing values create risks of uncooperative behavior within the EA community, such as failing to update on good arguments because they come from the “other side”, failing to achieve common moral aims (eg. avoiding worst-case outcomes), failing to compromise, or committing harmful acts out of spite / tribalism.
The author suggests mitigating these risks by assuming good intent, looking for positive-sum compromises, actively noticing and reducing our tendency to promote / favor our ingroup, and acknowledging that the situation is challenging and it’s normal to feel some tension.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Wealthy countries spend a collective $178B on development aid per year, 25% of all giving worldwide. Some aid projects have been cost-effective on a level with GiveWell’s top recommendations (eg. PEPFAR), while others have caused outright harm.
Aid is usually distributed via a multi-step process:
Countries decide to spend money on aid. Many signed a 1970 UN resolution to spend 0.7% of GNI on official development assistance.
The government decides on a general strategy / principles.
The government passes a budget, assigning funds to different aid subcategories.
The country’s aid agency decides on projects. Sometimes this is donating to intermediaries like the UN or WHO, sometimes it’s direct.
Projects are implemented.
This area is large in scale. Tractability is uncertain, but there are many pathways and some past successes (eg. a grassroots EA campaign in Switzerland increased funding, and the US aid agency ran a cash-benchmarking experiment with GiveDirectly). Few organisations focus on this area relative to its scale.
The author and their co-founder have been funded to start an organization in this area. Get in touch if you’re interested in Global Development and Policy.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Reflections from an organizer of the student organisations Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA).
Top things that worked:
Outreach focusing on technically interesting parts of alignment and leveraging informal connections with networks and friend groups.
HAIST office space, which was well-located and useful for programs and coworking.
Leadership and facilitators having had direct experience with AI safety research.
High-quality, scalable weekly reading groups.
Significant time expenditure, including mostly full-time attention from several organizers.
Top things that didn’t work:
Starting MAIA programming too late in the semester (leading to poor retention).
Too much focus on intro programming.
In future, they plan to set up an office space for MAIA, share infrastructure and resources with other university alignment groups, and improve programming for already engaged students (including opportunities over winter and summer break).
They’re looking for mentors for junior researchers / students, researchers to visit during retreats or host Q&As, feedback, and applicants to their January ML bootcamp or to roles in the Cambridge Boston Alignment Initiative.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
The authors broadly recommend the following for EAs from low and middle income countries (LMICs):
Build career capital early on
Work on global issues over local ones, unless clear reasons for the latter
Have some individuals do local versions of: community building, priorities research, charity-related activities, or career advising
They discuss pros, cons, and concrete next steps for each. Individuals can use the scale / neglectedness / tractability framework, marginal value, and personal fit to assess options. They suggest looking for local comparative advantage at global priorities, and taking the time to upskill and engage deeply with EA ideas before jumping into direct work.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Rob paraphrases Nate’s thoughts on capabilities work and the landscape of AGI organisations. Nate thinks:
Capabilities work is a bad idea, because it isn’t needed for alignment to progress and it could speed up timelines. We already have many ML systems to study, and our understanding lags behind even those. Publishing capabilities work is even worse.
He appreciates OpenAI’s charter, openness to talking with EAs / rationalists, clearer alignment effort than FAIR or Google Brain, and transparency about their plans. On taking alignment seriously, he considers DeepMind on par with OpenAI, and Anthropic slightly ahead.
OpenAI, Anthropic, and DeepMind are unusually safety-conscious AI capabilities orgs (e.g., much better than FAIR or Google Brain). But reality doesn’t grade on a curve, there’s still a lot to improve, and they should still call a halt to mainstream SotA-advancing potentially-AGI-relevant ML work, since the timeline-shortening harms currently outweigh the benefits.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
The Centre for Enabling EA Learning & Research (CEEALAR) is an EA hotel that provides grants in the form of food and accommodation on-site in Blackpool, UK. They have lots of space and encourage applications from those wishing to learn or work on research or charitable projects in any cause area, including study and upskilling with the intent to move into those areas.
Since opening 4.5 years ago, they’ve supported ~100 EAs with their career development, and hosted another ~200 visitors for events / networking / community building. It costs CEEALAR ~£800/month to host someone, including free food, logistics, and project guidance. This is ~13% of the cost of an established EA worker, and an example of hits-based giving.
They have plans to expand, and are fixing up a next-door property that will increase capacity by ~70%. They welcome donations, though aren’t in imminent need (they have 12–20 months of runway, depending on factors covered in the post). They’re also looking for a handyperson.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
The Founders Pledge Climate Fund has run for 2 years and distributed over $10M USD.
Because the climate space has ~$1T per year committed globally, the team believes the best use of marginal donations is to correct existing biases of overall climate philanthropy, fill blindspots, and leverage existing attention on climate. The Fund can achieve this more effectively than individual donations because it can make large grants that allow grantees to start new programs, quickly respond to time-sensitive opportunities, and make catalytic grants to early-stage organizations that don’t yet have track records.
Examples include a substantial increase in the growth of grantee Clean Air Task Force, and significant investments into emerging economies that receive less from other funders.
Future work will look at where best to focus policy efforts, and the impact of the Russo-Ukrainian war on possible policy windows.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
r.i.c.e. collaborates with the Government of Uttar Pradesh and an organization in India to promote Kangaroo Mother Care (KMC), a well-established tool for increasing survival rates of low-birth-weight babies. They developed a public-private partnership to get the government’s KMC guidelines implemented cost-effectively in a public hospital.
Their best estimate, based on a combination of implementation costs and pre-existing research, is that it costs ~$1.8K per life saved. However, they are unsure, and next year plan to compare survival rates in the targeted hospital vs. others in the region.
Both Founders Pledge and GiveWell have made investments this year. They welcome further support; you can donate here. Donations will help maintain the program, scale it up, do better impact evaluation, and potentially expand to other hospitals if they find good implementation partners.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
SoGive is an EA-aligned research organization and think tank. In 2022, they ran a pilot grants program, granting £223k to 6 projects (out of 26 initial applicants):
Founders Pledge - £93,000 - to hire an additional climate researcher.
Effective Institutions Project - £62,000 - for a regranting program.
Doebem - £35,000 - a Brazilian effective giving platform, to continue scaling.
Jack Davies - £30,000 - for research improving methods to scan for neglected X-risks.
Paul Ingram - £21,000 - to poll how nuclear winter information affects support for nuclear armament.
Social Change Lab - £18,400 - 2 FTEs for 2 months, researching social movements.
The funds were sourced from private donors, mainly people earning to give. If you’d like to donate, contact isobel@sogive.org.
They advise future grant applicants to lay out their theory of change (even if their project is one small part of it), reflect on how they came to their topic and whether they’re the right fit, and consider downside risks.
They give a detailed review of their evaluation process, which was heavy-touch and included a standardized bar to meet, the ITN+ framework, delivery risks (eg. is getting 80% of the way there 80% as good?), and the information value of the project. They tentatively plan to run the program again in 2023, with a lighter-touch evaluation process (the extra time didn’t add much value).
They also give reflections and advice for others starting grant programs, and are happy to discuss this with anyone.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
The author asks whether EA aims to be a question about doing good effectively, or a community based around ideology. In their experience, it has mainly been the latter, but many EAs have expressed they’d prefer it be the former.
They argue the best concrete step toward EA as a question would be to collaborate more with people outside the EA community, without attempting to bring them into the community. This includes policymakers at local and national levels, people with years of expertise in the fields EA works in, and people who are most affected by EA-backed programs.
Specific ideas include EAG actively recruiting these people, EA groups co-hosting more joint community meetups, EA orgs measuring the preferences of those impacted by their programs, applying evidence-based decision-making to all fields (not just top cause areas), engaging with people and critiques outside the EA ecosystem, funding and collaborating with non-EA orgs (eg. via grants), and EA orgs hiring non-EAs.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
The US poverty threshold, below which one qualifies for government assistance, is $6,625 per person for a family of four. In Malawi, one of the world’s poorest countries, the median income is a twelfth of that (adjusted for purchasing power). Without a change in growth rates, it will take Malawi almost two centuries to catch up to where the US is today.
This example illustrates the development gap: the difference in living standards between high- and low-income countries. Working on this is important both for the wellbeing of those alive today, and because it allows more people to participate meaningfully in humanity’s most important century, and therefore help those in the future too.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Some suffering is bad enough that non-existence is preferable. The lock-in of uncompassionate systems (eg. through AI or AI-assisted governments) could cause mass suffering in the future.
OPIS (Organisation for the Prevention of Intense Suffering) has until now worked on projects to help ensure that people in severe pain can get access to effective medications. In future, they plan to “address the very principles of governance, ensure that all significant causes of intense suffering receive adequate attention, and promote strategies to prevent locked-in totalitarianism”. One concrete project within this is a full length film to inspire people with this vision and lay out actionable steps. They’re looking for support in the form of donations and / or time.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Highlights for ALLFED in 2022 include:
Submitted 4 papers for peer review (some now published)
Started to develop country-level preparedness and response plans for Abrupt Sunlight Reduction Scenarios (US plan completed).
Worked on financial mechanisms for food system interventions, including superpests, the climate-food-finance nexus, and pandemic preparedness.
Delivered briefings to several NATO governments and UN agencies on global food security, nuclear winter impacts, policy considerations and resilience options.
Appeared in major media outlets such as BBC Future and The Times.
Improved internal operations, including registering as a 501(c)(3) non-profit.
Delivered 20+ presentations and attended 30+ workshops / events / conferences.
Hired 6 research associates, filled 4 operations roles, and brought on 5 interns and 42 volunteers.
ALLFED is funding-constrained and appreciates any donations. The heightened geopolitical tensions from the Russo-Ukrainian conflict create a time-limited policy window for bringing their research on food system preparedness to the forefront of decision-makers’ minds.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Thanks! And good call, sorry for missing that one—added it into the post :-)
Post summary (feel free to suggest edits!):
Argues that statements by large language models that seem to report their internal life (eg. ‘I feel scared because I don’t know what to do’) aren’t straightforward evidence either for or against the sentience of the model. As an analogy, parrots are probably sentient and very likely feel pain, but when a parrot says ‘I feel pain’, that doesn’t mean it is in pain.
It might be possible to train systems to more accurately report whether they are sentient, by removing other incentives for saying conscious-sounding things and training them to report their own mental states. However, this could advance dangerous capabilities like situational awareness, and training on self-reflection might itself be what ends up making a system sentient.
(This will appear in this week’s forum summary. If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Some interventions are neglected because they have less emotional appeal. EA typically tackles this by redirecting more resources to them. The authors suggest we should also tackle the cause, by designing marketing that makes these interventions more emotionally appealing. This could generate significant funding, more EA members, and faster engagement.
As an example, the Make-A-Wish website presents specific anecdotes about a sick child, while the Against Malaria Foundation website focuses on statistics. Psychology research shows the former is more effective at generating charitable behavior.
Downsides include potential organizational and personal value drift, and a reduction in relative funding for longtermist areas if these are harder to produce emotional content for. The authors have high uncertainty and suggest a few initial research directions that EAs with a background in psychology could take to develop this further.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Since Sandra Malagón and Laura González were funded to work on growing the Spanish-speaking EA community, it’s taken off. There have been 40 introductory fellowships, 2 new university groups, 2 camps, many dedicated community leaders, translation projects, a 7-fold increase in Slack activity vs. 2020, and a community fellowship / new hub in Mexico City. If you’re keen to join in, the Slack workspace is here, and anyone (English- or Spanish-speaking) can apply to EAGxLatAm.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)