Here’s just the headings from the updates + implications sections, lightly reformatted. I don’t necessarily agree with all/any of it (same goes for my employer).
Updates
Factual updates (the world is now different, so the best actions are different)
Less money — There is significantly less money available
Brand — EA/longtermism has a lot more media attention, and will have a serious stain on its reputation (regardless of how well deserved you think that is)
Distrust — My prediction is that if we polled the EA community, we’d find EAs have less trust in several institutions and individuals in this community than they did before November. I think this is epistemically correct: people should have less trust in several of the core institutions in the community (in integrity; in motives; in decision-making)
Epistemic updates (beliefs about the world I wish I’d had all along, that I discovered in processing this evidence)
Non-exceptionalism — Seems less likely that a competent group of EAs could expect to do well in arbitrary industries / seems like making money is generally harder (which means the estimate of future funding streams goes down beyond the immediate cut in funding)
Dangerous ideas — We should be more worried that aspects of our memeplex systematically increase the risk of people taking extreme actions that are harmful
By the book — The robustness that comes from doing things by the book seems more important
Uncompromising utilitarianism — We should be more worried about people orienting to utilitarian arguments in absolutist ways that don’t admit other heuristics
Tribalism — I’m more worried that people identifying as EAs is net destructive
Conflicts — I’ve moved towards thinking conflicts of interest, broadly understood, are frequent and really guide people’s thinking
Integrity — I think that upholding consistently high standards of integrity is particularly important
Taking responsibility — Diffusion of responsibility for cross-cutting issues for the EA community can mean nobody works on them
Complicity — Tacit tolerance of bad behaviour is a serious issue
Implications
Implications for object-level work:
We should be a bit more positive on people doing crucial work within established institutions
We should have a somewhat higher bar for funding things
We should consider lower salaries
We should care a bit more that plans look robustly good
We should be a bit more positive on research distillation
Implications for community-building activities:
Content (reading lists, talks, etc.) should:
Bit more positive on content from outside EA
Bit more tools-driven, and a bit less answers-driven
Bit more emphasis on the value of looking at things from several perspectives
Focus a bit more on social epistemology
The vibe of community-building activities should:
Lean a bit further away from encouraging people to identify as EA
Lean a bit further away from “we have the answers” and towards “we’re giving you the questions”
Send somewhat fewer in-group signals
Focus on building a culture which is high-integrity
Focus on building a culture which treats consequentialist analysis as just one tool in the toolkit
Focus on building a culture which asks people to make sure they know who has responsibility for things
Structurally, community-building activities should:
Put somewhat lower estimates on the monetary value of outcomes or programs
Be more transparent about these valuations and other tools for decision-making about community building
Scale down activities a little (or slow the growth trajectory)
Scale down salaries a bit
Implications for central community coordination:
We should lean a bit further towards professionalism
We should lean a bit further towards transparency
We should consider creating mechanisms for anonymously sharing updates/impressions
Orgs should be very explicit about what they are and aren’t taking responsibility for
Coordination mechanisms should facilitate making sure someone is taking responsibility for important things
We should ensure that people can access some core discussions by application, not just by networking
We should lean a bit more towards legible invite criteria, especially for flagship events like Coordination Forum
We should lean a bit further towards frugality
Implications for governance:
We should increase oversight of projects and decisions
We should increase transparency of governance
We should err towards doing more impact analyses
Projects and orgs should invite accountability primarily for whether they took responsibility for the right things, and how those things went
We should give less weight to straightforward consequentialist PR arguments
We should spread governance work over more people