I really enjoyed "What posts are you planning on writing?"
This is the lazy version, for people who want a post to exist but want someone else to write it. Given that we're all stuck inside anyway, I'm hoping we can use this opportunity to get a lot of writing done (like Isaac Newton!)
So: What are some unwritten posts that you think ought to exist?
If you want something to exist badly enough that you'd pay for it, consider sharing that information! There's nothing wrong with offering money; some really good work has come from small-scale contracting of this kind.
More journalistic articles about EA projects.
I don't necessarily mean "written by journalists", though there's been a lot of good journalistic coverage of EA.
I mean "in the style of long-form journalism": telling an interesting story about the work of a person/organization, while mixing in the origin story, interesting details about the people involved, photos, etc.
Examples of projects I think could get the journalistic treatment:
Charity Entrepreneurship (the story of an entire incubation program class)
Wave
The EAF Zurich ballot initiative
The Center for Election Science (covering their Fargo approval voting campaign and their work in St. Louis)
The All-Party Parliamentary Group for Future Generations
That's a super cool idea.
What writing currently exists like this? Vox's Future Perfect, maybe a few one-off articles in other major publications?
Where's best to publish this? It feels like a lot of work for a blog post, but I doubt the NYT is looking for unsolicited submissions. Are there publishing platforms that would be interested in this?
Future Perfect and a few one-off articles, mostly. Tom Chivers is a journalist with strong EA leanings who routinely writes from that perspective.
I wasn't thinking that these stories would have to be published by a large media outlet; I just want them to exist somewhere so that I can share them with people who are new to the movement.
Getting published on a wider platform could be great for certain orgs (e.g. Wave is just a business, and I imagine they wouldn't mind the attention), but bad for others (CSET generally keeps its work fairly private). I'd hope that anyone writing one of these hypothetical stories would check the org's publicity preferences before submitting a story anywhere!
I read in an as-yet-unpublished post that the best approach for getting published in a major outlet without being on their staff is not to just write something and then send it to various publications, but rather to pick an outlet and optimise the piece (or versions of it) for that outlet's style, topic choices, readership, etc. (I'm not sure what the evidence base for that claim was, and I have no relevant knowledge of my own.)
If that is a good approach, one could still potentially pick a few outlets and write somewhat different versions for each, rather than putting all their eggs in one basket. Or write one optimised version at a time, and not invest additional effort until that one is rejected. And one version could also be posted to the EA Forum and/or Medium and/or similar places in the meantime. (Unless that would reduce the odds of publication by a major outlet?)
Makes a lot of sense. I'm sure Vox and the New York Times are interested in very different kinds of submissions, so writing with a particular style in mind probably dramatically increases the odds of publication.
I still wonder what the success rate here is: closer to 1% or to 10%? If the latter, I could see this being pretty impactful and possibly scalable.
I'd be really interested in reading an updated post that makes the case for there being an especially high (e.g. >10%) probability that AI alignment problems will lead to existentially bad outcomes.
There still isn't a lot of writing explaining the case for existential misalignment risk. And a significant fraction of what's been produced since Superintelligence is either: (a) roughly summarizing arguments in Superintelligence, (b) pretty cursory, or (c) written by people who are relative optimists and are in large part trying to explain their relative optimism.
Since I have the (possibly mistaken) impression that a decent number of people in the EA community are quite pessimistic about existential misalignment risk, on the basis of reasoning that goes significantly beyond what's in Superintelligence, I'd really like to understand this position a lot better and be in a position to evaluate the arguments for it.
(My ideal version of this post would probably assume some degree of familiarity with contemporary machine learning and contemporary safety/robustness issues, but no previous familiarity with arguments that AI poses an existential risk.)
My understanding is that Toby Ord does just this in his new book The Precipice (his new AI x-risk estimate is also discussed in his recent 80K podcast interview about the book), though it would still be good to have others weigh in.
I think that chapter in The Precipice is really good, but it's not exactly the sort of thing I have in mind.
Although Toby's less optimistic than I am, he's still only arguing for a 10% probability of existentially bad outcomes from misalignment.* The argument in the chapter is also, by necessity, relatively cursory. It's aiming to introduce the field of artificial intelligence and the concept of AGI to readers who might be unfamiliar with them, explain what misalignment risk is, make the idea vivid to readers, clarify misconceptions, describe the state of expert opinion, and add in various other nuances, all within the span of about fifteen pages. I think it succeeds very well in what it's aiming to do, but I would say that it's aiming for something fairly different.
*Technically, if I remember correctly, it's a 10% probability within the next century. So the implied overall probability is at least somewhat higher.
I see, thanks for the explanation!
A post making the case for donating now rather than later.
Patient philanthropy (donating later) has been gaining ground within the EA community. While there's been some critical discussion, there hasn't been a post making the positive case for donating now since the very early days of EA.
At the suggestion of this question post, I'll offer to pay $200 for a good post in this direction. One caveat: I think most of the value comes from a very good post in this direction, so my bar will be pretty high.
Has anyone done this yet? If so, I'd be interested in the article; otherwise I'd be interested in giving it a go.
This post probably qualifies, but I didn't love it. I'd pay out if you wrote a good one. But see the note about my bar being high; I definitely don't want to make promises.
I'm not aware of anything recent that was explicitly pro-"give now". There are some semi-recent posts that weigh both sides of the debate but draw "it depends"-type conclusions. I'd be interested to see your take!
You can see posts on this topic collected in the "timing of philanthropy" tag.
I want a post on how to be a good donor.
Context: I work with a small foundation that asks a lot of questions when we investigate charities. We sometimes worry that we're annoying the charities we work with without providing much value for them or for ourselves, especially since we don't make grants on the same scale as larger foundations. Even when they tell us our questions are helpful/reasonable, they obviously have a strong incentive to make us feel happy and valued.
Ideal version of this post: Someone goes to a lot of EA orgs, asks them questions related to the above dilemma, and reports the results.
Other general questions about "what donors should know" would also be neat: How should someone with no special preferences time their donations? How much more valuable is unrestricted funding than restricted funding? And so on.
This comment was pointed out to me by someone who thought I might be extremely well qualified to answer it.
I have been on the trustee boards of about half a dozen charities and performed short-term consulting stints (about a month at a time) for another 10 or so charities globally, and have seen how each of those engages with its donors.
I also know people and organisations surveying charities about questions like how much more valuable unrestricted funding is than restricted funding.
I would be happy to put something together on this topic. However, I'm snowed under with other things for the time being; could I add it to the list and tackle it later?
I also think this'd be useful.
Though I wonder why you suggest that someone should ask these questions of a lot of EA orgs in particular? Did you also mean orgs that aren't explicitly "EA orgs" but that many EAs see as high-value donation opportunities? And is it possible it'd also be valuable to ask non-EA foundations about their practices and thoughts on this matter, at least as an interesting, quite different example?
If people want to ask other charities, that also seems fine! I suppose I was assuming that EA charities probably do more engagement with small donors (in the sense of "answering lots of questions about their work") than most other charities, and that they might be easier to contact for someone who reads the Forum and sees my post. But I'd guess there would be more value in having a wider sample of organizations.
That all seems to make sense.
As someone dubiously planning a career affiliated with the U.S. Department of Defense, I would really appreciate an analysis of working inside and outside of The System. Historically, have altruists been able to do good from within harmful governments (fascist dictatorships, military juntas, genocidal governments, etc.)? How? Which qualities do altruism-friendly systems have?
YES! Someone do this one!
I'm interested in this too.
In-depth stories of people who had a lot of impact, and the rules of thumb they used / how they navigated key decision points, with the intention of drawing lessons from them.
E.g. interview Holden or Bostrom about each key moment in their career, the challenges and decisions they faced, and how they navigated them.
They wouldn't need to be within EA. It would also be great to have more examples of people like Norman Borlaug, Viktor Zhdanov, and Petrov, but ideally focusing on (i) new examples, (ii) people who were deliberately trying to have a big impact, and (iii) more interrogation of the strategies they used and how things might have gone differently.
You could write it up as a case study, podcast interview, or journalist-style story.
It would be like Open Phil's history of philanthropy project, but focused on individual actors.
Here's an example of something in the genre: https://www.vox.com/22397833/dexamethasone-coronavirus-uk-recovery-trial
Though ideally it would contain a bunch more detail about the specific decisions they faced, what rules of thumb they used, how they'd ended up in a position to do this kind of thing, etc. More critical analysis of their impact vs. the counterfactual would also be good.
Governance innovation as a cause area
Many people are working on new governance mechanisms from an altruistic perspective. There are many sub-categories, such as charter cities, space governance, decentralized governance, and the RadicalXChange agenda.
I'm uncertain as to the marginal value of such projects, and I'd like to see a broad analysis that can serve as a good prior and analysis framework for specific projects.
Here's an analysis by 80k: https://80000hours.org/problem-profiles/improving-institutional-decision-making/
This is not quite what I was going for, even though it is relevant. This problem profile focuses on existing institutions and on methods for collective decision making. I was thinking more in the spirit of market design, where the goal is to generate new institutions with new structures and rules so that people are selfishly incentivised to act in a way which maximizes welfare (or something else).
I think the framing is weird because of EA's allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.
No, the analysis does not seem to contain what I was going for.
Curious about what you think is weird in the framing?
The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.
Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision making like improving calibration.
I doubt that there is any one answer re the marginal value of such projects, because the value depends on what is being governed. For instance, I think a successful implementation of regulatory markets for AI safety would be very valuable, but regulatory markets for corporate law wouldn't be; yet the same basic framework is being implemented.
For this reason, I'd be more interested in analysis of governance innovation for a particular cause area.
A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of the most surprising facts: the huge distance, 1.5 centuries, between the scientific and industrial revolutions. It could also shed light on the old marginal vs. systemic argument: a synthesis is "do politics, in order to promote nonpolitical processes!"
https://forum.effectivealtruism.org/posts/RfKPzmtAwzSw49X9S/open-thread-46?commentId=rWn7HTvZaNHCedXNi
An analysis of how knowledge is constructed in the EA community, and how much weight we should assign to ideas "supported by EA".
The recent question on reviews by non-EA researchers is an example of that. There might be great opportunities to improve EA intellectual progress.
Ooh, I would also very much like to see this post
An AMA from someone who works at a really big foundation that leans EA but isn't quite "EA-aligned" in the same way as Open Philanthropy (e.g. Gates, Rockefeller, Chan/Zuckerberg, Skoll).
I'm interested to hear how those organizations compare different causes, distribute resources between areas/divisions, evaluate the impact of their grantmaking, etc.
Similarly, an AMA from someone working at an EA org who otherwise isn't personally very engaged with EA. Maybe they really disagree with EA, or, more likely, they're new to EA ideas and haven't identified with EA in the past.
They'll be deeply engaged on the substantive issues but will bring different identities and biases, maybe offering important new perspectives.
I think it would be cool if someone wrote a post about Bob Purifoy. He's mentioned several times in Command and Control; briefly, he was an engineer and then a manager at Sandia National Laboratories who was influential in nuclear security basically by just being extremely stubborn and motivated by safety. He gave a huge number of briefings (I want to say the number was in the thousands, but I can't find the reference right now) to policymakers, and occasionally stretched the rules to make nuclear weapons technology more secure.
I think it might provide a helpful model for how people can promote safety within large bureaucracies, even if they are not a top executive.
(I thought at one point I had found a eulogy which gave more information about his work, but I can't find it now. Possibly someone could reach out to Eric Schlosser, the author of Command and Control, to see if he has more information.)
A post re-examining the suffering impact of veganism in countries with good average livestock welfare in many product categories. New Zealand, for instance, has grass-fed cows as a norm; egg hens are usually required to have decent amounts of space and won't appear to be especially stressed; and the main supermarket chain Countdown just switched to providing mostly "free farmed" pork (birthing sows seem entirely free, but pigs destined for market are moved to barns that are only limitedly free). (This excludes non-store-brand pork-based products, but the store-brand bacon looks pretty good quality, so it might be popular enough.)
I get the impression that we're unlikely to receive this kind of analysis through most channels promoting animal welfare. They might not want to tell you about the good parts. I tend to encounter a lot of Copenhagen ethics and consent arguments (which can't be addressed by improving conditions no matter how much you improve them, which is a bit of a reductio ad absurdum of consent arguments).
It may help to draw attention to good policies, focus attention on the worst offenders, and occasionally improve EA nutrition. Promoting animal welfare within the industry is likely to accelerate incremental change from within. Stockpeople who are doing especially well in limiting animal suffering will tend to be proud of their way of doing things and to want to promote it to legislators, for both moral and economic reasons.
Having resources like this may also help one come across as balanced and informed when discussing local animal welfare.
Regarding "change from within", I have since found confirmation from the excellent growth economist Mushtaq Khan (https://80000hours.org/podcast/episodes/mushtaq-khan-institutional-economics/): people within an industry are generally the best at policing others in the industry; they have the most energy for it, they know how to measure adherence, and they often have inside access. Without them, policing corruption often fails to happen.
Defining "management constraints" better.
Anecdotally, many EA organizations seem to think that they are somehow constrained by management capacity. My experience is that this term is used in different ways (for example, some places use it to mean that they need senior researchers who can mentor junior researchers; others use it to mean that they need people who can do HR really well).
It would be cool for someone to interview different organizations and get a better sense of what is actually needed here.
A detailed study of hyper-competent ops people.
What makes these people so competent? What tools and processes do they use to manage information and set priorities? What does the flow of their workday look like; mostly flitting around between tasks, or mostly focused blocks of time? (And so on.)
More accessible summaries of technical work. Some things I would like summarized:
1. Existential risk and economic growth
2. Utilitarianism with and without expected utility
(You can see my own attempt to summarize something similar to #2 here, as one example.)
On 2, see this post (a link post for this).
I also left some comments on the EA Forum post pulling out the first two theorems and the definitions needed to state them, in a way that's hopefully a bit more accessible: skipping some unnecessary jargon and introducing notation only just before it's used, rather than at the start so you have to jump back. They're still pretty technical, though. Upon reflection, it probably took me more time to write the comments than it'll save people to read my comments instead of reading the parts of the paper where they're found. :/
There are also several other theorems in that paper.
Thanks! Yeah, I should have noted that both of these have now been published. So if anyone else reading this has a request for posts that they haven't stated publicly, consider doing so!
I'm thinking of doing (1). Is there a particular way you think this should look?
How technical do you think the summary should be? The thing that would be easiest for me to write would require some maths understanding (e.g. basic calculus and limits) but no economics understanding. E.g. about as technical as your summary, but with more maths and less philosophy.
Also, do you have thoughts on length? E.g. do you think a five-page summary is substantially more accessible than the paper, or would the summary have to be much shorter than that?
(I'm also interested in what others would find useful.)
Awesome!
I personally would suggest a format of:
1. One-paragraph summary that any educated layperson can easily understand
2. One-page summary that a layperson with college-level math skills can understand
3. 2-5 pages of detail that someone with college-level math and Econ 101 skills can understand
This is just a suggestion, though; I don't have a lot of confidence that it's correct.
Now done here. It's a ~10-page summary that someone with college-level math can understand (though I think you could read it, skip the math, and get the general idea).
You rock, thanks so much!
"American UBI: for and against"
"A brief history of Rosicrucianism & the Invisible College"
"Were almost all the signers of the Declaration of Independence high-degree Freemasons?"
"Have malaria case rates gone down in areas where AMF did big bednet distributions?"
"What is the relationship between economic development and mental health? Is there a margin at which further development decreases mental health?"
"Literature review: Dunbar's number"
"Why is Rwanda outperforming other African nations?"
"The longtermist case for animal welfare"
"Philosopher-Kings: why wise governance is important for the longterm future"
"Case studies: when has democracy outperformed technocracy? (and vice versa)"
"Examining the tradeoff between coordination and coercion"
"Spiritual practice as an EA cause area"
"Tools for thought as an EA cause area"
"Is strong, ubiquitous encryption a net positive?"
"How important are coral reefs to ocean health? How can they be protected?"
"What role does the Amazon rainforest play in regulating the North American biosphere?"
"What can the US do to protect the Amazon from Bolsonaro?"
"Can the Singaporean governance model scale?"
"Is EA complacent?"
"Flow-through effects of widespread addiction"
"The longtermist case for animal welfare"
Have you seen this? https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals
I hadn't, thanks!
I'd be interested in a post by a historian (or very serious amateur historian) on what EAs can learn from the rise and fall of Mohism, the earliest proto-consequentialist school of philosophy/social movement that I'm aware of.*
*I'd also be interested in a more general summarization post detailing other proto-consequentialist schools of philosophy and social movements.
"Type errors in the middle of arguments explain many philosophical gotchas: 10 examples"
"CNS imaging: a review and research agenda" (high decision relevance for moral uncertainty about suffering in humans and non-humans)
"Matching problems: a literature review"
"Entropy for intentional content: a formal model" (AI-related)
"Graph traversal using negative and positive information, proof of divergent outcomes" (potentially neuroscience-relevant)
"One weird trick that made my note-taking 10x more useful"
Do you mind expanding a bit on CNS Imaging, Entropy for Intentional content, and Graph Traversal?
Investigations into promising new cause areas:
For instance, take one of the issues listed here.
Then interview 2-3 people in the area about (i) what the best interventions are and (ii) who's currently working on it. Write up a summary, and add any of your own thoughts on how promising more work in the area seems.
You could use Open Phil's shallow cause reports as a starting template: https://www.openphilanthropy.org/research/cause-reports
I'd appreciate a forum post or workshop about how to interpret empirical evidence. Jennifer Doleac gives a lot of good pointers in the recent 80,000 Hours podcast, but I think the EA and public policy communities would benefit from a more thorough treatment.
Should Covid-19 be a priority for EAs?
A scale-neglectedness-tractability assessment, or even a full cost-effectiveness analysis, of Covid as a cause area (compared to other EA causes) could be useful. I'm starting to look into this now; please let me know if it's already been done.
I was asking myself the same question.
What an overdetermined EA is, what the evidence is for them existing, and what it implies for community building strategy.
Evidence that the next ~10 years might be especially influential in terms of community building
Negative income taxes > UBI?
A short mathematical demonstration of how negative income taxes compare to UBI in terms of economics 101.
Here's a thread in an EA group about the topic.
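As a starting point for that demonstration, here's a minimal sketch of the economics-101 comparison. All the numbers (a $12,000 guarantee, a 30% flat rate) are purely illustrative assumptions, not claims about any real proposal. The point it illustrates: a UBI financed by a flat tax on all earnings and a negative income tax that taxes only earnings above the breakeven point, at the same rate, produce identical net-income schedules.

```python
def ubi_net(y, grant=12_000, t=0.3):
    """Net income under a UBI: universal grant plus earnings,
    with a flat tax at rate t on all earnings."""
    return grant + y * (1 - t)

def nit_net(y, guarantee=12_000, t=0.3):
    """Net income under a negative income tax: below the breakeven
    point (guarantee / t) you receive a top-up of t * (breakeven - y);
    above it, you pay t only on earnings that exceed the breakeven."""
    breakeven = guarantee / t
    if y < breakeven:
        return y + t * (breakeven - y)   # subsidy phases out as y rises
    return y - t * (y - breakeven)       # ordinary tax above breakeven

# The two schedules coincide at every income level:
for y in [0, 10_000, 40_000, 100_000]:
    assert abs(ubi_net(y) - nit_net(y)) < 1e-9
```

The economics-101 upshot is that the interesting differences between the two are administrative and political (who files, how payments are timed, how visible the grant is), not the shape of the budget constraint itself.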
Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of pandemic in 2015; what is his average though? (This is a bad example though, since I expect his advisors to be world-class and to totally suppress his variance.)
If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
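To make "calibration curves for pundits" concrete, here's a sketch of the underlying computation; the pundit track record below is entirely made up for illustration. Each prediction is a (stated probability, outcome) pair; the curve bins the stated probabilities and compares each bin's label with the observed frequency of the outcomes in it.

```python
from collections import defaultdict

def calibration_curve(predictions):
    """Group (stated_probability, outcome) pairs into 10%-wide bins and
    return, for each bin, the observed frequency of the outcome."""
    bins = defaultdict(list)
    for p, outcome in predictions:
        bins[round(p, 1)].append(outcome)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# Hypothetical track record: (claimed probability, did it happen? 1/0)
record = [(0.9, 1), (0.9, 1), (0.9, 0), (0.6, 1), (0.6, 0), (0.1, 0), (0.1, 1)]
curve = calibration_curve(record)
# A well-calibrated pundit's observed frequencies track the bin labels;
# here the 0.1 bin came true half the time, a sign of overconfident "no"s.
```

With real pundits the hard work is upstream of this code: turning vague public statements into resolvable predictions with stated probabilities in the first place.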
A post about when we should and should not use "lives saved" language in describing EA work.
I find that telling people they can save a life for $5000 often leads to a lot of confusion: Whose life is being saved? What if they die of something else a few months later? Explaining QALYs isn't too hard if you have a couple of minutes, but you often have a lot less time than that.
Is there some shorthand we can use for "giving 50 healthy years, in expectation, across a population" that makes it sound anywhere nearly as good as simply "saving a life"? How important is it to be accurate in this dimension, vs. simply allowing people to conflate QALYs/VSL with "saving a specific person"?
Credible qualitative and/or quantitative evidence on the effectiveness of habits, tools, and techniques for knowledge work.
I think it would be really interesting for someone to write about the intellectual history of environmental ethics and animal ethics, and probably environmentalism more broadly. The rift between them dates back at least to the 1980s, and I think it's important for EAs interested in environmentalism or (wild) animal welfare to understand how they're building on and situated in this discourse.
(Inspired by the recent 80K episode on the intellectual history of x-risk.)
The implications of Brexit for the potential to do good when located in the UK.
Plausibly lessened if the UK has less influence on the world stage. I appreciate this may be seen as a somewhat political post, but I think it may be possible to write it without actually passing judgement on whether Brexit was a good or bad thing on the whole.
I'd be excited to read this.
I want people to write posts about their jobs, and how they got those jobs. I think this will help a lot of people, both with object-level information about getting particular jobs, and by making a meta-level statement that it's not impossible or unrealistic to get a job in EA.
Posts on how people came to their values, how much individuals find themselves optimizing for certain values, and how EA analysis is/isn't relevant. Bonus points for resources for talking about this with other people.
I'd like to have more "Intro to EA" convos that start with, "When I'm prioritizing values like [X, Y, Z], I've found EA really helpful. It'd be less relevant if I valued [A, B, C] instead, and it seems less relevant in those times when I prioritize other things. What do you value? How/when do you want to prioritize that? How would you explore that?"
I think personal stories here would be illustrative.
A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.
I care about a lot of different U.S. policy issues and would like to get a sense of their neglectedness and tractability. So Iād love it if someone could do a survey to find out how many people in the U.S. work full time on various issues and how hard it is to get bills passed on them.
An interesting post (although perhaps skewing too negative in premise) would be an article on how best to reach, and what options can be advertised or made available to, the "Reluctant Effective Altruists" (REAs) of society, other than giving what they can, which is an obvious starting point.
For the purposes of the post, the REAs would be those who agree with EA in principle but are any of the following (and likely fall into more categories besides):
agree in principle but are not skilled, believe themselves to be unable, or are simply not motivated enough to gain the relevant skills for the world's most pressing problems
agree in principle and may even have the necessary skills, but are reluctant to change their focus.
those who have the skills/motivation but have dissolve-able barriers, be those personal or societal.
perhaps even those who agree in principle but have been somehow "burned" previously by altruistic endeavours gone awry.
Of course, time and attention are very precious. Perhaps reaching REAs is best done simply by expounding the logic of Effective Altruism and associated concepts as far and wide as possible… naturally some REAs will shift to become EAs, and that's the simplest route.
However, an assessment of the different categories of REA, and how to speak directly to them by matching options suited to each, may create a marginal gain that compounds over time… and wouldn't otherwise be realised.
When to use quantitative vs qualitative research
Without a framework for thinking about this, I'm often unsure what I should be learning from qualitative studies, and I don't always know when it makes sense to conduct them. (This seems related to the debate between cliometricians and counterfactual narrative historians; some discussion here, page 18.)
I don't think this can be taught in one post, because you have to be able to actually use the research methods before you can decide which one to use.
Write about the replication crisis in the style of an 80,000 Hours problem profile. Basically, write about the problem, apply the SNT framework to it, mention orgs currently working on it, mention potential career options for someone who wants to address this problem, etc.
This suggestion came after reading this post.
Posts investigating/discussing any of the questions listed here. These are questions which would be "valuable for someone to research, or at least theorise about, that the current pandemic in some way 'opens up' or will provide new evidence about, and that could inform EAs' future efforts and priorities".
If anyone has thought of such questions, please add them as answers to that post.
An example of such a question which I added: "What lessons can be drawn from [events related to COVID-19] for how much to trust governments, mainstream experts, news sources, EAs, rationalists, mathematical modelling by people without domain-specific expertise, etc.? What lessons can be drawn for debates about inside vs. outside views, epistemic modesty, etc.?"