Suggestion: EAs should post more summaries and collections
Tl;dr: I recommend that more EAs post summaries of useful ideas and collections of quotes, sources, definitions, terms, etc. Collections in particular can be fairly easy to make when you’re learning about or researching a topic anyway. And both summaries and collections can make the road easier for those who follow.
The problem
Effective altruism is still pretty new, as are several of its allied communities (e.g., the rationality community) and many of the topics and concepts of interest to EAs. It seems to me, and I believe to many others, that this leads to the following situations cropping up surprisingly often:
an interesting/useful idea or topic is discussed, but mostly just in person, or in fragmented ways in various places; there’s no good central write-up, summary, or way to find sources on the topic
an interesting/useful term or concept is discussed but never clearly defined, or is used/defined in various somewhat different ways
various different terms are introduced or used for seemingly quite similar or at least related things, without the sources noting the other related terms
Essentially, a lot of things of interest to EAs, and a lot of work by EAs, seem not yet to be neatly consolidated and easily searchable. This makes it harder to learn from and build on the work of others. It may also lead to:
some avoidable wheel-reinvention
a confusing proliferation of multiple terms for the same concept
a confusing proliferation of multiple concepts/definitions for the same term
So summarise!
One solution to this is for people who are learning about or researching something anyway to:
Make a link post to a summary of the thing, if a decent summary exists but isn’t already on the EA Forum or LessWrong
Write a summary of the thing themselves, if no decent summary exists at all, or one exists but doesn’t quite capture how EAs want to use that thing
I think that solution is great (as does the forum’s moderation team). And I’ve been doing some of that summarising myself.
For example, a bunch of great stuff had already been written on differential progress/intellectual progress/technological development, information hazards, and moral uncertainty. But a lot of those writings were long and academic, or assumed a certain prior familiarity with the topic.
When learning about those topics out of general interest, or in hopes of writing things that built on existing work, I discovered the lack (in my view) of central, accessible summaries on them. So I figured I should try to fill those gaps. I saw this as adding value simply by collecting together, in an accessible way, all the most important points (at least regarding the concepts themselves, not always all of their implications).
And I’ve found similar summary posts by other EAs really helpful myself, and would be keen to see more EAs writing summaries like that. You could set out purposely to do that, or just learn about things and wait until you happen to stumble upon an interesting topic for which no decent summary exists.
(See also “research debt” and “research distillation”.)
So collect!
But writing summaries can take a decent amount of effort and time, and can be challenging. So a quite low-effort, easy alternative I’ve started doing is posting “collections” of:
Quotes
E.g., the long reflection seems like an important idea, but discussions of it that are visible on the internet seemed quite rare. So I collected all of the relevant quotes I could find, along with a handful of other somewhat relevant sources or concepts.
My hope was that this would bring the idea to more people’s attention, give people a better understanding of it, and make it easier for people to build on the idea.
Secondarily, I thought it might help highlight just how little work seemed to have been done on the idea (at least based on what was visible on the internet), and thus prompt people to just get cracking on it, because they really may have a decent chance of coming up with some valuable, new insights.
(I’m using past tense because Toby Ord’s book may have changed matters somewhat, which is exciting.)
Sources on a topic
E.g., when researching information hazards, downside risks, the unilateralist’s curse, civilizational collapse, differential progress, etc. for various posts I was writing, I was making notes of all the relevant sources I found, just to help myself out. And then I realised that I’d noted a decent number of relevant sources, and hadn’t found such a list in my travels, so I posted those collections.
(I don’t think I’d bother posting collections for less obscure topics, such as existential and global catastrophic risks, because in those cases it might be easier for people to just find a recent paper/post and go through its sources.)
Definitions
E.g., I was writing a post that mentioned global catastrophic risks, and realised I wasn’t really sure what the definition of that term was. When I looked into it, I found there were several different definitions floating about. So I collected all the definitions I found.
Terms
E.g., I was writing some posts about probabilities, and realised there were a variety of different terms/descriptions that seemed to relate to a relatively similar concept of how “trustworthy” or “grounded” a given probability is. So I collected all such terms/descriptions which I’d found.
Resources in a particular medium (videos, in my case; podcasts, in the case of a prior list)
In each case, making and posting the collection took between almost no time and not much time, because I was already learning about the topic and collecting things for my own sake anyway.
I’ve also seen other EAs usefully collect:
activities EAs could usefully do (other than working at an EA organisation)
EA-relevant research topics, questions, or projects that could be productively investigated (e.g., here, here, and here)
I’d be keen to see more EAs posting such collections. It seems to me that, as with summaries, such collections should make it easier for people to learn of, learn about, and build on interesting ideas and topics.
Ideally, I’d hope that other people would comment to add to the collections, so that they can serve better as go-to lists for whatever they’re about. This makes the collections more valuable to future readers, as well as to the person who originally made the collection. To encourage this, I’ve always noted in my collections something like “I intend to add to this list over time. If you know of other relevant work, please mention it in a comment”—and indeed, a few kind souls have usefully done so.
As above, but with extra exclamation marks!
So (I gently suggest, to those who feel this would suit them) go forth and get summarising! Build paths as you go about your travels!
And get collecting! Leave a trail of breadcrumbs where you walked, and signposts to interesting regions you haven’t had time to visit yourself!
And then hopefully those coming after you can travel these paths and visit these regions more easily. And hopefully they can race ahead, or to the side, or lay further paths and breadcrumbs and signposts. And hopefully this can all add up to a relatively low-effort way to help our community do good a little better.
P.S.
It also seems to me that summarising and collecting might be able to serve as one small version of something like a “Task Y”: i.e., something which has “some or all of the following properties:
Task Y is something that can be performed usefully by people who are not currently able to choose their career path entirely based on EA concerns*.
Task Y is clearly effective, and doesn’t become much less effective the more people who are doing it.
The positive effects of Task Y are obvious to the person doing the task.”
Relatedly, I’m quite keen on the idea of writing and posting short literature reviews. I suspect that when people look into a topic, they sometimes spend 10-40 hours looking into particular sub-topics, just to inform their own decision-making. It doesn’t take much more time to write up your own notes in the format of a literature review and then spend a little while editing it at the end for clarity.
That’s what I did with my short lit reviews on issues related to developing and training management and leadership expertise. Charity Entrepreneurship also write short reports on each of the main options that they consider in various stages of their recommendation process.
One danger of this approach is that by formatting it as a literature review you might come across as overconfident in your findings (even with lots of caveats), and leave yourself open to accusations of low rigour.
Yeah, I think that’d be another great thing for people to do. Although I’d add that that might be especially valuable when there’s no existing non-EA lit review on the topic (e.g., when the topic is somewhat obscure, or you’re taking a particular angle on a topic that EAs are unusually interested in).
E.g., perhaps there’s already a fairly good lit review on hiring practices in general, and you could add some value by writing a more summarised or updated version. But you might also be able to capture a lot of that value just by making a link post to that review on the forum and noting/summarising a few key points. Meanwhile, there might be no lit review that’s focused on hiring practices for start-up-style nonprofits in particular, so writing that might be especially worthwhile (if that’s roughly the subtopic/angle you were interested in anyway).
On the other hand, I think I would’ve guessed that the topic “how to create a highly impactful/disruptive research team” would be quite far from obscure, and would already have a solid, up-to-date lit review covering what EA people would want to know. But this post suggests there wasn’t an existing lit review with that particular angle, and the post seemed quite interesting and useful to me. So there are probably more “gaps” than I would naively expect, and thus substantial value in lit reviews on certain topics I would naively expect are already “covered”.
Great post! I second this recommendation. Because I’m gathering a lot of material in the course of working on the next edition of the EA Handbook, I’ll look for opportunities to post some of that to the Forum (in cases where doing so won’t overlap too much with the Handbook).
I think that EA Concepts did a good job mitigating this problem regarding concepts, although I’m unsure how many people read it. In general, it seems it would be better if these sorts of collections were in one place (e.g., like they are in EA Concepts), in addition to being scattered throughout the EA Forum. Maybe an EA wiki (which is already in the works) could solve such a problem, provided enough people use it. Or maybe EA Concepts could be expanded. Or maybe we should also post collections of summaries/collections on the Forum.
I agree that having some central directory or collection of the summaries/collections would be ideal. And I think all of those suggestions for achieving that are good.
Also, I think EA Concepts is great. And I think people who haven’t checked it out should, or should keep it in mind when they encounter a concept they’re unfamiliar with. (Conceptually also performs a similar function. It isn’t explicitly EA-focused, but it was made by EAs and covers a lot of concepts EAs like to use.) However:
EA Concepts doesn’t cover everything
The entries are quite short, which is of course valuable in some ways, but this also means there could be value in longer summaries that build on, go beyond, and add detail to what’s in those entries
The entries seem slightly old now, so they may not reflect the latest work, nor contain links to it. A forum post will later suffer the same issue, but it allows comments, so people could comment to add discussion of or links to more recent work. (That said, it seems relatively rare for people to comment on older posts, which I think is a shame.)
This suggests that one possible solution would be for the people behind EA Concepts to crowdsource (and then vet) new entries, and/or updated versions of existing entries.
P.S. #2: There are three questions I have about how one should do the “collections”, which didn’t seem necessary to include in the above post itself, but which I’d be interested in people’s thoughts on.
1. Should the collections be full posts or shortform comments?
So far, I’ve posted all my collections as “shortform” comments, rather than full posts, except one (Quotes about the long reflection). This is because most have been quite short, and pretty much just lists, so they didn’t feel substantial or refined enough to be full posts.
But shortform comments aren’t displayed as prominently upon posting or when using the search function (the search function will show the user’s name but not any snippet of the comment). So there’s a higher chance that people who would’ve been interested in the collection will miss it.
That might make sense, as people have limited attention, and these collections are arguably not especially important. But I’m unsure precisely where to draw the line. And I think it’d be nice if they were easier to find through the search function, at least (e.g., if the first few words of shortform comments were displayed when searching, or something like that).
2. Is my approach to collecting sources actually useful, given that relevant papers will have reference lists?
I think the answer is “Yes”, given that:
relevant papers’ reference lists will contain many not especially relevant sources
relevant papers’ reference lists will typically not contain “informal sources” like blog posts or podcasts, despite these often being quite useful, especially for topics neglected thus far by mainstream academia
I’m collecting sources on relatively neglected topics (e.g., the unilateralist’s curse), or topics where the work so far may be scattered, use different keywords, and/or be hard to spot among lots of other somewhat related work that isn’t quite what I/EAs are after (e.g., for civilizational collapse, arguably)
EAs might not necessarily even think to look into the topic (i.e., the collection might help EAs learn of or come to care about the topic), or might miss the relevant papers in the first place
But I’m not sure about this. Maybe it actually wouldn’t be worth people’s time to make collections of sources in particular. (Although, again, the time investment can be pretty small when you’re learning about a topic anyway.)
EDIT: I’m now more convinced that collecting sources is useful, and upvotes of my collections seem to provide weak evidence in support of that.
3. Would it be better to spend the same amount of time producing fewer, more sophisticated collections, as proper databases or something (e.g., using Mendeley)?
Somehow I didn’t learn about these more sophisticated approaches at all during my university studies or when writing a paper, so I don’t know how those approaches work or how valuable they are, and thus I can’t really answer this. But maybe this is the approach I and/or others should be taking, either when posting these collections or just when doing research in general.
Update: The new tags system now captures some of the value “collections” can provide (which is great). Though there are two aspects* of the tag system that mean collections can still add value:
You can only tag things that are already on the EA Forum. Many relevant sources will be elsewhere (e.g., EA-aligned organisations’ own sites, LessWrong, Wikipedia, books). Indeed, the fact that relevant sources are dispersed across the web was always a key reason I saw “collections” as useful.
There aren’t tags for all topics. (Arguably, this is a feature, not a bug, but it still provides room for collections to add value.)
So I think there can still be value in collecting sources even on topics that now have tags. In such cases, it seems it would be best to ensure that the EA Forum posts with the relevant tag are in the relevant collection, and vice versa.
And more obviously, there can still be value in collecting sources on topics for which there isn’t a tag. And perhaps if such a topic later gets a tag, people can easily get that tag started using the collection.