Propose and vote on potential EA Wiki entries
2022 update: This is now superseded by a new version of the same open thread.
(I have no association with the EA Forum team or CEA, and this idea comes with no official mandate. I’m open to suggestions of totally different ways of doing this.)
Update: Aaron here. This has our official mandate now, and I’m subscribed to the post so that I’ll be notified of every comment. Please suggest tags!
2021 update: Michael here again. The EA Forum’s tag system is now paired with the EA Wiki, so proposals on this post are now for “entries”, which can mean tags, EA Wiki articles, or (most often) pages that serve both roles.
The EA Forum now has tags, and users can now make tags themselves. I think this is really cool, and I’ve now made a bunch of tags.
But I find it hard to decide whether some tag ideas are worth including, vs being too fine-grained or too similar to existing tags. I also feel some hesitation about taking too much unilateral action. I imagine some other forum users might feel the same way about tag ideas they have, some of which might be really good! (See also this thread.)
So I propose that this post becomes a thread where people can comment with a tag idea they’re somewhat unsure about, and then other people can upvote it or downvote it based on whether they think it should indeed be its own tag. Details:
I am not saying you should always comment here before making a tag. I have neither the power nor the inclination to stop you just making tags you’re fairly confident should exist!
I suggest having a low bar for commenting here, such as “this is just a thought that occurred to me” or “5% chance this tag should exist”. It’s often good to be open to raising all sorts of ideas when brainstorming, and apply most of the screening pressure after the ideas are raised.
The tag ideas I’ve commented about myself are all “just spitballing”.
Feel free to also propose alternative tag labels, propose a rough tag description, note what other tags are related to this one, note what you see as the arguments for and against that tag, and/or list some posts that would be included in this tag. (But also feel free to simply suggest a tag label.)
Feel free to comment on other people’s ideas to do any of the above things (propose alternative labels, etc.).
Make a separate comment for each tag idea.
Probably upvote or downvote just based on the tag idea itself; to address the extra ideas in the comment (e.g., the proposed description), leave a reply.
Maybe try not to hold back with the downvotes. People commenting here would do so specifically because they want other people’s honest input, and they never claimed their tag idea was definitely good, so the downvote isn’t really disagreeing with them.
Also feel free to use this as a thread to discuss (and upvote or downvote suggestions regarding) existing tags that might not be worth having, or might be worth renaming or tweaking the scope of, or what-have-you. For example, I created the tag Political Polarisation, but I’ve also left a comment here about whether it should be changed or removed.
- Our plans for hosting an EA wiki on the Forum (2 Mar 2021, 127 points)
- Editing Festival: Results and Prizes (29 May 2021, 52 points)
- Propose and vote on potential EA Wiki articles / tags [2022] (8 Apr 2022, 38 points)
- Reworking the Tagging System (24 Jan 2021, 24 points)
- Comment on EA Forum update: New editor! (And more) (5 Aug 2020, 2 points)
- Comment on EA Forum feature suggestion thread (10 May 2022, 2 points)
- Comment on Our plans for hosting an EA wiki on the Forum (3 Mar 2021, 2 points)
Political Polarisation
I already made this tag, but maybe it should be removed.
Arguments against its existence:
Not currently a very commonly discussed topic in EA
Arguably related to the tag Policy Change
Maybe there’s some other tag that would do a better job covering this and related matters. Super rough ideas: Cultural Forces; Culture, Politics, & Norms; Institutions & Norms
Arguments for its existence:
Some EAs seem quite interested in this
Interest may be increasing: There were 3 posts on the topic just this year, which each got decent to large amounts of attention
There may also be a lot of interest in this on LessWrong? If so, this may ultimately spill over to more interest here?
My shortform collection of posts on the topic got 17 karma
UPDATE: I’ve proposed the change to the tag.
Proposal: Change the EA Global tag to EA Conferences.
Many of the tagged posts are relevant to the EA Student Summit, EAGx events, etc., and the tag’s description itself refers to conference posts.
Now vs Later, or Optimal Timing, or Optimal Timing for Altruists, or some other name.
This would be intended to capture posts relevant to the debate over “giving now vs later” and “patient vs urgent longtermism”, as well as related debates like whether to do direct work now vs build career capital vs movement-build, and how much to give/work now vs later, and when to give/work if not now (“later” is a very large category!).
This tag would overlap with Hinge of History, but seems meaningfully distinct from that.
Not sure what the best name would be.
Patient Philanthropy seems like the general category. Not all of it will be about the debate over whether it’s right, but a tag that also encompasses questions like “given that I want to give later, how do I do that?” seems good.
Thanks for highlighting patient philanthropy as an option, and good point that it’d be good for this tag to not just be about the debate but also how to implement the patient approach.
I’ve now made this tag, though with the name Patient Altruism. I haven’t heard that term used, but it makes sense to me as a generalisation of patient philanthropy to also account for how to use work, not just how to use donations. I’ve now also written a shortform post arguing for the term.
One worry I have is that by saying Patient Altruism rather than Patient vs Urgent Altruism, this tag puts virtuous connotations on one side but not the other. But the version with “vs Urgent” is longer, it perhaps doesn’t as naturally include posts about how to take the patient approach, and I’ve only heard the term “urgent longtermism”, not “urgent philanthropy” (though I do suggest use of the terms “urgent philanthropy” and “urgent altruism” in that shortform post).
Heavy-tailed distributions of cost-effectiveness, or some variant thereof, would probably be good. I seem to recall there was such an entry on the old EA Concepts page.
Some examples of pages that would get this tag:
https://forum.effectivealtruism.org/posts/FXaCnPMiw3jWrnkho/cost-effectiveness-distributions-power-laws-and-scale
https://forum.effectivealtruism.org/posts/ntLmCbHE2XKhfbzaX/how-much-does-performance-differ-between-people
https://forum.effectivealtruism.org/posts/54QW6uBjWXJzR7c4E/log-normal-lamentations
The content of the old EA Concepts page is now part of the cost-effectiveness entry. However, it may be worth creating a separate entry on the distribution of cost-effectiveness and moving that content there. I’ll do that tomorrow if no one objects by then.
Sorry, I hadn’t seen that. I’ve now added the “cost-effectiveness” tag to the first of these three articles, since it even has “cost-effectiveness” in the title.
The other two articles are actually about differences in performance between people. Potentially that should have its own tag. But it’s also possible that that is too small a topic to warrant that.
I’d also be happy for an article on distribution of cost-effectiveness.
Thanks. I’ll take a look at the articles later today. My sense is that discussion of variation in performance across people is mostly of interest insofar as it bears on the question of distribution of cost-effectiveness, so I’d be tempted to use the distribution of cost-effectiveness tag for those articles, rather than create a dedicated entry.
Biosurveillance
A central pillar for biodefense against GCBRs and an increasingly feasible intervention with several EAs working on it and potentially cool projects emerging in the near future. Possibly too granular as a tag since there’s not a high volume of biosecurity posts which would warrant the granular distinction. But perhaps valuable from a Wiki standpoint with a definition and a few references. I can create an entry, if the mods are okay with it.
Example posts:
https://forum.effectivealtruism.org/posts/NzqaiopAJuJ37tpJz/project-ideas-in-biosecurity-for-eas
Related: GCBR, Biosecurity
Hi Jasper,
I agree that this would be a valuable Wiki article, and if you are willing to write it, that would be fantastic.
Update: I’ve now made this entry.
Surveillance
Some relevant posts:
https://forum.effectivealtruism.org/posts/xoxbDsKGvHpkGfw9R/problem-areas-beyond-80-000-hours-current-priorities#Surveillance
https://forum.effectivealtruism.org/posts/XtNgkrnFnedx92iN3/ben-garfinkel-the-future-of-surveillance
https://forum.effectivealtruism.org/posts/xt7pKYWyvC5Ng4TpS/surveillance-and-free-expression-or-sunyshore
Agree we should have such an entry (I had it in my list of planned articles).
I’m surprised that “cost-effectiveness evaluation” doesn’t exist yet.
Some others whose absence seems surprising: “meta-charities”, “advocacy”, “pandemic preparedness”.
A couple of tags that would apply to all of my posts: “aging research”, “scientific research”.
I’d be in favor of all of those tags, except “pandemic preparedness” which I currently think is too overlapping with “Biosecurity”.
I’d say “scientific research” is probably covered by Scientific Progress, Research Methods, and tags about specific areas scientific research can be done in?
I think I’m in favour of a Cost-Effectiveness Evaluation tag. (Or maybe Cost-Effectiveness Analysis? I think that’s the more common term?)
That seems similar to Impact Assessment (a tag I made last month), so some of my thoughts on that tag might also be relevant. But I think Cost-Effectiveness Analysis is probably different enough from existing tags to be worth having.
(Update: I’ve now made this tag.)
Operations
Arguments against
Maybe overlaps somewhat with Org Strategy, EA Hiring, and Entrepreneurship
Maybe not very many posts on the forum that are especially related to operations in particular
Arguments for:
I’d guess there are at least 5 relevant posts
Some posts with the above-mentioned tags might be relevant
I’d guess there’ll be more relevant posts in future
I’d guess at least a few EA forum users would appreciate seeing a collection of posts on this
I like Lists, so get me a List of Lists for my tag List.
There are a number of good posts that are basically lists of links to different articles (like this one). It would be nice to be able to easily access them.
I very much share this affection for lists.
I think Collection and Resources might cover this? E.g., those reading lists from Richard Ngo have each been given that tag.
Do you think there’s still a gap for a List tag, or a way the description of the Collection and Resources tag should be adjusted?
Ahh yes, that covers it. I looked through the list of tags to check if there was already something on there; I guess I missed that one.
When tags were introduced, the post said to “submit new tag ideas to us using this form.” I made a bunch of suggestions (don’t remember what they were) and probably some other people did too. Could someone who has access to results of that form paste all those suggestions here?
That sounds like a great idea!
I think ideally they’d be pasted as separate comments, so they can each be voted up or down separately. (Not saying you were suggesting otherwise.)
Independent impressions or something like that
We already have Discussion norms and Epistemic deference, so I think there’s probably no real need for this as a tag. But I think a wiki entry outlining the concept could be good. The content could be closely based on my post of the same name and/or the things linked to at the bottom of that post.
I agree that it would be good to describe this distinction in the Wiki. Possibly it could be part of the Epistemic deference entry, though I don’t have a strong view on that.
How about something like beliefs vs. impressions?
Yeah, that title/framing seems fine to me
After reviewing the literature, I came to the view that Independent impressions, which you proposed, is probably a more appropriate name, so that’s what I ended up using.
Retreat or Retreats
I think there are a fair few EA Forum posts about why and how to run retreats (e.g., for community building, for remote orgs, or for increasing coordination among various orgs working in a given area). And I think there are a fair few people who’d find it useful to have these posts collected in one place.
Makes sense; I’ll create it.
By the way, we should probably start a new thread for new Wiki entries. This one has so many comments that it takes a long time to load.
Thanks!
And good idea—done
Red teaming or red teams or red team or something like that
Examples of posts that would get this tag:
https://forum.effectivealtruism.org/posts/obHA95otPtDNSD6MD/idea-red-teaming-fellowships
https://forum.effectivealtruism.org/posts/8RcFQPiza2rvicNqw/minimal-trust-investigations
https://forum.effectivealtruism.org/posts/myp9Y9qJnpEEWhJF9/shortform?commentId=hedmemCCrb4jkviAd if it was a top-level post or if we could tag shortforms
https://forum.effectivealtruism.org/posts/oHcRQsiDM9D76aENu/a-red-team-against-the-impact-of-small-donations
Uncertainties:
Should this just cover posts that explicitly discuss things like red-teaming (whether or not they also actually do red-team something), or also posts that are solely examples of red-teaming (without explicit discussion of it)?
I think probably the former?
That’s mainly because otherwise I think the boundaries might be too fuzzy and/or the list of posts tagged might be too long and unfocused. But I could be wrong about that.
Should this just include posts on things like red-teaming EAs’ ideas, or also posts on red-teaming in other contexts (e.g., in governments or as a way of improving decisions in general)?
I moderately confidently think the latter
Related entries
https://www.lesswrong.com/tag/epistemic-spot-check
https://www.lesswrong.com/tag/conservation-of-expected-evidence
https://forum.effectivealtruism.org/tag/community-epistemic-health
Yes! Will create this later today.
ETA: Now created
Corporate governance
Example of a relevant post: https://forum.effectivealtruism.org/posts/5MZpxbJJ5pkEBpAAR/the-case-for-long-term-corporate-governance-of-ai
I’ve mostly thought about this in relation to AI governance, but I think it’s also important for space governance and presumably various other EA issues.
I haven’t thought hard about whether this really warrants an entry, nor scanned for related entries—just throwing an idea out there.
United Kingdom policy & politics (or something like that)
This would be akin to the entry/tag on United States politics. An example of a post it’d cover is https://forum.effectivealtruism.org/posts/yKoYqxYxo8ZnaFcwh/risks-from-the-uk-s-planned-increase-in-nuclear-warheads
But I wrote on the United States politics entry’s discussion page a few months ago:
And I’d say similar here, hence the proposal to say “policy & politics” rather than just “politics”. (I know some people favour entry names being single categories rather than “and”s, but I think often the most natural cluster for a tag is better pointed at via more than one term.)
Yeah, makes sense. I just created the new article and renamed the existing one. There is no content for now, but I’ll try to add something later.
Career profiles (or maybe something like “job posts”?)
Basically, writeups of specific jobs people have, and how to get those jobs. Seems like a useful subset of the “Career Choice” tag to cover posts like “How I got an entry-level role in Congress”, and all the posts that people will (hopefully) write in response to this.
What about posts that discuss personal career choice processes (like this)?
My personal, quick reaction is that that’s a decently separate thing, that could have a separate tag if we feel that that’s worthwhile. Some posts might get both tags, and some posts might get just one.
But I haven’t thought carefully about this.
I also think I’d lean against having an entry for that purpose. It seems insufficiently distinct from the existing tags for career choice or community experiences, or from the intersection of the two.
Yeah, this seems worth having! And I appreciate you advocating for people to write these and for us to have a way to collect them, for similar reasons to those given in this earlier shortform of mine.
I think career profiles is a better term for this than job posts, partly because:
The latter sounds like it might be job ads or job postings
Some of these posts might not really be on “jobs” but rather things like being a semi-professional blogger, doing volunteering, having some formalised unpaid advisory role to some institution, etc.
OTOH, career profiles also sounds somewhat similar to 80k’s career reviews. This could be good or bad, depending on whether it’s important to distinguish what you have in mind from the career review format. (I don’t have a stance on that, as I haven’t read your post yet.)
Actually, having read your post, I now think it does sound more about jobs (or really “roles”, but that sounds less clear) than about careers. So I now might suggest using the term job profiles.
Thanks, have created this. (The “Donation writeup” tag is singular, so I felt like this one should also be, but LMK if you think it should be plural.)
Either looks good to me. I agree that this is worth having.
Update: I’ve now made this entry.
Consultancy (or maybe Consulting or Consultants or Consultancies)
Things this would cover:
https://forum.effectivealtruism.org/posts/CwFyTacABbWuzdYwB/ea-needs-consultancies
Other posts relevant to the idea of EAs acting as consultants to other EAs
E.g., this shortform of mine and maybe some links provided in it would warrant this tag if they were top-level Forum posts
Posts about pros and cons of EAs doing non-EA consultancy work (e.g. management consultancy), tips for doing that, etc.
https://forum.effectivealtruism.org/posts/RQSZCJCMnop48zwCb/learnings-from-scaling-the-effective-altruism-and-consulting
https://forum.effectivealtruism.org/posts/Kcft6apx5zgNWXFYh/ama-ian-david-moss-strategy-consultant-to-foundations-and
Related entries
career choice | Effective Altruism and Consulting Network | org strategy | working at EA vs. non-EA orgs
(maybe there are also other good choices for related entries)
Yeah, I made a note to create an entry on this topic soon after Luke published his post. Feel free to create it, and I’ll try to expand it next week (I’m a bit busy right now).
Effective Altruism on Facebook and Effective Altruism on Twitter (and more—maybe Goodreads, Instagram, LinkedIn, etc). Alternatively Effective Altruism on Social Media, though I probably prefer tags/entries on particular platforms.
A few relevant articles:
https://forum.effectivealtruism.org/posts/8knJCrJwC7TbhkQbi/ea-twitter-job-bots-and-more
https://forum.effectivealtruism.org/posts/6aQtRkkq5CgYAYrsd/ea-twitterbot
https://forum.effectivealtruism.org/posts/mvLgZiPWo4JJrBAvW/longtermism-twitter
https://forum.effectivealtruism.org/posts/BtptBcXWmjZBfdo9n/ea-facebook-group-greatest-hits-top-50-posts-by-total
Multiple articles about Giving Tuesday.
Also, quite a lot of EA discussion takes place, and has taken place, on Twitter and Facebook; there are many EA Facebook groups, etc. Therefore, it seems natural to have entries on EA Twitter and EA Facebook.
At first glance, I’d prefer to have Effective altruism on social media, or maybe actually just Social media, rather than the more fine-grained ones. (Also, I do think something in this vicinity is indeed worth having.) Reasoning:
I’m not sure if any of the specific platforms warrant an entry
If we have entries for the specific platforms, then what about posts relevant to effective altruism on some other platform?
We shouldn’t just create an entry for every other platform there’s at least one post relevant to, nor should we put them all under one of the other single-platform-focused tags.
But having an entry for Facebook, another for Twitter, and another for social media as a whole seems like too much?
Regarding dropping “Effective altruism on” and just saying “Social media”:
Presumably there are also posts on things like the effects of social media, the future trajectory of it, or ways to use it for good or intervene in it that aren’t just about writing about EA on it?
E.g., https://forum.effectivealtruism.org/posts/842uRXWoS76wxYG9C/incentivizing-forecasting-via-social-media
And it seems like it’d be good to capture those posts under the same entry?
Though maybe an entry for social media and an entry for effective altruism on social media are both warranted?
Though also note that there’s already a tag for effective altruism in the media, which has substantial overlap with this. But I think that’s probably ok—social media seems a sufficiently notable subset of “the media” to warrant its own entry.
(Btw, for the sake of interpreting the upvotes as evidence: I upvoted your comment, though as I noted I disagree a bit on the best name/scope.)
(Just wanted to send someone a link to a tag for Social media or something like that, then realised it doesn’t exist yet, so I guess I’ll bump this thread for a second opinion, and maybe create this in a few days if no one else does)
I don’t have accounts on social media and don’t follow discussions happening there, so I defer to you and others with more familiarity.
Maybe we should have an entry for each discipline/field that’s fairly relevant to EA and fairly well-represented on the Forum? Like how we already have history, economics, law, and psychology research. Some other disciplines/fields (or clusters of disciplines/fields) that could be added:
political science
humanities
I think humanities disciplines/fields tend to be somewhat less EA-relevant than e.g. economics, but it could be worth having one entry for this whole cluster of disciplines/fields
social science
But (unlike with humanities) it’s probably better to have entries for each particularly relevant field within this broad cluster
I’m overall in favor.
I wonder if we should take a more systematic approach to entries about individual disciplines. It seems that, from an EA perspective, a discipline may be relevant in a number of distinct ways, e.g. because it is a discipline in which young EAs may want to pursue a career, because conducting research in that discipline is of high value, because that discipline poses serious risks, or because findings in that discipline should inform EA thinking. I’m not sure how to translate this observation into something actionable for the Wiki, though, so I’m just registering it here in case others have thoughts along these lines.
Yeah, I do think it seems worth thinking a bit more about what the “inclusion criteria” for a discipline should be (from the perspective of making an EA Wiki entry about it), and that the different things you mention seem like starting points for that. Without clearer inclusion criteria, we could end up with a ridiculously large number of entries, or with entries that are unwarranted or too fine-grained, or with entries that are too coarse-grained, or with hesitation and failing to create worthwhile entries.
I don’t immediately have thoughts, but endorse the idea of someone generating thoughts :D
I agree that humanities disciplines tend to be less EA-relevant than the social sciences. But I think that the humanities are quite heterogeneous, so it feels more natural to me to have entries for particular humanities disciplines, than humanities as a whole.
But I’m not sure any such entries are warranted; it depends on how much has been written.
Maybe we should have a tag for each individual EA Fund, in addition to the existing Effective Altruism Funds tag? The latter could then be for posts relevant to EA Funds as a whole.
There are now 60 posts with the Effective Altruism Funds tag, and many readers may only be interested in posts relevant to one or two of the funds.
Yes, good idea. Feel free to create them, otherwise I’ll do it myself later today or tomorrow.
Markets for Altruism or Market Mechanisms for Altruism or Impact Certificates or Impact Purchases (or some other name)
Tentatively proposed description:
The posts listed here would fit this tag. Some other posts tagged EA Funding might fit as well.
I’m unsure precisely what the ideal scope and name of this tag would be.
I like it. Impact Certificates is more recognizable, but Markets for Altruism is more general. I think I agree with your favoring it.
Cool, thanks for the input—given that, I’ve now made the tag, with the name Markets for Altruism :)
I think it would be useful to be able to see all the posts from a particular organisation at once on the forum. For the most part, individuals from those organisations post rather than a single organisation account, so it can be difficult to see e.g. all of Rethink Priorities’ research on a given topic.
Curious to hear whether people think it’s better to have tags or sequences for grouping these posts?
New issue: How do we deal with name changes? (E.g. EAF became CLR, .impact became Rethink Charity.)
I think it’s nice to have a single tag (the new name) for continuity but sometimes an org had a different focus or projects associated with the old name.
Maybe it’s enough to mention in the tag description “previously called X”?
Update: I’ve now made tags for Rethink Priorities, Future of Humanity Institute, and Global Priorities Institute. I believe I’ve tagged all RP posts. I wasn’t very thorough in tagging FHI or GPI posts. Other people can tag additional FHI and GPI posts, and/or add tags for other orgs.
I think something like this would be a good idea :)
Some thoughts:
One downside could be that we might end up with quite a few of these tags, which then clutter up the tags page.
Maybe it’d be best if the Forum team can set it up so there’s a separate, collapsible part of the tags page just for all the organisation tags?
That might also make it easier for someone who’s looking for org tags in general (without knowing what specific orgs might have tags) to find them.
Most EA organisations probably already have pages on their site where you can find all/most their research outputs. E.g., Rethink Priorities’ publications page.
But one thing tags allow you to do is (from the home page of the forum) filter by multiple tags at once. So you could e.g. filter by both the Rethink Priorities tag and the Wild Animal Welfare tag, to find all of Rethink’s posts related to that topic.
That said, I’ve never actually used the approach of filtering by multiple tags myself.
And the lists of publications on an org’s site may often be organised by broad topic area anyway. Though this could still be useful if you want to see if an org wrote something related to a concept/topic they probably wouldn’t organise their pages by (perhaps because it’s cross-cutting, slightly obscure, or wasn’t the main focus of the post) - e.g., if you want to see whether Rethink has written anything related to patient altruism.
I think tags might be better than sequences for this purpose. One reason is the above-mentioned benefit of allowing for filtering by both org and some tag. Another reason is that these posts usually won’t really be sequences in the usual sense—it won’t be the case that the order of publication is the most natural order of reading, and that one gains a lot from reading them all together. (Though some subset of each org’s posts may be a sequence, e.g. Rethink’s nuclear risk stuff.)
A complexity might be deciding which orgs should have tags—in particular, should orgs which aren’t especially prominent or don’t post often have tags?
Maybe some forum users can just make tags for orgs they want there to be tags for, and then orgs can make tags for themselves if they want, and we can see what results.
(It happens to be that I’ll be working at Rethink soon, but this comment was just my own opinion, and I only used them as an example because Vaidehi did.)
I agree that tags seem better than sequences.
I think rather than a special class of org tags, it may be better to just have them be regular tags. This would solve the issue of which organisations get org tags. I think it’s okay for people to tag their own early-stage projects or orgs even if they aren’t very big (I’m biased here, as I have some projects which I would like to be able to link people to).
I don’t think there’s a lot of risk—having a tag doesn’t mean your project is endorsed by EA or anything; it’s just an organisational tool.
I think this is probably the best strategy!
Also congrats on starting at Rethink :)
A possibility would be to add the organization as a coauthor for all official posts.
I’ve added a Meta-Science tag. I’d love for some help with clarifying the distinction between it and Scientific Progress.
Generally, I imagine meta-science as being more focused on specific aspects of the academic ecosystem and scientific progress to be related more to the general properties of scientific advances. There is clearly an overlap there, but I’m not sure where exactly to set the boundaries.
I think the overlap would arise if, say, in the field of survey methodology, someone discovers a new way to measure bias in surveys—this would be a meta-science improvement but also scientific progress in the field of survey methodology.
Would be good if tags always had descriptions/definitions of the things they’re for.
Agreed. I think people creating tags should probably always add those descriptions/definitions.
One thing I’d note is that anyone can add descriptions/definitions for tags, even if they didn’t create them. This could be hard if you’re not sure what the scope was meant to be, but if you think you know what the scope was meant to be, you could consider adding a description/definition yourself.
Update: I’ve now made this tag.
[Something about war, armed conflict, or great power conflict]
Arguments against:
Arguably a subset of International Relations.
Also overlaps with some tags like Nuclear Weapons and Existential Risk.
Arguments for:
Arguably a very important subset of International Relations, which might warrant a tag of its own.
Arguably not entirely a subset of International Relations, as things like civil/intrastate armed conflicts could also be important. (But maybe any EA Forum post that covers that would in practice also cover other International Relations things.)
Megaprojects
Would want to have a decent definition. I feel like the term is currently being used in a slippery / under-defined / unnecessary-jargon way, but also that there’s some value in it.
Example posts:
https://forum.effectivealtruism.org/posts/faezoENQwSTyw9iop/ea-megaprojects-continued
things linked to from there
Related entries:
Constraints on effective altruism
Scalably using labour
David Pearce (the tag will be removed if others think it’s not warranted)
Arguments against:
One may see David Pearce as much more related to transhumanism (even if to the most altruistic “school” of transhumanism) than to EA (see e.g. Pablo’s comment).
Some of Pearce’s ideas go against certain established notions in EA: e.g. he thinks sentience of classical digital computers is impossible under the known laws of physics, that minimising suffering should take priority over increasing the happiness of the already well-off, and that environmental interventions alone, w/o raising individuals’ hedonic setpoints and making them invincible to severe suffering, cannot solve the problem of suffering and achieve sustainable high wellbeing for all.
I also should mention that I’m biased in proposing this tag, as Pearce’s work played a major role in my becoming an EA.
Arguments for:
For over 25 years David Pearce has been researching and writing about addressing the root cause of suffering on the planet using bio/nano/info/robo technology.
Pearce has been raising awareness about, and proposing solutions for, wild-animal suffering at least since 1995.
Several relatively prominent EAs cite Pearce’s work as having had a major influence on their values, including Brian Tomasik, the Qualia Research Institute’s Andrés Gómez Emilsson, and the Center for Reducing Suffering’s Magnus Vinding. Another recognition of Pearce’s work is his having been invited as a speaker for EA Global: Melbourne 2015.
Unlike most other transhumanists, Pearce is antispeciesist and advocates using technology to benefit all sentient life.
Michael is correct that the inclusion criteria for entries on individual people haven’t been made explicit. In deciding whether a person was a fit subject for an article, I haven’t followed any conscious procedure, but merely relied on my subjective sense of whether the person deserved a dedicated article. Looking at the list of people I ended up including, a few clusters emerge:
people who have had an extraordinary positive impact, and that are often discussed in EA circles (Arkhipov, Zhdanov, etc.)
people who have attained eminence in their fields and who are connected to EA to a significant degree (Pinker, Hassabis, Boeree, etc.)
academics who have conducted research of clear EA relevance (Ng, Duflo, Parfit, Tetlock, etc.)
historical figures that may be regarded as proto-EAs or that are seen as having inspired the EA movement (Bentham, Mill, Russell, etc.)
“core figures” in the EA community (Shulman, Christiano, Tomasik, etc.)
Some people, such as Bostrom, MacAskill, and Ord, fit into more than one of these clusters. My sense is that David Pearce doesn’t fit into any of the clusters. It seems relatively uncontroversial that he doesn’t fit into clusters 1-4, so the relevant question—at least if one broadly agrees with the approach I’ve taken—is whether he is sufficiently close to the “core” to merit inclusion as part of cluster 5.
As someone who has been involved with EA since its inception and who has (I believe) a reasonably good sense of how central to the movement different people have been, my impression is that Pearce isn’t central enough. If others have different (or similar) impressions, I would encourage them to post them here. We could, alternatively, try to go beyond impressionistic evidence and look at more objective measures, such as citation counts (broadly construed to include not just academic citations but links from the EA Forum and EA Blogs), though conducting that kind of analysis might be time consuming and may not be fully conclusive. Do others have thoughts on how to operationalize the relevant criteria?
FWIW, I think your comment is already a good step! I think I broadly agree that those people who fit into at least one of those clusters should typically have entries, and those who don’t shouldn’t. And this already makes me feel more of a sense of clarity about this.
I still think substantial fuzziness remains. This is mostly just because words like “eminence” could be applied more or less strictly. I think that that’s hard to avoid and maybe not necessary to avoid—people will probably generally agree, and then we can politely squabble about the borderline cases and thereby get a clearer sense of what we collectively think the “line” is.
But I think “people who have had an extraordinary positive impact, and that are often discussed in EA circles (Arkhipov, Zhdanov, etc.)” may require further operationalisation, since what counts as extraordinary positive impact can differ a lot based on one’s empirical, moral, epistemological, etc. views. E.g., I suspect that nil might think Pearce has been more impactful than most people who do have an entry, since Pearce’s impacts are more targeted at suffering reduction. (nil can of course correct me if I’m wrong about their views.)
So maybe we should say something like “people who are widely discussed in EA and who a significant fraction of EAs see as having had an extraordinary positive impact (Arkhipov, Zhdanov, etc.)”? (That leaves the fuzziness of “significant fraction”, but it seems a step in the right direction by not just relying on a given individual’s view of who has been extraordinarily impactful.)
Then, turning back to the original example, there’s the question: Would a significant fraction of EAs see Pearce as having had an extraordinary positive impact? I think I’d lean towards “no”, though I’m unsure, both because I don’t have a survey and because of the vagueness of the term “significant fraction”.
I think there’s a relatively clear sense in which Arkhipov, Borlaug, and similar figures (e.g. winners of the Future of Life Award, names included in Scientists Greater than Einstein, and related characters profiled in Doing Good Better or the 80,000 Hours blog) count as having had an extraordinary positive impact and Pearce does not, namely, the sense in which also Ord, MacAskill, Tomasik, etc. don’t count. I think it’s probably unnecessary to try to specify in great detail what the criterion is, but the core element seems to be that the former are all examples of do-gooding that is extraordinary from both an EA and a common-sense perspective, whereas if you wanted to claim that e.g. Shulman or Christiano are among humanity’s greatest benefactors, you’d probably need to make some arguments that a typical person would not find very persuasive. (The arguments for that conclusion would also likely be very brittle and fail to persuade most EAs, but that doesn’t seem to be so central.)
So I think it really boils down to the question of how core a figure Pearce is in the EA movement, and as noted, my impression is that he just isn’t a core enough figure. I say this, incidentally, as someone who admires him greatly and who has been profoundly influenced by his writings (some of which I translated into Spanish a long time ago), although I have also developed serious reservations about various aspects of his work over the years.
If you mean that the vast majority of EAs would agree that Arkhipov, Borlaug, Zhdanov, and similar figures count as having had an extraordinary positive impact, or that that’s the only reasonable position one could hold, I disagree, for reasons I’ll discuss below.
But if you just mean that a significant fraction of EAs would agree that those figures count as having had an extraordinary impact, I agree. And, as noted in my previous comment, I think that using a phrasing like “people who are widely discussed in EA and who a significant fraction of EAs see as having had an extraordinary positive impact (Arkhipov, Zhdanov, etc.)” would probably work.
And that phrasing also seems fine if I’m wrong about (1), so maybe there’s no real need to debate (1)?
(Relatedly, I also do ultimately agree that Arkhipov etc. should have entries.)
Expanding on (1):
This is mostly due to crucial considerations that could change the sign or (relative) magnitude of the moral value of the near-term effects that these people are often seen as having had. For example:
It’s not obvious that a US-Russia nuclear war during the Cold War would’ve caused a negative long-term future trajectory change.
I expect it would, and, for related reasons, am currently focused on nuclear risk research myself.
But I think one could reasonably argue that the case for this view is brittle and the case for e.g. the extraordinary positive impact of some people focused on AI is stronger (conditioning on strong longtermism).
Some EAs think extinction risk reduction is or plausibly is net negative.
Some EAs think population growth is or plausibly is net negative, e.g. for reasons related to the meat-eater problem or to differential progress.
It’s plausible that expected moral impact is dominated by effects on the long-term future, farm animals, wild animals, invertebrates, or similar, in which case it may be both less clear that e.g. Borlaug and Zhdanov had a net positive impact and less clear that it is “extraordinary” relative to the impact of people whose actions were more targeted to helping those populations.
But it’s also because of uncertainties about whether they really had those near-term effects, whether similar things would’ve happened without them, and—at least in Zhdanov’s case—whether they had other near-term effects that may have been very negative. For example:
My understanding is that it’s not actually very clear whether Arkhipov played a crucial role in preventing a launch.
E.g., Baum, de Neufville, and Barrett write “The second captain, Vassily Arkhipov, has been credited with having vetoed the decision to launch the torpedo over the objections of the two other officers (Lloyd 2002). Sources conflict on whether the submarine crew had the authority to launch the torpedo without direct orders from Moscow. The submarine’s communications officer later said in an interview that Arkhipov did play an important role in calming the captain down, but that while there was a danger of an accident or equipment malfunction, they were never close to intentionally launching the nuclear torpedo (Savranskaya 2007).”
Zhdanov also “chaired the Soviet Union’s Interagency Science and Technology Council on Molecular Biology and Genetics, which among its many functions directed the Soviet biological weapons program” (Wikipedia), which I think makes it plausible that his expected impact (evaluated during the Cold War) on the long-term future was very negative.
My more basic point is just that it seems very hard to say with high confidence what actions had net positive vs net negative impacts and how to rank them, and there’s room for reasonable disagreement.
Again, though, I think we can probably sidestep all of this by just saying “people who are widely discussed in EA and who a significant fraction of EAs see as having had an extraordinary positive impact (Arkhipov, Zhdanov, etc.)”.
For those who may want to see the deleted entry, I’m posting it below:
Thanks again, nil, for taking the time to create this entry and outline your reasoning. After reviewing the discussion, and seeing that no new comments have been posted in the past five days, I’ve decided to delete the article, for the reasons I outlined previously.
Please do not let this dissuade you from posting further content to the Wiki, and if you have any feedback, feel free to leave it below or to message me privately.
I’m sorry to hear this, Pablo, as I haven’t been convinced that Pearce isn’t relevant enough for effective altruism.
Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce (that is not necessarily to say that their entries aren’t warranted!). Would it be correct to infer that at least some of these entries received less scrutiny than Pearce’s nomination?
Dylan Matthews
David Chalmers
And perhaps:
Demis Hassabis
K. Eric Drexler
May I ask why five days since the last comment were deemed enough to proceed with the deletion? Is this part of the wiki’s rules? (If so, it must be my fault that I didn’t manage to reply in time.)
I also wanted to say that despite the disagreement, I appreciate that the wiki has a team committed to it.
>Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce
I tried to outline some criteria in an earlier comment. Chalmers and Hassabis fall under the category of “people who have attained eminence in their fields and who are connected to EA to a significant degree”. Drexler, and perhaps also Chalmers, fall under the category of “academics who have conducted research of clear EA relevance”. Matthews doesn’t fall under any of the categories listed, though he strikes me as someone worth including given his leading role at Future Perfect—the only explicitly EA project in mainstream journalism—and his long-standing involvement with the EA movement.
As the example of Matthews shows, the categories I identified aren’t exhaustive. That was just my attempt to retroactively make sense of the tacit criterion I had followed in selecting these particular people. Despite still not having a super clear sense of the underlying categories, I felt reasonably confident that Pearce didn’t qualify because (1) it seemed that there was no other potential category he could fall under besides that of “EA core figure” and (2) he is not, in my opinion, a core figure in EA. Perhaps the closest situation is that of Aubrey de Grey, who is also a leading figure in an adjacent movement, has had some involvement with the EA movement, but isn’t really central to EA (and is, for that reason, also excluded).
With that said, I’m open to criticism of my selection, and in retrospect I am not confident in all the choices I made. For instance, it now seems to me debatable whether we should have an article on Bryan Caplan. We also don’t have articles on some individuals who, some may argue (probably with justification), deserve one, such as Robin Hanson or Nick Beckstead. I’d be happy to reconsider my decisions in these and other cases. In the case of Pearce (or de Grey), I trust my judgment more, though of course there’s still room for reasonable disagreement.
>May I ask why five days since the last comment were deemed enough for proceeding to the deletion? Is this part of the wiki’s rules?
I wasn’t following an explicit rule: I just got the impression that the discussion had come to an end. Pearce’s entry was online for almost two weeks, so after a five-day period of inactivity I thought it was appropriate to make a decision. I am considering ways to make the process of resolving disputes more structured and less opaque (feedback welcome). If there are any further comments that you would like to make, I am happy to reopen the discussion and consider your arguments.
FWIW, I agree that Hassabis and Drexler meet your proposed criteria and warrant entries, and that Chalmers and Caplan probably do (along with Hanson and Beckstead). But Matthews does seem roughly on par with Pearce to me. (Though I don’t know that much about either of their work.)
I also agree that Pearce seems to be a similar case to de Grey, so we might apply a similar principle to both.
Maybe it’d be useful to try switching briefly from the discussion of specific entries and criteria to instead consider: What are the pros and cons of having more or many more entries (and especially entries on people)? And roughly how many entries on people do we ultimately want? This would be similar to the inclusionism debate on Wikipedia, I believe. If we have reason to want to avoid going beyond like 50 or 100 or 200 or whatever entries on people, or reason to be quite careful about adding less prominent or central people to the wiki, or if we don’t, then that could inform how high a “bar” we set.
First, I want to make it clear that I’m not suggesting that any of the persons I listed in my previous comment should be removed from the wiki. I just disagree that not including Pearce is justified.
Again, I honestly don’t think it is true that Chalmers and Drexler are “connected to EA to a significant degree” while Pearce isn’t. Especially Chalmers: from what I know, he isn’t engaged w/ effective altruism, besides once agreeing to be interviewed on the 80,000 Hours podcast.
As for the “attained eminence in their fields” condition, I do see that it may be harder to resolve in Pearce’s case, since he isn’t an academic but rather an independent philosopher, writer, and advocate. But if one takes Pearce’s field to be suffering abolitionism, then the “attained eminence in their fields” condition does hold, in my view: he is both the founder of the “abolitionist project” and has written extensively on the whys and hows of the project.
Also, as I mentioned in the original comment proposing the entry, Pearce’s work has inspired many EAs, including Brian Tomasik, the Qualia Research Institute’s Andrés Gómez Emilsson, and the Center for Reducing Suffering’s Magnus Vinding, as well as the nascent field of welfare/compassionate biology. The Invincible Wellbeing research group has been inspired by Pearce’s work as well.
I don’t have any new arguments to make, and I don’t expect anyone involved to change their minds anyway. I only hope it may be worth time of others to contribute their perspectives on the dispute.
And as Michael suggests above, it may be more productive at this point to consider how many entries on EA-relevant persons are desirable in the first place.
Best regards,
nil
Hey nil,
Chalmers was involved with EA in various ways over the years, e.g. by publishing a paper on the intelligence explosion and then discussing it at one of the Singularity Summits, briefly participating in LessWrong discussions, writing about mind uploading, interacting (I believe) with Luke Muehlhauser and Buck Shlegeris about their illusionist account of consciousness, etc.
In any case, I agree with you (and Michael) that it may be more productive to consider the underlying reasons for restricting the number of entries on individual people. I generally favor an inclusionist stance, and the main reason for taking an exclusionist line with entries for individuals is that I fear things will get out of control if we adopt a more relaxed approach. I’m happy, for instance, with having entries for basically any proposed organization, as long as there is some reasonable link to EA, but it would look kind of weird if we allowed any EA to have their own entry.
An alternative is to take an intermediate position where we require a certain degree of notability, but the bar is set lower, so as to include people like Pearce, de Grey, and others. We could, for instance, automatically accept anyone who already has their own Wikipedia entry, as long as they have a meaningful connection to EA (of roughly the same strength as we currently demand for EA orgs). Pearce would definitely meet this bar.
How do others feel about this proposal?
I personally feel that the proposal would allow for the inclusion of a number of people (not Pearce) who intuitively should not have their own Wiki entry, so I’m somewhat reluctant to adopt it. More generally, an advantage of having a more exclusionist approach for individuals is that the class of borderline cases is narrower, and so is therefore the expected number of discussions concerning whether a particular person should or should not be included. Other things equal, I would prefer to have few of these discussions, since it can be tricky to explicitly address whether someone deserves an entry (and the unpleasantness associated with having to justify an exclusionist position specifically—which may be perceived as expressing a negative opinion of the person whose entry is being considered—may unduly bias the discussion in an inclusionist direction).
Perhaps voting on cases where there is a disagreement could achieve a wider inclusiveness or at least less controversy? Voters would be e.g. the moderators (w/ an option to abstain) and several persons who are familiar w/ the work of a proposed person.
It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.
I think discussion will probably usually be sufficient. Using upvotes and downvotes as info seems useful, but probably not letting them be decisive.
This might just be a case where written communication on the internet makes the tone seem off, but “hidden” sounds to me unfair and harsh. That seems to imply Pablo already knew what the inclusion criteria should be, and was set on them, but deliberately withheld them. This seems extremely unlikely.
I think it’s more like the wiki is only a few months old, and there’s (I think) only one person paid to put substantial time into it, so we’re still figuring out a lot of policies as we go—I think Pablo just had fuzzier ideas, and then was prompted by this conversation to make them more explicit, and then was still clearly open to feedback on those criteria themselves (rather than them already being set).
I do agree that it will help now that we have possible inclusion criteria written up, and it would be even better to have them shown more prominently somewhere (though with it still being clear that they’re tentative and open to revision). Maybe this is all you meant?
I didn’t intend to sound harsh. Thanks for pointing this out: it now seems obvious to me that that part sounds uncharitable. I do apologise, belatedly :(
What I meant is that currently these new, evolving inclusion criteria are difficult to find. And if they are used in dispute resolutions (from this case onwards), perhaps they should be referenced for contributors as part of the introduction text, for example.
Thanks for the feedback. I have made a note to update the Wiki FAQ, or if necessary create a new document. Feel free to ping me if you don’t see any updates within the next week or so.
Hi nil,
I’ve edited the FAQ to make our inclusion criteria more explicit.
Thanks, Pablo. The criteria will help to avoid some future long disputes (and thus save time for more important things), although it wouldn’t have prevented my creating the entry for David Pearce, for he does fit the second condition, I think. (We disagree, I know.)
[Just responding to one specific thing, which isn’t central to what you’re saying anyway. No need to respond to this.]
For what it’s worth, I think I agree with you re Chalmers (I think Pearce may be more connected to EA than Chalmers is), but not Drexler. E.g., Drexler has worked at FHI for a while, and the FHI office is also shared by GovAI (part of FHI, but worth listing separately), GPI, CEA, and I think Forethought. So that’s pretty EA-y.
Plus he originated some ideas that are quite important for a lot of EAs, e.g. related to nanotech, CAIS, and Paretotopia.
(I’m writing quickly and thus leaning on acronyms and jargon, sorry.)
I should have been more clear about Drexler: I don’t dispute that he is “connected to EA to a significant degree”. But so is Pearce, in my view, for the reasons outlined in this thread.
(I think it’s weird and probably bad that this comment of nil’s has negative karma. nil is just clarifying what they were saying, and what they’re saying is within the realm of reason, and this was said politely.)
+1
To add to arguments for inclusion, here’s an excerpt from an EA Forum post about key figures in the animal suffering focus area.
David Pearce’s work on suffering and biotechnology would be more relevant now than in 2013 due to developments in genome editing and gene drives.
I’m roughly neutral on this, since I don’t have a very clear sense of what the criteria and “bars” are for deciding whether to make an entry about a given person. I think it would be good to have a discussion/policy regarding that.
I think some people like Nick Bostrom and Will MacAskill clearly warrant an entry, and some people like me clearly don’t, and there’s a big space in between—with Pearce included in it—where I could be convinced either way. (This has to do with relevance and notability in the context of the EA Forum Wiki, not an overall judgement of these people or a popularity contest.)
Some other people who are perhaps in that ambiguous space:
Nick Beckstead (no entry atm)
Elie Hassenfeld (no entry atm, but an entry for GiveWell)
Max Tegmark (no entry atm, but an entry for FLI)
Brian Tomasik (has an entry)
Stuart Russell (has an entry)
Hilary Greaves (has an entry)
(I think I’d lean towards each of them having an entry except Hassenfeld and maybe Tegmark. I think the reason for The Hassenfeld Exception is that, as far as I’m aware, the vast majority of his work has been very connected with GiveWell. So it’s very important and notable, but doesn’t need a distinct entry. Somewhat similar with Tegmark inasmuch as he relates to EA, though he’s of course notable in the physics community for non-FLI-related reasons. But I’m very tentative with all those views.)
This makes sense to me, although one who is more familiar w/ their work may find their exclusion unwarranted. Thanks for clarifying!
In this light I still think an entry for Pearce is justified, to the degree that scientifically grounded proposals for abolishing suffering are an EA topic (and this is the main theme of Pearce’s work). But I’m just one input, of course.
Regarding Tomasik, we have different intuitions here: if an entry for Tomasik may not be justified, then I would say this sets a high bar which only the original EA founders could reach. (Tomasik himself is a founder of an EA charity—the Foundational Research Institute / Center on Long-Term Risk—has written extensively on many topics highly relevant to EA, and is an advisor at the Center for Reducing Suffering, another EA org.) Anyway, this difference probably doesn’t matter in practice, since you added that you lean towards Tomasik having an entry.
I agree with you that a Tomasik entry is clearly warranted. I would say that his entry is as justified as one on Ord or MacAskill; he is one of half a dozen or so people who have made the most important contributions to EA, in my opinion.
I will respond to your main comment later, or tomorrow.
As noted, I do lean towards Tomasik having an entry, but “co-founder of an EA org” + “written extensively on many topics highly relevant to EA” + “is an advisor for another EA org”, or 1 or 2 of those things plus 1 or 2 similar things, includes a fair few people, including probably like 5 people I know personally and who probably shouldn’t have their own entries.
I do think Tomasik has been especially prolific and his writings especially well-regarded and influential, which is a big part of why I lean towards an entry for him, but the criteria and cut-offs do seem fuzzy at this stage.
As the head of the Forum, I’ll second Pablo in thanking you for creating the entry. While I defer to Pablo on deciding what articles belong in the wiki, I thought Pearce was a reasonable candidate. I appreciate the time you took to write out your reasoning (and to acknowledge arguments against including him).
Thank you for appreciating the contribution.
Since Pablo is trusted w/ deciding on the issue, I will address my questions about the decision directly to him in this thread.
Academia or something like that
This could cover things like how (in)efficient academia is, what influences it has had and could have, the best ways to leverage or direct academia, whether people should go into academic or academia-related careers, etc.
E.g., Open Phil’s post(s) on field-building and this post on How to PhD.
Related entries
field-building | meta-science | research methods | research training programs | scientific progress
---
It’s possible that this is made redundant by other tags we already have?
And my current suggested name and scope are vague and just spitballing.
I think this would be a valuable article. Perhaps the title could be refined, but at the moment I can’t think of any alternatives I like. So feel free to create it, and we can consider possible name variants in the future.
Ok, done!
Terrorism
Makes sense. I created it (no content yet).
Update: I’ve now made this tag.
Charitable pledges or Altruistic pledges or Giving pledges (but that could be confused with the Giving Pledge specifically) or Donation pledges or similar
Maybe the first two names are good in that they could capture pledges about resources other than money (e.g., time)? But I can’t off the top of my head think of any non-monetary altruistic pledges.
This could serve as an entry on this important-seeming topic in general, and as a directory to a bunch of other entries or orgs on specific pledges (e.g., Giving Pledge, GWWC Pledge, Generation Pledge, Founders Pledge).
See also this post: https://forum.effectivealtruism.org/posts/W2f7AZEe2kCoZhwrf/a-list-of-ea-donation-pledges-gwwc-etc
What do you think about a tag for posts that include Elicit predictions? I’d like to see all posts that include them and it might be a tiny further reminder to use them more.
This seems plausibly useful to me.
Obviously it’d overlap a lot with the Forecasting tag. But if it’s the case that several posts include Elicit forecasts but most posts tagged Forecasting don’t include Elicit forecasts, then I imagine a separate tag for Elicit forecasts could be useful. (Basically, what I’m thinking about is whether there would be cases in which it’d be useful for someone to find / be sent a collection of links to just posts with Elicit forecasts, with the Forecasting tag not covering their needs well.)
But maybe a better option would be to mirror LessWrong in having a tag for posts about forecasting and another tag for posts that include actual forecasts (see here)? (Or maybe the latter tag should only include posts that quite prominently include forecasts, rather than just including them in passing here and there.) Because maybe people would also want to see posts with Metaculus forecasts in them, or forecasts from Good Judgement Inc, or just forecasts from individual EAs but not using those platforms. And I’d guess it’d make more sense to have one tag where all of these things can be found than to try to have a separate tag for each.
(That’s just my quick thoughts in a tired state, though.)
It could also be handy to have a tag for posts relevant to “Ought / Elicit”—I think it’d probably be good to bundle them together but note Elicit explicitly—similarly to how there are now tags for posts relevant to each of a few other orgs (e.g. Rethink Priorities, FHI, GPI, QURI). So maybe the combination of a tag for posts that contain actual forecasts and a tag for Ought / Elicit would serve the role that a tag for posts containing Elicit forecasts would?
Quadratic voting or Uncommon voting methods or Approval voting or something like that or multiple of these
E.g., this post could get the first and/or second tag, and posts about CES could get the second and/or third tag
Created.
I may try to expand the description to also cover quadratic funding. (Both quadratic voting and quadratic funding are instances of quadratic payments, at least in Buterin’s framing, so we could use the latter for the name of the entry. I used ‘quadratic voting’ because this is the name that people usually associate with the general idea.)
Alignment tax
Here I’m more interested in the Wiki entry than the tag, though the tag is probably also useful. Basically I primarily want a good go-to link that is solely focused on this and gives a clear definition and maybe some discussion.
This is probably an even better fit for LW or the Alignment Forum, but they don’t seem to have it. We could make a version here anyway, and then we could copy it there or someone from those sites could.
Here are some posts that have relevant content, from a very quick search:
https://www.effectivealtruism.org/articles/paul-christiano-current-work-in-ai-alignment/#:~:text=I%20like%20this%20notion%20of,%5Bthe%20systems%5D%20to%20do. (this seems to be the standard go-to at the moment, but it’s a long post where this is only one part, and having an encyclopedic rather than conversational style of writing could be helpful)
Also on the Forum: https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment
https://www.lesswrong.com/posts/9et86yPRk6RinJNt3/an-95-a-framework-for-thinking-about-how-to-make-ai-go-well
https://www.lesswrong.com/posts/HEZgGBZTpT4Bov7nH/mapping-the-conceptual-territory-in-ai-existential-safety#Alignment_tax_and_alignable_algorithms
https://www.lesswrong.com/posts/oBpebs5j5ngs3EXr5/a-summary-of-anthropic-s-first-paper-3#Alignment_Tax
https://www.lesswrong.com/posts/yhb5BNksWcESezp7p/poll-which-variables-are-most-strategically-relevant
https://www.lesswrong.com/posts/dktT3BiinsBZLw96h/linkpost-a-general-language-assistant-as-a-laboratory-for
https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety
Related entries:
differential progress
AI alignment
AI forecasting
AI governance
Maybe Corporate governance, if that entry is made
The term “safety tax” should probably also be mentioned
Here’s the entry. I was only able to read the transcript of Paul’s talk and Rohin’s summary of it, so feel free to add anything you think is missing.
Thanks, Michael. This is a good idea; I will create the entry.
(I just noticed you left other comments to which I didn’t respond; I’ll do so shortly.)
Brain-computer interfaces
See also the LW wiki entry / tag, which should be linked to from the Forum entry if we make one: https://www.lesswrong.com/tag/brain-computer-interfaces
Relevant posts:
https://forum.effectivealtruism.org/posts/qfDeCGxBTFhJANAWm/a-new-x-risk-factor-brain-computer-interfaces-1
(Maybe there are more—didn’t look hard.)
Looks good. I’ve now created the entry and will add content/links later.
Time-money tradeoffs or Buying time or something like that
For posts like https://forum.effectivealtruism.org/posts/g86DhzTNQmzo3nhLE/what-are-your-favourite-ways-to-buy-time and maybe a bunch of other posts tagged Personal development
Cool, I created the entry here. I may add some text soon.
Criticism of the EA community
For posts about what the EA community is like, as opposed to the core ideas of EA themselves. Currently, these posts get filed under Criticism of effective altruism even though it doesn’t quite fit.
Seems like a good idea!
If we have three criticism tags covering “causes”, “organizations”, and “community”, then having a general “criticism of EA” tag doesn’t seem to make sense. The best alternative seems like “criticism of EA philosophy”.
If I don’t hear objections from Pablo/Michael, I’ll make that change in a week or so and re-tag relevant posts.
So the plan is to have 4 tags, covering community, causes, organizations, and philosophy? If so, that sounds good to me, I think.
If the idea was to have just three (without philosophy), I’d have said it feels like there’s something missing, e.g. for criticism of the ITN framework or ~impartial welfarism or the way EA uses expected value reasoning or whatever.
Update: I have created Criticism of the effective altruism community.
Arms race or Technology race or Arms/technology race or something like that
Related entries
AI governance | AI forecasting | armed conflict | existential risk | nuclear warfare | Russell-Einstein Manifesto
--
I think such an entry/tag would be at least somewhat attention hazardous, so I’m genuinely unsure whether it’s worth creating it. Though I think it’d also have some benefits, the cat is somewhat out of the bag attention-hazard-wise (at least among EAs, who are presumably the main readers of this site), and LessWrong have apparently opted for such a tag (focused solely and explicitly on AI, so a bit more attention hazardous in my view).
Yes, I actually have a draft prepared, though it’s focused on AI, just like the LW article. I’ll try to finish it within the next couple of days and you can let me know when I publish it if you think we should expand it to cover other technological races (or have another article on that broader topic).
Survey or Surveys
For posts that:
discuss results from surveys,
promote surveys, and/or
discuss pros and cons and best practices for using surveys in general, and maybe for specific EA-relevant areas (e.g., how much can we learn about technology timelines from surveys on that topic? how best can we collect and interpret that info?).
I care more about the first and third of those things, but it seems like in practice the tag would be used for the second. I guess we could discourage that, but it doesn’t seem important.
“Survey” seems more appropriate for the first and second of those things, while “Surveys” seems more appropriate for the third.
Yeah, makes sense. There’s some overlap with Data, but my sense is that having this other entry is still justified. I don’t have a preference for plural vs. singular.
Ok, now created.
Coaching or Coaching & therapy or something like that
Basically I think it’d be useful to have a way to collect all posts relevant to coaching and/or therapy as ways to increase people’s lifetime impact—so as meta interventions/cause areas, rather than as candidates for the best way to directly improve global wellbeing (or whatever). So this would include things like Lynette Bye’s work but exclude things like Canopie.
In my experience, it tends to make sense to think of coaching and therapy together in this context, as many people offer both services, the boundaries between these concepts/services seem fuzzy, and many of the relevant considerations seem similar. But it could make sense to have two tags.
Examples of posts that would be covered by a tag covering coaching:
https://forum.effectivealtruism.org/posts/DwiJBvnjxjptQePSs/coaching-reduce-struggle-and-develop-talent
https://forum.effectivealtruism.org/posts/axxX5DekQcY9HNkHe/coaching-an-under-appreciated-strategy-among-effective
https://forum.effectivealtruism.org/posts/6BXrSZGayibJjR9uc/long-term-future-fund-november-2019-short-grant-writeups#Damon_Pourtahmaseb_Sasi___40_000_
Most/all posts tagged https://forum.effectivealtruism.org/tag/effective-altruism-coaching
Probably many (but not most) posts tagged https://forum.effectivealtruism.org/tag/personal-development, effective altruism lifestyle, self-care, or https://forum.effectivealtruism.org/tag/management-and-mentoring
Probably some posts tagged https://forum.effectivealtruism.org/tag/career-advising
Probably https://forum.effectivealtruism.org/posts/CwFyTacABbWuzdYwB/ea-needs-consultancies
Examples of posts that would be covered by a tag covering therapy:
https://forum.effectivealtruism.org/posts/6BXrSZGayibJjR9uc/long-term-future-fund-november-2019-short-grant-writeups#Damon_Pourtahmaseb_Sasi___40_000_
Probably many (but not most) posts tagged effective altruism lifestyle, self-care, or https://forum.effectivealtruism.org/tag/personal-development
Yes, makes a lot of sense. Not sure why we don’t have such a tag already.
Weak preference for coaching over coaching & therapy.
Ok, now created, with coaching as the name for now
Management/mentoring, or just one of those terms, or People management, or something like that
This tag could be applied to many posts currently tagged Org strategy, Scalably using labour, Operations, research training programs, Constraints in effective altruism, WANBAM, and effective altruism hiring. But this topic seems sufficiently distinct from those topics and sufficiently important to warrant its own entry.
Sounds good. I haven’t reviewed the relevant posts, so I don’t have a clear sense of whether “management” or “mentoring” is a better choice; the latter seems preferable other things equal, since “management” is quite a vague term, but this is only one consideration. In principle, I could see a case for having two separate entries, depending on how many relevant posts there are and how much they differ. I would suggest that you go ahead and do what makes most sense to you, since you seem to have already looked at this material and probably have better intuitions. Otherwise I can take a closer look myself in the coming days.
Ok, I’ve now made this, for now going with just one entry called Management & mentoring, but flagging on the Discussion page that that could be changed later.
We’ve now redirected almost all of EA Concepts to Wiki entries. A few of the remaining concepts (e.g. “beliefs”) don’t seem like good wiki entries here, so we won’t touch them.
However, there are a couple of entries I think could be good tags, or good additions to existing tags:
Charity recommendations
Focus area recommendations
It seems good to have wiki entries that contain links to a bunch of lists of charity and/or focus area recommendations. Maybe these are worked into tags like “Donation Choice”/”Donation Writeup”, or maybe they’re separate.
(Wherever the entries end up, they should probably link to GWWC’s donation advice page, which is the most thorough I know of.)
Charity evaluators, e.g. GiveWell and Animal Charity Evaluators, have Wiki entries with sections listing their current recommendations. One option is to make the charity recommendations entry a pointer to existing Wiki entries that include such sections. Alternatively, we could list the recommendations themselves in this new Wiki entry, perhaps organizing it as a table that shows, for each charity, which charity evaluators recommend it.
Adjacent communities or something like that is a potential entry/tag (though not very high priority).
Some posts on that theme:
https://forum.effectivealtruism.org/posts/XHHwTu2PCr9CGpLpa/what-is-the-closest-thing-you-know-to-ea-that-isn-t-ea
https://forum.effectivealtruism.org/posts/zA9Hr2xb7HszjtmMx/name-for-the-larger-ea-adjacent-ecosystem
Yeah, how about communities adjacent to effective altruism?
Sounds good! Thanks.
I created a stub. As usual, feel free to revise or expand it.
Update: I’ve now made this entry.
Requests for proposals or something like that
To cover posts like https://forum.effectivealtruism.org/posts/EEtTQkFKRwLniXkQm/open-philanthropy-is-seeking-proposals-for-outreach-projects
This would be analogous to the Job listings tags, and sort of the inverse of the Funding requests tag.
This overlaps in some ways with Get involved and Requests (open), but seems like a sufficiently distinct thing that might be sufficiently useful to collect in one place that it’s worth having a tag for this.
This could also be an entry that discusses pros, cons, and best practices for Requests for proposals. Related entries include Grantmaking and EA funding.
Update: I’ve now made this entry.
Defense in depth
Relevant links/tags:
https://forum.effectivealtruism.org/posts/mvdKkvtfFv4Sa7ufZ/cotton-barratt-daniel-and-sandberg-defence-in-depth-against
https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance
See especially footnote 19
https://forum.effectivealtruism.org/posts/emvBqtzYYRQwGrazx/the-web-of-prevention
Seems like a useful concept for risk analysis and mitigation in general.
Update: I’ve now made this entry.
Semiconductors or Microchips or Integrated circuit or something like that
The main way this is relevant to EA is as a subset of AI governance / AI risk issues, which could push against having an entry just for this.
That said, my understanding is that a bunch of well-informed people see this as a fairly key variable for forecasting AI risks and intervening to reduce those risks, to the point where I’d say an entry seems warranted.
Meta: perhaps this post should be renamed ‘Propose and vote on potential entries’ or ‘Propose and vote on potential tags/Wiki articles’? We generally use the catch-all term ‘entries’ for what may be described as either a tag or a Wiki article.
Yeah, I considered that a few weeks ago but then (somewhat inexplicably) didn’t bother doing it. Thanks for the prod—I have now done it :)
Update: I’ve now made this entry
career advising or career advice or career coaching or something like that
We already have career choice. But that’s very broad. It seems like it could be useful to have an entry with the more focused scope of things like:
How useful do various forms of career advising tend to be?
What are best practices for career advising?
What orgs work in that space?
E.g., 80k, Animal Advocacy Careers, Probably Good, presumably some others
How can one test fit for or build career capital in career advising?
This would be analogous to how we have an entry for donation choice but also entries for grantmaking, intervention evaluation, and charity evaluation.
Charter cities or special economic zones or whatever the best catchall term for those things + seasteading is
From a quick search for “charter cities” on the Forum, I think there aren’t many relevant posts, but there are:
https://forum.effectivealtruism.org/posts/EpaSZWQkAy9apupoD/intervention-report-charter-cities
https://forum.effectivealtruism.org/posts/9422BL5mDTzWBdPs4/link-the-case-for-charter-cities-within-the-ea-framework-cci
https://forum.effectivealtruism.org/posts/j63K34P9hermM4bfN/why-do-we-need-philanthropy-can-we-make-it-obsolete
https://forum.effectivealtruism.org/posts/Ds8BXE7stuWgA3Go7/the-future-of-earning-to-give
https://forum.effectivealtruism.org/posts/YivqwF4zNCTRSfk73/rachel-glennerster-fireside-chat-2018 (literally just two sentences, though)
Maybe there are other posts that would come up for “special economic zones” or “seasteading”.
Maybe this is too niche a topic to warrant its own entry, given that we already have entries like global health and development and economic growth?
Yes, definitely. I already had some scattered notes on this. There’s also the 80k podcast episode:
Wiblin, Robert & Keiran Harris (2019) The team trying to end poverty by founding well-governed ‘charter’ cities, 80,000 Hours, March 31.
An interview with Mark Lutter and Tamara Winter from the Charter Cities Institute.
Update: I’ve now made this entry
Charity evaluation or (probably less good) Charity evaluator
We already have entries for donation choice, intervention evaluation, and cause prioritisation. But charity evaluation is a major component of donation choice for which we lack an entry. This entry could also cover things about charity evaluation orgs like GiveWell, e.g. how useful a role they serve, what the best practices for them are, and whether there should be one for evaluating longtermist charities or AI charities or whatever.
Downside of this name: Really it might be better to speak of “funding opportunity evaluation” or “project evaluation”, to capture things that aren’t precisely charities. But “charity evaluation” seems like the standard term for this sort of thing, and the entry could just mention that those nearby things also exist and that things about them could get this tag too.
I think this should clearly exist.
Update: I’ve now made this entry.
Effective altruism outreach in schools or High school outreach or something like that
Overlaps with https://forum.effectivealtruism.org/tag/effective-altruism-education , but that entry is broader, and it seems like now there’s a decent amount of activity or discussion about high school outreach specifically. E.g.:
Parts of this EAIF report, and presumably future stuff that comes from some of those grants
https://forum.effectivealtruism.org/posts/HcaB2kJKhxJtS4oGc/some-thoughts-on-ea-outreach-to-high-schoolers
A Slack workspace for people involved in such things
I’m in favor.
Barriers to effective giving or Psychology of (in)effective giving or something like that
Bibliography
Why aren’t people donating more effectively? | Stefan Schubert | EA Global: San Francisco 2018
EA Efficacy and Community Norms with Stefan Schubert [see description for why this is relevant]
[Maybe some other Stefan Schubert stuff]
[Probably some stuff by Lucius Caviola, David Reinstein, and others]
Related entries
cognitive bias | cost-effectiveness | donation choice | diminishing returns | effective giving | market efficiency of philanthropy | rationality | scope neglect | speciesism | temporal discounting
---
Relevant posts:
Some Infrastructure Fund / Meta Fund payout reports
Probably other stuff
Yeah, I think Psychology of effective giving is probably the best name. Stefan, Lucius and others have published a bunch of stuff on this, which would be good to cover in the article.
This is one of many emerging areas of research at the intersection of psychology and effective altruism:
- psychology of effective giving (Caviola et al. 2014; Caviola, Schubert & Nemirow 2020; Burum, Nowak & Hoffman 2020)
- psychology of existential risk (Schubert, Caviola & Faber 2019)
- psychology of speciesism (Caviola 2019; Caviola, Everett & Faber 2019; Caviola & Capraro 2020)
- psychology of utilitarianism (Kahane et al. 2018; Everett & Kahane 2020)
I was thinking of covering all of this research in a general entry on the psychology of effective altruism, but we can also have separate articles for each.
I forgot that there was already an EA Psychology tag, so I’ve now just renamed that, added some content, and copied this comment of Pablo’s on that Discussion page.
(It could still make sense for someone to also create entries on those other topics and/or on moral psychology—I just haven’t done so yet.)
Great, thanks.
Apparently there’s a new review article by Caviola, Schubert, and Greene called “The Psychology of (In)Effective Altruism”, which pushes in favour of roughly that as the name.
I also think that, as you suggest, that can indeed neatly cover “psychology of effective giving” (i.e., that seems a subset of “psychology of effective altruism”), and maybe “psychology of utilitarianism”.
But I’m less sure that that neatly covers the other things you list. I.e., the psychology of speciesism and existential risk are relevant to things other than how effective people will be in their altruism. But we can just decide later whether to also have separate entries for those, and if so I do think they should definitely be listed in the Related entries section from the “main entry” on this bundle of topics (and vice versa).
So I think I currently favour:
Having an entry called psychology of (in)effective altruism
With psychology of effective altruism as a second-to-top pick
Probably not currently having a separate entry for psychology of (in)effective giving
But if people think there’s enough distinctive stuff to warrant an entry/tag for that, I’m definitely open to it
Maybe having separate entries for the other things you mention
Psychology of (in)effective altruism is adequate for a paper, where authors can use humor, puns, and other informal devices, but inappropriate for an encyclopedia, which should keep a formal tone.
(To elaborate: by calling the field of study e.g. the ‘psychology of effective giving’ one is not confining attention only to the psychology of those who give particularly effectively: ‘effective giving’ is used to designate a dimension of variation, and the field studies the underlying psychology responsible for causing people to give with varying degrees of effectiveness, ranging from very effectively to very ineffectively. By analogy, the psychology of eating is meant to also study the psychology of people who do not eat, or who eat little. A paper about anorexia may be called “The psychology of (non-)eating”, but that’s just an informal way of drawing attention to its focus; it’s not meant to describe a field of study called “The psychology of (non-)eating”, and that’s not an appropriate title for an encyclopedia article on such a topic.)
Yeah, the ultra-pedantic+playful parenthetical is a very academic thing. “Psychology of effective altruism” seems to cover giving/x-risk/speciesism/career choice—i.e. it covers everything we want.
Given the fact you both say this and the upvotes on those comments, I think we should probably indeed go with “psychology of effective altruism” rather than “psychology of (in)effective altruism”.[1]
I still don’t think that actually totally covers psychology of speciesism, since speciesism is not just relevant in relation to altruism. Likewise, I wouldn’t say the psychology of racism or of sexism are covered by the area “psychology of effective altruism”. But I do think the entry on psychology of effective altruism should discuss speciesism and so on, and that if we later have an entry for psychology of speciesism they should link to each other.
[1] But FWIW:
I don’t naturally interpret the “(in)” device as something like humour, a pun, or an informal device
I think “psychology of effective altruism” and “psychology of ineffective altruism” do call to mind two distinct focuses, even if I’d expect each thing to either cover (with less emphasis) or “talk to” work on the other thing
Somewhat analogously, areas of psychology that focus on what makes for an especially good life (e.g., humanistic psychology) are meaningfully distinct from those that focus on “dysfunction” (e.g., psychopathology), and I believe new terms were coined primarily to highlight that distinction
But I don’t think this matters much, and I’m totally happy for “psychology of effective altruism” to be used instead.
(Oh, just popping a thought here before I go to sleep: “moral psychology” is a relevant nearby thing. Possibly it’d be better to have that entry than “psychology of effective altruism”? Or to have both?)
Our World in Data
Some posts where this tag would be particularly relevant:
https://forum.effectivealtruism.org/posts/ZuTZSau76f4wAGT7C/should-i-give-to-our-world-in-data
https://forum.effectivealtruism.org/posts/6dsrwxHtCgYfJNptp/the-world-is-much-better-the-world-is-awful-the-world-can-be
https://forum.effectivealtruism.org/posts/2n8eCkaLkCE5RMiXN/ea-meta-fund-november-2019-payout-report#_3__Our_World_in_Data____30k
But from a quick search, it seems like at least 20 posts mention Our World in Data somewhere, and presumably some of them also say enough about it to warrant a tag.
Thanks. Coincidentally this was published yesterday. But I haven’t done any tagging yet.
Ah, nice. Maybe I searched for the entry shortly before it was published. I’ve now tagged those 3 posts I mentioned, but haven’t checked and tagged other things that come up when you search “Our World in Data”.
There are lots of hits, many of them ‘EA updates’ posts. The three results that I thought deserved to be tagged were precisely the ones you had already identified. I haven’t looked at this exhaustively, though, so if you find other relevant articles, feel free to add the tag to those, too.
Some orgs it might be worth making entries about:
Nonprofits that were incubated by Charity Entrepreneurship
(I think some already have entries, but not all)
Swiss Existential Risk Initiative (CHERI)
https://effectivealtruism.ch/swiss-existential-risk-initiative
Related entries: Stanford Existential Risk Initiative, Simon Institute for Longterm Governance
Thanks, I’m in the process of compiling a master list of EA orgs and creating entries for the missing ones. Would you be interested in looking at the spreadsheet?
Yeah, I’ll send you a DM
Epistemic challenge, or The epistemic challenge, or Epistemic challenges, or any of those but with “to longtermism” added
Relevant posts include the following, and presumably many more:
https://forum.effectivealtruism.org/posts/FhjDSijdWrhFMgZrb/the-epistemic-challenge-to-longtermism-tarsney-2020
https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis
https://forum.effectivealtruism.org/posts/jBmLrYJJh4kydhpcD/the-case-for-strong-longtermism
Related entries
cluelessness
longtermism
expected value
forecasting
Another idea: Long-range forecasting (or some other name covering a similar topic).
See e.g. https://forum.effectivealtruism.org/posts/s8CwDrFqyeZexRPBP/link-how-feasible-is-long-range-forecasting-open-phil
Related entries: cluelessness | estimation of existential risk | forecasting | longtermism
Given how much the scope of this entry/tag would overlap with the scope of an epistemic challenge to longtermism tag, and how much both would overlap with other entries/tags we already have, I think we should probably only have one or the other. (I could be wrong, though. Maybe we should have both but with one being wiki-only. Or maybe we should have both later on, once the Wiki has a larger set of entries and is perhaps getting more fine-grained.)
I agree with having this tag and subsuming epistemic challenge to longtermism under it. We do already have forecasting and AI forecasting, so some further thinking may be needed to avoid overlap.
Ok, I’ve now made a long-range forecasting tag, and added a note there that it should probably subsume/cover the epistemic challenge to longtermism as well.
And yeah, I’m open to people adjusting things later to reduce how many entries/tags we have on similar topics.
Is the “epistemic challenge to longtermism” something like “the problem of cluelessness, as applied to longtermism”, or is it something different?
People in EA sometimes use the term “cluelessness” in a way that’s pretty much referring to the epistemic challenge or the idea that it’s really really hard to predict long-term-future effects. But I’m pretty sure the philosophers writing on this topic mean something more specific and absolute/qualitative, and a natural interpretation of the word is also more absolute (“clueless” implies “has absolutely no clue”). I think cluelessness could be seen as one special case / subset of the broader topic of “it seems really really hard to predict long-term future effects”.
I write about this more here and here.
Here’s an excerpt from the first of those links:
Meanwhile, the epistemic challenge is the more quantitative, less absolute, and in my view more useful idea that:
effects probably get harder to predict the further in the future they are
this might mean we should focus on the near-term if that gradual decrease in our predictive power outweighs the increased scale of the long-term future compared to the nearer-term.
On that, here’s part of the abstract of Tarsney’s paper:
I think there should either be an entry for each of Accident risk, Misuse risk, and Structural risk, or a single entry that covers all three, or something like that.
Maybe these entries should just focus on AI, since that’s where the terms were originally used (as far as I’m aware). On the other hand, I think the same concepts also make sense for other large-scale risks from technologies.
If the entries do focus on AI, maybe they should have AI in the name (e.g. AI accident risk or Accident risk from AI), or maybe not.
In this case, the reason I’m posting this here rather than just making it is that I’m not sure exactly what form this entry or set of entries should take; I’m quite confident that something like this should exist.
I think the first place these terms were all used may have been this post: https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure
They’re also used here: https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact
There’s an accidental harm article, which is meant to cover the risk of causing harm as an unintended effect of trying to do good, as discussed e.g. here. What you describe is somewhat different, since the risk results not so much from “attempts to do good” but from the development of a technology in response to consumer demand (or other factors driving innovation not directly related to altruism). Furthermore, misuse risk can involve deliberate attempts to cause harm, in addition to unintended harm. I guess all of these risks are instances of the broader category of “downside risk”, so maybe we can have an article on that?
I think there are indeed overlaps between all these things.
But I do think that the application of these terms to technological risk specifically or AI risk specifically is important enough to warrant its own entry or set of entries.
Maybe if you feel their distinctive scope is at risk of being unclear, that pushes in favour of sticking with the original AI-focused framing of the concepts, and maybe just mentioning in one place in the entry/entries that the same terms could also be applied to technological risk more broadly? Or maybe it pushes in favour of having a single entry focused on this set of concepts as a whole and the distinctions between them (maybe called Accident, misuse, and structural risks)?
I also wouldn’t really want to say misuse risk is an instance of downside risk. One reason is that it may not be downside risk from the misuser’s perspective, and another is that downside risk is often/usually used to mean a risk of a downside from something that is or is expected to be good overall. More on this from an older post of mine:
Also, I think I see “accidental harm” as sufficiently covering standard uses of the term “downside risk” that there’s not a need for a separate entry. (Though maybe a redirect would be good?)
EA vs Non-EA Orgs
Proposed tag description:
Alternative tag names:
EA vs Non-EA Organisations
EA and Non-EA Orgs
Explicitly EA Careers vs Other Careers
Explicitly EA vs Other Careers
Other ideas?
Some posts that would fit this tag:
https://forum.effectivealtruism.org/posts/vHPR95Gnsa3Gkgjof/consider-a-wider-range-of-jobs-paths-and-problems-if-you
https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really
https://forum.effectivealtruism.org/posts/Lms9WjQawfqERwjBS/the-career-and-the-community
https://forum.effectivealtruism.org/posts/yAFXfuwsebEhNgLTf/getting-people-excited-about-more-ea-careers-a-new-community
https://forum.effectivealtruism.org/posts/vMpuXz2zqS8iHya7i/ea-jobs-provide-scarce-non-monetary-goods
https://forum.effectivealtruism.org/posts/mPMg9aL3HwGQ3ghK9/eas-working-at-non-ea-organizations-what-do-you-do
Maybe https://forum.effectivealtruism.org/posts/CkYq5vRaJqPkpfQEt/a-framework-for-thinking-about-the-ea-labor-market-1
Maybe https://forum.effectivealtruism.org/posts/EP6X362Q3ziibA99e/show-a-framework-for-shaping-your-talent-for-direct-work
Maybe https://forum.effectivealtruism.org/posts/YHyvjYSEQtp3nfd6c/thoughts-on-80-000-hours-research-that-might-help-with-job
Maybe some posts tagged Criticism (EA Orgs), or under the other Criticism tags
I like it. Maybe “Working at EA vs Non-EA Orgs?”
Cool, done.
I think that name is clearer, but I’d thought brevity was substantially preferred for tag names. I’m personally more inclined towards clarity than brevity here, though, so I’ll use your suggested name. Someone can change it later anyway.
Scalably Using People or Scalably Using Labour or Task Y or something like that
Proposed description:
Notes on that description:
I’ll obviously add the links to the “See also” tags if I actually make the tag; I’m just being lazy here
Not sure all those “See also” tags are relevant enough to mention
That description is assuming the name is something like Scalably Using People.
Task Y is a somewhat different concept, and it’s less immediately obvious how the term links to the concept, so the description would need to be different.
Posts that would warrant this tag include:
https://forum.effectivealtruism.org/posts/uWWsiBdnHXcpr7kWm/can-the-ea-community-copy-teach-for-america-looking-for-task
https://forum.effectivealtruism.org/posts/HBKb3Y5mvb69PRHvP/dealing-with-network-constraints-my-model-of-ea-careers
https://forum.effectivealtruism.org/posts/oNY76m8DDWFiLo7nH/what-to-do-with-people
https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained
Many of the posts that link to that post (see the “pingbacks” at the end of the post)
A sequence of small posts I’m working on
I think a bunch of other stuff on the Forum too
I’m pro. I’d call it Task Y, though I wouldn’t be surprised if there was a reason not to.
Cool, given that, I’ve now made the tag. I’ve called it Scalably Using People rather than Task Y, with the key reason being that Alex originally described Task Y as being a single task. More generally, I think that the description of Task Y wouldn’t neatly cover things like the vetting-constrained discussion or Jan’s discussion of hierarchical network structures, and I’m hoping for this tag to cover things like that as well. So I see Task Y as a subset of what I’m hoping this tag will cover.
I’m definitely open to people suggesting alternative names, though.
Industrial Revolution
We already have a variety of related tags, like History, Economic Growth, and Persistence of Political/Cultural Variables. But the Industrial Revolution does seem like perhaps the single most notable episode of history for many/all EA cause areas, so maybe we should have a tag just for it?
Some posts that would warrant the tag:
https://forum.effectivealtruism.org/posts/7QiXR2dv8KL4fkf9D/notes-on-henrich-s-the-weirdest-people-in-the-world-2020
https://forum.effectivealtruism.org/posts/TMCWXTayji7gvRK9p/is-democracy-a-fad
Maybe https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1
My guess is that a better tag would be “History of Economic Growth”. Because I can’t picture a case where someone wants to find things about the industrial revolution but not all of economic growth. (Unless they’re doing a specific research project, but that sounds pretty niche.)
But even still, I’d tentatively lean towards economic growth being enough. But I think that depends on how fine-grained our tagging system should be, which I don’t have a strong opinion on.
This seems reasonable. I was also unsure about my suggestion, hence popping it here rather than making it. I’ll hold off for now, at least.
Cultural Evolution
One relevant post: https://forum.effectivealtruism.org/posts/7QiXR2dv8KL4fkf9D/notes-on-henrich-s-the-weirdest-people-in-the-world-2020
I haven’t searched my memory or the Forum for other relevant posts yet.
This would overlap somewhat with the tags for Memetics and Persistence of Political/Cultural Variables.
I’m in favor.
Cool—done.
Update: I’ve now made this tag.
Persistence of Political/Cultural Variables (or Cultural Persistence, or Cultural, Political, and Moral Persistence, or something like that)
First pass at a description:
Posts that would warrant this tag include:
https://forum.effectivealtruism.org/posts/TMCWXTayji7gvRK9p/is-democracy-a-fad
https://forum.effectivealtruism.org/posts/usL8XErNqDxwoNQj8/long-term-influence-and-movement-growth-two-historical-case
Probably many others; I haven’t scanned my memory for relevant posts yet
Seems reasonable to me. Want to go ahead and create it?
Done!
One consideration I just thought of, which I do not recall seeing mentioned elsewhere, is that the optimal number of tags depends somewhat on the typical tag use case.
Clicking on an article’s tags to find other related articles
As only a small % of tags apply to any given article, and this % will fall as the number of tags increases, article tag spaces will not become too ‘busy’.
Hence there should be many tags, so that each article can be tagged as usefully as possible.
Clicking on the tag list to find a specific topic
There are already so many tags it is hard to find the one you want.
This is especially an issue because any given concept often has multiple associated words, so you can’t always Ctrl-F.
Good points.
Maybe the ideal for future will be to have hierarchies/categories of Forum tags? LessWrong now does this (though I haven’t looked at their system in detail).
Update: I’ve now made this tag.
Fellowships or EA-Aligned Fellowships or Research Fellowships or something like that
Stefan Schubert writes:
Maybe this would be partially addressed via a tag for posts about these things. I imagine that could be useful for people who are considering running or participating in such a fellowship, or people who definitely will and want to get some insights into how best to do so.
I think the sort of thing that’d obviously be covered are the Summer Research Fellowships offered by FHI and CLR, and the Early Career Conference Programme offered by GPI.
I’m not sure whether this tag should also include:
longer things that are still training-ish (e.g., FHI’s 2 year Research Scholars Program)
non-research internships at EA orgs
research fellowships at non-EA (but potentially impactful) orgs
the fellowships some EA university groups run
I’m therefore also not sure what the ideal name and description would be.
(Update: I’ve now made this tag.)
Institutions for Future Generations
This is arguably a subset of Institutional Decision-Making and/or Policy Change. It also overlaps with Longtermism (Philosophy) and Moral Advocacy / Values Spreading. But it seems like this is an important category that various people might want to learn about in particular (i.e., not just as part of learning about institutional decision-making more broadly), and like there are many EA Forum posts about this in particular.
(Update: I’ve now made this tag.)
China (or maybe something broader like BRICS or Rising Powers)
Rough proposed description:
It seems perhaps odd to single China out for a tag while not having tags for e.g. USA, Southeast Asia, ASEAN, United Nations, Middle Powers. But we do have a tag for posts relevant to the European Union. And China does seem like a particularly important topic, and one that it makes sense to have a specific tag for. And maybe we should indeed have tags for United Nations and Middle Powers.
I’d be interested in thoughts on whether BRICS, Rising Powers, or something else would be a better label/scope for this tag than China.
Update: I’ve created the tag “Discussion Norms”
Community Norms/Discussion Norms
Very Bad Description: Posts that discuss norms for how EAs should interact with each other.
Posts this tag could apply to:
Robert Wiblin, Six Ways To Get Along With People Who Are Totally Wrong*
Jess Whittlestone, Supportive Scepticism
Michelle Hutchinson and Jess Whittlestone, Supportive Scepticism in Practice
Owen Cotton-Barratt, Keeping the Effective Altruism movement welcoming
The extraordinary value of ordinary norms by Emily Tench
Me, Suggestions for Online EA Discussion Norms
Considering Considerateness: Why communities of do-gooders should be exceptionally considerate by Stefan Schubert
Issues with existing tags:
Cooperation & Coordination—this seems more high-level or strategic, like donor coordination. But I’m quite uncertain—it could include discussion norms. I think there’s probably value in a separate tag though, because this is the kind of thing group organisers would find useful and could quickly share with people.
Community—applies but is too broad, so doesn’t help identify these posts
Movement Strategy—also too broad
Diversity & Inclusion—too narrow—not all community norms are about D&I
EA Messaging—too narrow—sometimes relevant when discussing how you might talk to non-EAs in an EA setting (e.g. a newcomer at an event)
My quick, personal take is that:
A tag for Discussion Norms seems useful and distinct from the other tags you mention. It also wouldn’t have to only be about discussion norms for intra-EA interactions—it could also be about discussion norms in other contexts.
“Community Norms” and “Posts that discuss norms for how EAs should interact with each other” feel very broad to me, and it’s harder for me to see precisely what that’s trying to point at that isn’t captured by one of the first three other tags you mention.
But I have a feeling that something like Community Norms/Discussion Norms could have a clear scope that’s useful and distinct from the other tags. Maybe if you just try to flesh out what you mean a little more in the description, it’d be clear to me?
Maybe what you have in mind will often relate to things like being welcoming, supportive, and considerate? If so, maybe adjusting the tag label or description in light of that could help?
I think Discussion Norms makes sense!
Discussion Norms: Posts about suggested or encouraged norms within the EA community on how to interact with other EAs, which may often relate to being supportive, welcoming and considerate.
It’s still not great; if you have any feedback, I’d be keen to hear it!
Anyone have thoughts on this tag? I’m skeptical, but might be more inclined if I saw more applications that were good, or if it had a description that explained its naturalness as a category in the EA-sphere. (If this were a business forum it would obviously be good, and maybe it is in this Forum — I’m not sure.)
My quick take is that it does seem like it at least needs a description that explains why it warrants an EA Forum tag. I’d wonder, for example, whether it’s meant to just be about scaling organisations (e.g., EA orgs), or also about scaling things like bednet distribution programs. (Or maybe those two things are super similar anyway?)
Do we need both Longtermism (Philosophy) and Long-Term Future?
Personally, I think those two tags have sufficiently large and separate scopes for it to make sense for the forum to have both tags. (I didn’t create either tag, by the way.)
But the Longtermism (Philosophy) tag has perhaps been used too liberally, including for posts that should’ve only been given tags like Long-Term Future or Existential Risk. Perhaps this is because the Longtermism (Philosophy) tag was around before Long-Term Future was created (not sure if that’s true), and/or because the first two sentences of the Longtermism (Philosophy) tag didn’t explicitly indicate that its scope was limited to philosophical aspects of longtermism only. Inspired by your comment, I’ve now edited the tag description to hopefully help a bit with that.
The tag description used to be:
The tag description is now:
(The second sentence could perhaps be cut.)
For comparison, the tag description of Long-Term Future is:
(Update: I’ve now made this tag.)
Cooperation & Coordination or [just one of those terms] or Moral Trade
(I think I lean towards the first option and away from Moral Trade.)
Proposed description:
Some posts this would cover:
Effective Altruism and Free Riding
Common ground for longtermists
Various things which have the other tags mentioned above
Some of the sorts of things the Center on Long-Term Risk and 80,000 Hours have written previously, though I’m not sure how many of those specific things are on the forum
Arguments against this:
Too broad?
Maybe it just sounds that way, and a different name and/or description would fix that?
Well covered by the other tags mentioned above?
I don’t think so, really
Not enough forum posts this is relevant to?
Even if that’s true, I expect there will be more in future, or that we should just fix that by making link posts to CLR and 80k posts
Alternative idea:
A tag for Game Theory?
But that feels like a less natural category for the forum for me
I lightly think both is better than either one on its own.
Ok, I’ve now made this tag and used the name that includes both terms :)
Maybe some of the existing tags related to politics & policy should be deleted, and a tag for Politics & Policy should replace them?
Some relevant tags that might be on the chopping block: Improving Institutional Decision-Making, Policy Change, Political Polarisation, International Relations, Direct Democracy, and Global Governance.
I think I’m moderately against this idea, as I think the sub-topics are large/important enough to warrant their own tags, even if there’s a lot of overlap. But I thought I’d throw this idea out there anyway.
I like this idea, sort of. I think we should create a politics and policy “mega-tag” (the tags that show up in white, like Existential risk) while keeping the others as sub-tags.
What do you think about the policy change entry? One option is to rename it to just policy and use it as the “mega-tag” you propose.
That’s a good idea.
(If you hate the above idea but also hate disrupting my delicious karma, feel free to downvote that comment and upvote this one to keep the universe in order.
Or vice versa, I guess, if you’re a maverick.)
(Update: I’ve now made this tag.)
Improving Institutional Decision-Making (or similar)
Arguments against:
Arguably overlaps somewhat with the existing tags Forecasting, Policy Change, Political Polarisation, International Relations, Direct Democracy, and European Union
It might make more sense to instead change the name and description of Policy Change so it more clearly covers improving institutional decision-making as well
Arguments for:
Seems substantially distinct from any of the above tags, including Policy Change
A major topic in EA (e.g., one of 80k’s main problem areas, has a large FB group)
There are already 64 posts tagged Policy Change, and I’d guess >20 posts could warrant the tag Improving Institutional Decision-Making. So I think even if the topics overlapped quite a bit (which I’m not sure they do), they could each warrant a tag due to being big enough that the non-overlapping part is quite big.
The post that prompted this, because it’s clearly relevant to IIDM but doesn’t seem very relevant to Policy Change: Should We Prioritize Long-Term Existential Risk?
(Update: I’ve now made this tag, with the name Epistemic Humility and a description noting it can be about other, broadly related things as well.)
Social Epistemology & Epistemic Humility or [just one of those terms] or [some other label]
Some posts that might fit this tag:
In defence of epistemic modesty
Some thoughts on deference and inside-view models
EA reading list: cluelessness and epistemic modesty
“Good judgement” and its components
Maybe some other posts tagged Rationality
I really like Social Epistemology except for the crucial flaw that I haven’t heard it called that before. Without the ability for people to recognize it, I think it’s worse than Epistemic Humility. (Normally I’d prefer the more general term, rather than a term for one strategy within the space.)
Do you mean you haven’t heard the term social epistemology, or that you haven’t heard epistemic humility specifically (or debates around that) referred to by the term social epistemology?
I’d envision this tag including not just things like “How epistemically humble should we be, and how should we update given other people’s statements/beliefs?”, but also things like when we should give just our conclusions vs also our reasoning if we’re concerned about information cascades, and to what extent publicly stating explicit estimates will cause anchoring by others. Those things could arguably be seen as about epistemic humility in that they’re about how to communicate given how other people might handle epistemic humility, but saying they’re about social epistemology (or something else) seems more natural to me.
(That said, I think I’m only familiar with the term social epistemology from how it’s occasionally used by EAs, and the Wikipedia article’s lead section makes me uncertain if they’re using the term in the standard way.)
Maybe the best tag label would be Epistemic Humility & Social Epistemology, to put the term that’s more common in EA first? That’s a longer label than average, though.
FWIW, both my suggestion of this tag and my suggestion of the term social epistemology for it were prompted by the following part of Owen Cotton-Barratt’s recent post:
I have now read the post that contains Social Epistemology.
I also wasn’t clear before, but I was leaning towards one shorter label or another.
Global priorities research and macrostrategy.
I wanted to use these tags when asking this question, but they don’t seem to exist.
There is a tag on cause prioritization. But I think it’d be more useful if that tag was focused on content that is directly relevant for prioritizing between causes, e.g. “here is why I think cause A is more tractable than cause B” or “here’s a framework for assessing the neglectedness of a cause”. Some global priorities or macrostrategy research has this property, but not all of it. E.g. I think it’d be a bit of a stretch to apply the cause prioritization label to this (amazing!) post on Quantifying anthropic effects on the Fermi paradox.
I’ve now made a tag for Global Priorities Research. I currently think that anything we would’ve wanted to give a Macrostrategy tag to can just be given a Global Priorities Research tag instead, such that we don’t need a Macrostrategy tag, but feel free to discuss that in the “Discussion” page attached to the GPR tag.
I’m tentatively in favour of Macrostrategy. A big issue is that I don’t have a crisp sense of what macrostrategy is meant to be about, and conversations I’ve had suggest that a lot of people who work on it feel the same. So I’d have a hard time deciding what to give that tag to. But I do think it’s a useful concept, and the example post you mention does seem to me a good example of something that is macrostrategy and isn’t cause prioritisation.
I feel like a tag for Global Priorities Research is probably unnecessary once we have tags for both Cause Prioritisation and Macrostrategy? But I could be wrong. (Also I’m just offering my views as inputs; I have no gate-keeping role and anyone can make whatever tags they want.)
(Update: I’ve now made this tag.)
Moral Uncertainty
Arguments against:
Arguably a subset of Moral Philosophy
Overlaps with Meta-Ethics
Arguments for:
Arguably an important subset of Moral Philosophy
I’d estimate there are at least 10 posts on the topic
I’d be in favor.
What’s the intended difference between Meta-Ethics and Moral Philosophy?
As I understand it, ethics is often split into the branches meta-ethics, normative ethics, and applied ethics. I’m guessing the Moral Philosophy tag is meant to cover all of those branches, or maybe just the latter two. Meta-Ethics would just cover questions about “the nature, scope, and meaning of moral judgment” (Wikipedia).
So some questions that wouldn’t fit in Meta-Ethics, but would fit in Moral Philosophy, include:
Should we be deontologists or consequentialists?
What should be considered intrinsically valuable (e.g., suffering, pleasure, preference satisfaction, achievement, etc.)?
What beings should be in our moral circles?
Whereas Meta-Ethics could include posts on things like arguments for moral realism vs moral antirealism. (I’m not sure whether those posts should also go in Moral Philosophy.)
I noticed there’s no Consciousness tag, so I was going to create one, but then I saw the Sentience tag. Perhaps that should be renamed “Sentience / Consciousness”, and/or its description should be tweaked to mention consciousness?
(I’m putting this here so it can be up- or down-voted to inform whether this change should be made. I think the tag pages will later have the equivalent of Wikipedia’s “Talk” pages, at which point I’d put comments like this there instead.)
(Update: This got 2 upvotes, and continues to seem to me like a good idea, so I updated the name and description of this tag accordingly.)
I’ve edited this post to include our official mandate at the top. Thanks for creating it, MichaelA!
[Any 80,000 problem areas and career paths—or the additional problem areas and career ideas they mention—that are not directly covered by existing tags]
I haven’t yet looked through these problem areas and career paths/ideas with this in mind, to see what’s not covered by existing tags and what the arguments for and against creating new tags for these things would be.
(Feel free to comment yourself with specific tag ideas drawn from the 80k problem areas and career paths, or the additional ones they mention.)
(Update: I’ve now made this tag.)
Nanotechnology or Atomically Precise Manufacturing
Arguments against:
Maybe a little niche?
Somewhat well-covered by Existential Risk?
Arguments for:
Not super niche
80k highlight this as a potentially important area (though it’s not one of their top priorities)
The small set of (maybe-not-trustworthy) estimates we have suggest nanotech/APM is decently likely to be among the top 10 largest existential risks we know of (given usual ways of classifying things), and perhaps smaller only than AI and bio
Update: I’ve now made this entry
Grantmaking
Overlaps with EA funding and probably some other entries. But that entry is quite broad, and this entry could also cover things like how useful grantmaking is, how to test fit for it, best practices for grantmaking, etc. (which I’m not sure fit perfectly in EA funding).
Would overlap with vetting constraints if we make that entry (I proposed it elsewhere on this post).
I think this entry makes sense. Maybe effective altruism funding should be made more precise, but that’s a separate issue.
Just want to note that:
I still think it’d probably be good for someone to go through 80k articles and see which topics covered warrant a Forum wiki entry
In many cases, the entry’s name and scope might differ a little from the 80k one. E.g. we might want to go with academia and think tanks rather than 80k’s academic research and think tank research
I now realise that, while doing that, it’d also be cool if the person could add 80k links in the Bibliography / Further reading sections of relevant entries
E.g., I just added a link to an 80k article from our Academia entry
(Update: I’ve now made this tag.)
Think tanks
Could draw on and link to this 80k article: https://80000hours.org/career-reviews/think-tank-research/
(Update: I’ve now made this tag.)
Space (or maybe Space Governance, or Space Governance & Colonisation, or something along those lines)
“Governance of outer space” is mentioned by 80k here.
Would perhaps just be a subset of Long-Term Future. But perhaps a sufficiently large and important subset to warrant its own tag.
Some posts this should include:
Will we eventually be able to colonize other stars? Notes from a preliminary review
Space governance is important, tractable and neglected
Off-Earth Governance
An Informal Review of Space Exploration
Maybe Lunar Colony
Maybe Does Utilitarian Longtermism Imply Directed Panspermia?
(Update: I’ve now created this tag.)
Meta-Ethics
Argument against: This is arguably a subset of the tag Moral Philosophy.
Arguments for: This seems like an important subset, which there are several Forum posts about, and which some people might appreciate a specific tag about (e.g., if they’re beginning to grapple with meta-ethics and are less focused on moral philosophy as a whole right now).
Some posts this should include:
All posts in the sequence commencing with this one: Moral Anti-Realism Sequence #1: What Is Moral Realism?
Morality vs related concepts
Maybe Hi, I’m Luke Muehlhauser. AMA about Open Philanthropy’s new report on consciousness and moral patienthood (due to some comments there and some parts of the report)
Maybe a bunch of stuff on AI alignment, cause prioritisation, and/or moral uncertainty?
Maybe Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk
It might be worth going through the Effective Altruism Hub’s resource collections and the old attempts to build EA Wikis (e.g., the Cause Prioritization wiki), to:
See if that inspires useful new entries/tags
E.g., they might cover some topic that we then realise is worth having an entry for
Find resources that can be given a relevant tag, or listed in Bibliography / Further reading / External links sections
I assume some of this has been done already, but someone doing it thoroughly seems worthwhile.
Thanks Michael!
I manage the EA Hub Resources, but much of the content has been slowly getting outdated.
I think the best action will be to incorporate the content in the Learn and Take Action sections of the EA Hub Resources into the EA Forum wiki, and redirect Hub visitors to the wiki. I’m unlikely to have the time to do this soon, so I would be delighted if someone else took this on. Get in touch if you’re keen and I can assist + set up redirects when ready! Message me through the forum’s private messaging.
The rest of the resources are designed for EA group organisers and my current plan is to keep this outside of the wiki (but I’m happy for folks to try to change my mind!). I plan to move this content onto a new website in the next few months as the EA Hub team have decided to narrow their focus to the community directories.
I did this systematically for all the relevant wikis I was aware of, back when I started working on this project in mid 2020. Of course, it’s likely that I have missed some relevant entries or references.
Ah, nice. What about for the EA Hub stuff?
E.g., they’ve got a bunch of stuff on how to talk about EA, running EA-related events, and movement-building. And also curated collections for cause areas. And I don’t think I’ve seen those things linked to from tag pages?
I actually wasn’t aware of their resources section (EA Hub has changed a lot over the years and I haven’t stayed abreast of the latest changes). They used to have a wiki, which I did review, though some pages were not indexed by the Internet Archive. I wonder if they have migrated their old wiki content to the new resources page. In any case, I’ve made a note to investigate this further.
Hey Pablo!
You are right that the wiki is long dead. The current resources section was written independently from the wiki.
As I just commented up the thread, with the new EA Forum wiki (which is wonderful!), I think the content on the EA Hub intended for all EAs should be merged into the wiki, and then I can retire those pages and set up redirects. More than happy to chat more about this!
Thanks for your message! Can you email me at stafforini.com preceded by MyName@, or share an email address where I can reach you?
(EDIT: We have now contacted each other.)
Great that you two have connected!
In the other thread, Catherine says:
Yeah, I don’t think the EA Forum Wiki needs to eat everything else—other options include:
Just include a link in Further reading or Bibliography to the external collection of resources
See e.g. the link to my own collection of resources from here
Look through the collection, give the appropriate tag to the Forum posts that are in that collection, and maybe include links to some other specific things in the Further reading or Bibliography section
Sounds good!
Mind uploads, or Whole brain emulation, or maybe Digital minds
I think that:
These concepts overlap somewhat with artificial sentience
But these concepts (or at least mind uploads and WBE) are also meaningfully distinct from artificial sentience
But I could be wrong about either of those things.
Further reading
Age of Em
https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
Related entries
artificial sentience | consciousness research | intelligence and neuroscience | long-term future | moral patienthood | non-humans and the long-term future | number of future people
Definitely. I already was planning to have an entry on whole brain emulation and have some notes on it… wait, I now see the tag already exists. Mmh, it seems we missed it because it was “wiki only”. Anyway, I’ve removed the restriction now. Feel free to paste the ‘further reading’ and ‘related entries’ sections (otherwise I’ll do it myself; I just didn’t want to take credit for your work).
Cool, I’ve now added those related entries and the “roadmap” report (Age of Em was already cited).
Non-longtermist arguments for GCR reduction, or Non-longtermist arguments for prioritising x-risks, or similar but with “reasons” instead of arguments, or some other name like that
The main arguments I have in mind are the non-longtermist 4 of the 5 arguments Toby Ord mentions in The Precipice, focusing on the past, the present, civilizational virtues, and cosmic significance.
Ideally, the entry would cover both (a) such arguments and (b) reasons why those arguments might be much weaker than the longtermist arguments and thus might not by themselves justify placing a strong focus on GCR/x-risk reduction (at least if reducing such risks seems much less tractable or neglected than other things).
Examples of posts that this would cover:
The person-affecting value of existential risk reduction
Link-posts to Precipice-focused talks by Ord
Some posts by ALLFED, e.g. https://forum.effectivealtruism.org/posts/XA8QSCL7wZ973i6vr/agi-safety-and-losing-electricity-industry-resilience-cost
I’m not sure whether the label should focus on GCRs or x-risks. I think the present- and civilizational-virtue-focused arguments apply to GCR reduction, but the past- and cosmic-significance-focused arguments probably don’t.
I think this would be a very useful article to have. It seems challenging to find a name for it, though. How about short-termist existential risk prioritization? I am not entirely satisfied with it, but I cannot think of other alternatives I like more. Another option, inspired by the second of your proposals, is short-termist arguments for prioritising existential risk. I think I prefer ‘risk prioritization’ over ‘arguments for prioritizing’ because the former allows for discussion of all relevant arguments, not just arguments in favor of prioritizing.
Hmm, I don’t really like “short-termist” (or “near-termist”), since that only seems to cover what Ord calls the “present”-focused “moral foundation” for focusing on x-risks, rather than also the past, civilizational virtue, or cosmic significance perspectives.
Relatedly, “short-termist” seems like it implies we’re still assuming a broadly utilitarian-ish perspective but just not being longtermist, whereas I think it’d be good if these tags could cover more deontological and virtue-focused perspectives. (You could have deontological and virtue-focused perspectives that prioritise x-risk in a way that ultimately comes down to effects on the near-term, but not all such perspectives would be like that.)
Some more ideas:
Existential risk prioritization for non-longtermists
Alternative perspectives on existential risk prioritization
I don’t really like tag names that say “alternative” in a way that just assumes everyone will know what they’re alternative to, but I’m throwing the idea out there anyway, and we do have some other tags with names like that
The reasons for caring about x-risk that Toby mentions are relevant from many moral perspectives, but I think we shouldn’t cover them on the EA Wiki, which should be focused on reasons that are relevant from an EA perspective. Effective altruism is focused on finding the best ways to benefit others (understood as moral patients), and by “short-termist” I mean views that restrict the class of “others” to moral patients currently alive, or whose lives won’t be in the distant future. So I think short-termist + long-termist arguments exhaust the arguments relevant from an EA perspective, and therefore think that all the arguments we should cover in an article about non-longtermist arguments are short-termist arguments.
It’s not immediately obvious that the EA Wiki should focus solely on considerations relevant from an EA perspective. But after thinking about this for quite some time, I think that’s the approach we should take, in part because providing a distillation of those considerations is one of the ways in which the EA Wiki could provide value relative to other reference works, especially on topics that already receive at least some attention in non-EA circles.
Hmm. I think I agree with the principle that “the EA Wiki should focus solely on considerations relevant from an EA perspective”, but have a broader notion of what considerations are relevant from an EA perspective. (It also seems to me that the Wiki is already operating with a broader notion of that than you seem to be suggesting, given that e.g. we have an entry for deontology.)
I think the three core reasons I have this view are:
effective altruism is actually a big fuzzy bundle of a bunch of overlapping things
we should be morally uncertain
in order to do good from “an EA perspective”, it’s in practice often very useful to understand different perspectives other people hold and communicate with those people in terms of those perspectives
On 1 and 2:
I think “Effective altruism is focused on finding the best ways to benefit others (understood as moral patients)” is an overly strong statement.
Effective altruism could be understood as a community of people or as a set of ideas, and either way there are many different ways one could reasonably draw the boundaries.
One definition that seems good to me is this one from MacAskill (2019):
“Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world. [...]
The definition is: [...] Tentatively impartial and welfarist. As a tentative hypothesis or a first approximation, doing good is about promoting wellbeing, with everyone’s wellbeing counting equally.” (emphasis added, and formatting tweaked)
I think we should be quite morally uncertain.
And many seemingly smart and well-informed people have given non-welfarist or even non-consequentialist perspectives a lot of weight (see e.g. the PhilPapers survey).
And I myself see some force in arguments or intuitions for non-welfarist or even non-consequentialist perspectives.
So I think we should see at least consideration of non-welfarist and non-consequentialist perspectives as something that could make sense as part of the project to “use evidence and reason to do the most good possible”.
Empirically, I think the above views are shared by many other people in EA
Including two of the main founders of the movement
MacAskill wrote a thesis and book on moral uncertainty (though I don’t know his precise stance on giving weight to non-consequentialist views)
Ord included discussion of the previously mentioned 5 perspectives in his book, and has given indication that he genuinely sees some force in the ones other than present and future
These views also seem in line with the “long reflection” idea that both of those people see as quite important
For long-reflection-related reasons, I’d actually be quite concerned about the idea that we should, at this stage of (in my view) massive ignorance, totally confidently commit to the ideas of consequentialism and welfarism
Though one could support the idea of the long reflection while being certain about consequentialism and welfarism
Also, Beckstead seemed open to non-consequentialism in a recent talk at the SERI conference
Relatedly, I think many effective altruists put nontrivial weight on the idea that they should abide by certain deontological constraints/duties, and not simply because that might be a good decision procedure for implementing utilitarianism in practice
Maybe the same is true in relation to virtue ethics, but I’m not sure
I think the same is at least somewhat true with regards to the “past”-focused moral foundation Ord mentions
I find that framing emotionally resonant, but I don’t give it much weight
Jaan Tallinn seemed to indicate putting some weight on that framing in a recent FLI podcast episode (search “ancestors”)
On 3:
EA represents/has a tiny minority of all the people, money, political power, etc. in the world
The other people can block our actions, counter their effects, provide us support, become inspired to join us, etc.
How much of each of those things happen will have a huge influence on the amount of good we’re ultimately able to do
One implication is that what other people are thinking and why is very decision-relevant for us
Just as many other features of the world that don’t adopt an EA mindset (e.g., the European Union) could still be decision-relevant enough to warrant an entry
One could see speciesism as a more extreme version of this; that’s of course not in line with an impartial welfarist mindset, but impartial welfarists may be more effective if they know about speciesism
Another implication is that being able to talk to people in ways that connect to their own values, epistemologies, etc. (or show them resources that do this, e.g. parts of the Precipice) can be very valuable for advocacy purposes
I’ll respond quickly because I’m pressed with time.
I don’t think EA is fuzzy to the degree you seem to imply. I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
I don’t understand your point about moral uncertainty. You mention the fact that Will wrote a book about moral uncertainty, or the fact that Beckstead is open to non-consequentialism, as relevant in this context, but I don’t see their relevance. EA, in the sense captured by the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view. (Will uses the term ‘welfarism’, but I don’t think he is using it in a moral sense, since he states explicitly that his definition is non-normative.) (ADDED: there is one type of moral uncertainty that is relevant for EA, namely uncertainty about population axiology, because it concerns the class of beings whom EA is committed to helping, at least if we interpret ‘others’ in “helping others effectively” as “whichever beings count morally”. Relatedly, uncertainty about what counts as a person’s wellbeing is also relevant, at least if we interpret ‘helping’ in “helping others effectively” as “improving their wellbeing”. So it would be incorrect to say that EA has no moral commitments; still, it is not committed to any particular moral theory.)
I agree it often makes sense to frame our concerns in terms of reasons that make sense to our target audience, but I don’t see that as the role of the EA Wiki. Instead, as noted above, one key way in which the EA Wiki can add value is by articulating the distinctively EA perspective on the topic of interest. If I consult a Christian encyclopedia, or a libertarian encyclopedia, I want the entries to describe the reasons Christians and libertarians have for holding the views that they do, rather than the reasons they expect to be most persuasive to their readers.
I think you make some good points, and that my earlier comment was a bit off. But I still basically think it should be fine for the EA Wiki to include articles on how moral perspectives different from the main ones in EA intersect with EA issues.
---
Yeah, I think the core of EA is something like what you described, but also that EA is fuzzy and includes a bunch of things outside that core. I think the “core” of EA, as I see it, also doesn’t include anti-ageing work, and maybe doesn’t include a concern for suffering subroutines, but the Wiki covers those things and I think that it’s good that it does so.
(I do think a notable difference between that and the other moral perspectives is that one could arrive at those focus areas while having a focus on “helping others”. But my basic point here is that the core of EA isn’t the whole of EA and isn’t all that EA Wiki should cover.)
Going back to “the EA Wiki should focus solely on considerations relevant from an EA perspective”, I think that that’s a good principle but that those considerations aren’t limited to “the core of EA”.
---
Was the word “not” meant to be in there? Or did you mean to say the opposite?
If the “not” is intended, then this seems to clash with you saying that discussion from an EA perspective would omit moral perspectives focused on the past, civilizational virtue, or cosmic significance? If discussion from an EA perspective would omit those things, then that implies that the EA perspective is committed to some set of moral views that excludes those things.
Maybe you’re just saying that EA could be open to certain non-consequentialist views, but not so open that it includes those 3 things from Ord’s book? (Btw, I do now recognise that I made a mistake in my previous comment—I wrote as if “helping others” meant the focus must be welfarist and impartial, which is incorrect.)
---
I think moral uncertainty is relevant inasmuch as a big part of the spirit of EA is trying to do good, whatever that turns out to mean. And I think we aren’t in a position to rule out perspectives that don’t even focus on “helping others”, including virtue-ethical perspectives or cosmic significance perspectives.
I don’t think I’d want the cosmic significance thing to get its own wiki entry, but it seems fair for it to be something like 1 of 4 perspectives that a single entry covers, and in reality emphasised much less in that entry than 1 of the other 3 (the present-focused perspective), especially if that entry is applying these perspectives to a topic many EAs care about anyway.
---
Your point 3 sounds right to me. I think I should retract the “advocacy”-focused part of my previous comment.
But I think the “understanding these other actors” part still seems to me like a good reason to include entries on things along the lines of moral views that might be pretty foreign to EA (e.g., speciesism or the 3 not-really-helping-others perspectives Ord mentions).
---
Also, I just checked the 2019 EA survey, and apparently 70% of respondents identified with “consequentialism (utilitarian)”, but 30% didn’t, including some people identifying with virtue ethics or deontology. But I’m not sure how relevant that is, given that they might have flavours of virtue ethics or deontology that are still quite distinct from the related perspectives Ord mentions.
---
(Apologies if the amount I’ve written gave a vibe of me trying to batter you into giving up or something—it’s more just that it’d take me longer to be concise.)
(Typing from my phone; apologies for any typos.)
Thanks for the reply. There are a bunch of interesting questions I’d like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread, on reflection I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even though we may not agree on the definition of EA, the relevance of moral uncertainty, or other issues, luckily that doesn’t turn out to be an obstacle to agreeing on this particular issue.
Perhaps the article should be simply called arguments for existential risk prioritization and cover all the relevant arguments, including longtermist arguments, and we could in addition have a longer discussion of the latter in a separate article, though I don’t have strong views on this. (As it happens, I have a document briefly describing about 10 such arguments that I wrote many years ago, which I could send if you are interested. I probably won’t be able to work on the article within the next few weeks, though I think I will have time to contribute later.)
Ok, I’ve gone ahead and made the tag, currently with the name Moral perspectives on existential risk reduction. I’m still unsure what the ideal scope and name would be, and have left a long comment on the Discussion page, so we can continue adjusting that later.
Great, I like the name.
Longtermism (Cause Area)
We have various tags relevant to longtermism or specific things that longtermists are often interested in (e.g., Existential Risk). But we don’t have a tag for longtermism as a whole. Longtermism (Philosophy) and Long-Term Future don’t fit that bill; the former is just for “posts about philosophical matters relevant to longtermism”, and the latter is “meant for discussion of what the long-term future might actually look like”.
One example of a post that’s relevant to longtermism as a cause area but that doesn’t seem to neatly fit in any of the existing longtermism-related tags is Should marginal longtermist donations support fundamental or intervention research? An analogous post that was focused on global health & dev or mental health could be given the tags that cover those cause areas, and one focused on animal welfare could be given the Farm Animal Welfare and Wild Animal Welfare tags (which seem to me to together fill the role of a tag for that whole cause area).
Agreed. Perhaps Longtermism (Philosophy) is redundant because it could be Longtermism (Cause Area) + Moral Philosophy - if so, I’d suggest changing the name instead of opening a new tag
Hmm, I think I’d agree that most things which fit in both Longtermism (Cause Area) and Moral Philosophy would fit Longtermism (Philosophy). (Though there might be exceptions. E.g., I’m not sure stuff to do with moral patienthood/status/circles would be an ideal fit for Longtermism (Philosophy)—it’s relevant to longtermism, but not uniquely or especially relevant to longtermism. But those things tie in to potential longtermist interventions.)
But now that you mention that, I realise that there might not be a good way to find and share posts at the intersection of two tags (which would mean that tags which are theoretically redundant are currently still practically useful). I’ve just sent the EA Forum team the following message about this:
So I’ll hold off on making a Longtermism (Cause Area) tag or converting the Longtermism (Philosophy) tag into that until I hear back from the Forum team, and/or think more or get more input on what the best approach here would be.
👍
Change My View!
I found r/ChangeMyView recently and I think it’s the bee’s knees. “A place to post an opinion you accept may be flawed, in an effort to understand other perspectives on the issue.”
There are already a good number of questions and posts inviting criticism on this forum, and this tag could organize them all for the people who enjoy a good, clean disagreement/discussion. It could be used especially (or only) for ideas with <50% certainty.
The subreddit itself is a cool place to go, but many issues are more fruitfully discussed among fellow EAs, or would just work better on the EA Forum.
I’m happy to learn if Change My View is actually not a good format for discussion—I just found out about it, so no harm done.
(Update: I’ve now made this tag.)
Law
Some posts this could cover:
Introducing the Legal Priorities Project
Various posts tagged International Relations, Global Governance, AI Governance (e.g., posts by GovAI and/or Cullen O’Keefe), Policy Change, Improving Institutional Decision-Making, or European Union
Arguments for:
I have a sense it could be useful to have a tag for each major field/discipline that many EAs are from and/or that is relevant to many EA areas.
The key reason is that this could maybe help people find posts relevant to their backgrounds, and think about ways they can use their backgrounds to advance EA causes.
I think Law would qualify here
I think this is the same reason why there are Facebook groups for the intersections of EA and various disciplines
And there is indeed a decently sized (153 people) FB group for Effective Altruism & Law
There are already tags for History, EA Psychology, EA Philosophy, and probably a few other areas
And somewhat analogously, Operations, Entrepreneurship, and Earning to Give
I also suggested a tag for Economics but haven’t made it yet.
I expect there are at least 10 highly relevant posts, and that they could be readily found by going through the tags mentioned above
Arguments against:
Overlaps with the tags mentioned above
Economics
The Economics tag would be for posts focusing on topics in the domain of economics, making particularly heavy use of concepts or tools from economics, or highlighting ways for people with economics backgrounds to do good.
Some posts that would fit:
An introduction to global priorities research for economists
Posts with the Economic Growth tag
Maybe some posts with the Global Health and Development and Statistical Methods tags
Probably a lot of other posts, but not merely any post that uses terms like “externalities”—hence “particularly heavy use of concepts or tools from economics”
Arguments against this tag:
Overlaps with/subsumes Economic Growth
Overlaps a bit with Global Health and Development and maybe Statistical Methods
Maybe too broad a category, as economics concepts and tools are used at least a bit in a lot of EA stuff
Maybe it wouldn’t make sense to have this tag without having a tag for each of a large array of other fields/professions (e.g., Biology)
But maybe we should have a tag for each of a large array of other fields/professions?
And we already have tags for some, e.g. EA Psychology
Arguments for:
I imagine it could be nice for someone with an economics background to have an easily accessible collection of posts that are especially relevant to their background and that highlight ways for them to contribute
Analogy to the History tag, the Economics of Doing Good (Effective Altruism) Facebook group, the History and Effective Altruism Facebook group, and various other groups
Yeah, this would work. A general econ tag could focus on values other than economic growth, including equity and preventing hyperinflation.
How about a tag for global governance and/or providing global public goods? This is arguably one of the most pressing problems there is, because many of the problems EA works on are global coordination problems, including existential risk (since existential security is a global public good).
I’d agree that a tag for Global Governance would be good (thanks for suggesting it!). This could cover things like:
how much various moves towards more global governance would help with existential risks and other global and/or transgenerational public goods issues
E.g., these two posts
how much various moves towards more global governance could increase risks of totalitarianism
how to best implement or prevent various moves towards global governance
etc.
Personally, I don’t see much value in a tag for something like providing global public goods. This is partly because that matter is common to so many different EA issues. Relatedly, I don’t think many posts are especially focused on global public goods provision, relative to a huge portion of other posts. But that’s just my tentative two cents.
If no one suggests otherwise or does it themselves, I’ll probably create a Global Governance tag in a couple days.
Update: I’ve now made this tag.
Please separate global development from global health.
Global health is one part of global development, which can include political, economic and humanitarian interventions. I write on politics in developing countries, but I’m probably the only one on the forum so I don’t need my own tag.
I agree this should be separated. I’ve made a note to split the articles (and rearrange the content/tags accordingly).
Cluelessness
Arguments against:
Perhaps somewhat niche?
My current independent impression is that cluelessness, or some of the ideas or implications that people associate with it, is a confused and not especially useful idea, and that we shouldn’t really worry about it
(I definitely think we’re very uncertain about a lot of things and should take that very seriously, but that doesn’t require the term “cluelessness”)
(See also)
Arguments for:
Many smart longtermist and/or philosophically minded EAs seem to think cluelessness is a really important idea, and I think some non-EA philosophers write about the idea as well. The outside view says there’s a good chance that they’re right and I’m wrong.
In any case, given that many people talk and think about cluelessness, it might be useful to make it easier for people to find posts about the idea, and then ideally those posts would help their beliefs converge on the truth (which may or may not be that the concept isn’t useful).
Something like Crisis response
Posts that would get this tag:
https://forum.effectivealtruism.org/posts/iW7LjvuSmDEGF5nei/ea-resilience-to-catastrophes-and-allfed-s-case-study
https://forum.effectivealtruism.org/posts/zjMeGcgWpvDcm3CkH/why-short-range-forecasting-can-be-useful-for-longtermism#Variant_2__Combination_with_a_crisis_response_unit
I think some posts in https://forum.effectivealtruism.org/s/dr4yccYeH6Zs7J82s
Update: Someone else seemingly independently created a tag with basically the same scope: https://forum.effectivealtruism.org/tag/emergency-response-teams/
READI Research
https://www.readiresearch.org/
My guess is that this org/collective/group doesn’t (yet) meet the EA Wiki’s implicit notability or number-of-posts-that-would-be-tagged standards, but I’m not confident about that.
Here are some posts that would be given this tag if the tag was worth making:
https://forum.effectivealtruism.org/posts/RhkbNcA729QiZPm5W/phd-scholarships-for-ea-projects-in-psychology-health-iidm
https://forum.effectivealtruism.org/posts/LSbpgFbhRtsfaBDrL/announcing-and-seeking-feedback-on-the-readi-philanthropy
https://forum.effectivealtruism.org/posts/Sy7swEetrtcK2C7q6/research-summary-a-meta-review-of-interventions-that
https://forum.effectivealtruism.org/posts/WTpwDW8dHWzfiaumx/review-what-works-to-promote-charitable-donations
https://forum.effectivealtruism.org/posts/bZKLYFrDp7gvMjfYc/emily-grundy-australians-perceptions-of-global-catastrophic
https://forum.effectivealtruism.org/posts/pwnvva4AHfxYZmaGM/ea-infrastructure-fund-may-august-2021-grant-recommendations
maybe https://forum.effectivealtruism.org/posts/MxXQ2bbL6KPrHDPtz/intervention-options-for-improving-the-ea-aligned-research
Tags for some local groups / university groups
I’d guess it would in theory be worth having tags for EA Cambridge and maybe some other uni/local groups like EA Oxford or Stanford EA. I have in mind groups that are especially “notable” in terms of level and impact of their activities and whether their activities are distinct/novel and potentially worth replicating. E.g., EA Cambridge’s seminar programs seem to me like an innovation other groups should perhaps consider adopting a version of, and with more confidence they seem like a good example of a certain kind of innovative thinking in local group organising.
But I think it would probably be bad if this opened the floodgates to huge numbers of tags for local/uni groups.
So I’m not sure if the best move is to have no such tags, use our best but somewhat “conservative”/”deletionist” judgement on which such tags to create, or try to come up with criteria that are more objective than my comments above and then follow that.
(Also note that I’m no expert on local/university EA groups, my comments above are tentative, and my knowledge of which groups are doing which seemingly cool things is somewhat haphazardly arrived at.)
EDIT:
There’s a tag for EA Israel
There was some previous, somewhat relevant discussion here and here
Diplomacy
Might overlap too much with things like international relations and international organizations?
Would partly be about diplomacy as a career path.
Probably worth it, if there are enough relevant posts and/or if there’s discussion here or elsewhere about diplomacy as a career path.
Open society
The ideal of an open society—a society with high levels of democracy and openness—is related to many EA causes and policy goals. For example, open societies are associated with long-run economic growth, and an open society is conducive to the “long reflection.” This tag could host discussion about the value of open societies, the meaning of openness, and how to protect and expand open societies.
I agree that the concept of an open society as you characterize it has a clear connection to EA. My sense is that the term is commonly used to describe something more specific, closely linked to the ideas of Karl Popper and the foundations of George Soros (Popper’s “disciple”), in which case the argument for adding a Wiki entry would weaken. Is my sense correct? I quickly checked the Wikipedia article, which broadly confirmed my impression, but I haven’t done any other research.
Yes, I think your sense is correct.
Yeah, maybe something broader like “democracy” or “liberal democracy.” Perhaps we could rename the “direct democracy” tag to “democracy”?
The direct democracy tag is meant for investments in creating specific kinds of change through the democratic process. But people are using it for other things now anyway—probably it’s good to have a “ballot initiatives” tag and rename this tag to “democracy” or something else. Good catch!
Here’s what I did:
I renamed direct democracy to ballot initiative.
I added two new entries: democracy and safeguarding liberal democracy. The first covers any posts related to democracy, while the second covers specifically posts about safeguarding liberal democracy as a potentially high-impact intervention.
I still need to do some tagging and add content to the new entries.
I agree. I’ll deal with this tomorrow (Thursday), unless anyone wants to take care of it.
I do see this concept as relevant to various EA issues for the reasons you’ve described, and I think high-quality content covering “the value of open societies, the meaning of openness, and how to protect and expand open societies” would be valuable. But I can’t immediately recall any Forum posts that do cover those topics explicitly. Do you know of posts that would warrant this tag?
If there aren’t yet posts that’d warrant this tag, then we have at least the following (not mutually exclusive) options:
This tag could be made later, once there are such posts
You could write a post on those topics yourself
An entry on those topics could be made
It’s ok to have entries that don’t have tagged posts
But it might be a bit odd for someone other than Pablo to jump to making an entry on a topic as one of the first pieces of EA writing on that topic?
Since wikis are meant to do things more like distilling existing work.
But I’m not sure.
This is related to the question of to what extent we should avoid “original research” on the EA Wiki, in the way Wikipedia avoids it
See also
Some other entry/tag could be made to cover similar ground
Update: I’ve now made this entry.
Alternative foods or resilient foods or something like that
A paragraph explaining what I mean (from Baum et al., 2016):
This is what ALLFED focus on, but other research or implementation work has been done on this topic separately from ALLFED (e.g.), and I currently think the topic is sufficiently plausibly important to get its own entry+tag rather than just being seen as just “that thing ALLFED does”.
Considerations regarding what to call this:
Alternative foods is the typical name, but sounds like it could mean alternative proteins or a bunch of other things
Apparently ALLFED are going to rebrand this as resilient foods, which does seem much clearer to me
But the fact that this term is currently not the standard term is a mark against it
I’m in favor. Very weak preference for alternative foods until resilient foods becomes at least somewhat standard.
I now feel that a number of unresolved issues related to the Wiki ultimately derive from the fact that tags and encyclopedia articles should not both be created in accordance with the same criterion. Specifically, it seems to me that a topic that is suitable for a tag is sometimes too specific to be a suitable topic for an article.
I wonder if this problem could be solved, or at least reduced, by allowing article section headings to also serve as tags. I think this would probably be most helpful for articles that cover particular disciplines, such as psychology or computer science. Here it seems that it makes most sense to have a single article covering each discipline, yet multiple tags discussing different aspects of the discipline, such as research on that discipline, careers in that discipline, or applications of that discipline. Currently we take a hybrid approach, sometimes having entries for the discipline as a whole and sometimes for specific aspects of it.
Another advantage of allowing article sections to be used as tags is that some tags are currently associated with a very large number of posts. This suggests that a more fine-grained taxonomy of tags would organize the contents of the Forum better, and allow users to find the material they want more easily.
A complication is that not all section headings will be suitable for tags. This issue could be solved in various ways. For example, the search field that opens when the user clicks on ‘Add tag’ could by default only show the tags corresponding to article titles, just as it does currently. However, the user could be given the choice of expanding the tag to display the corresponding headings, and allow them to select among any of these. Perhaps headings already selected as tags by previous users could be shown by default in future searches.
I’m not particularly confident that this is a good idea. But it does seem like something at least worth discussing further.
These are reasonable concerns, but adding hundreds of additional tags and applying them across relevant posts seems like it will take a lot of time.
As a way to save time and reduce the need for new tags, how many of your use cases do you think would be covered if multi-tag filtering was supported? That is, someone could search for posts with both the “psychology” and “career choice” tags and see posts about careers in psychology. This lets people create their own “fine-grained taxonomy” without so many tags needing to have a bunch of sub-tags.
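To make that concrete, here’s a minimal sketch of what multi-tag filtering amounts to. The Post shape and filterByTags helper are hypothetical illustrations for this comment, not the Forum’s actual data model or API:

```typescript
// Hypothetical post shape, for illustration only.
interface Post {
  title: string;
  tags: string[];
}

// Keep only the posts that carry every one of the requested tags.
function filterByTags(posts: Post[], required: string[]): Post[] {
  return posts.filter(post => required.every(tag => post.tags.includes(tag)));
}

// Example: find posts about careers in psychology.
const posts: Post[] = [
  { title: "Testing fit for clinical psychology", tags: ["psychology", "career choice"] },
  { title: "Scope neglect in charitable giving", tags: ["psychology"] },
];
console.log(filterByTags(posts, ["psychology", "career choice"]).map(p => p.title));
// -> ["Testing fit for clinical psychology"]
```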
I think something along these lines feels promising, but I feel a bit unsure precisely what you have in mind. In particular, how will users find all posts tagged with an article section heading tag? Would there still be a page for (say) social psychology like there is for psychology, and then it’s just clear somehow that this page is a subsidiary tag of a larger tag?
Inspired by that question, I think maybe a more promising variant (or maybe it’s what you already had in mind) is for some article section headings to be hyperlinked to a page whose title is the other page’s section heading and whose content is that section from the other page, below which are shown all the posts with that section-heading tag. Then if a user edits the section or the “section’s own page”, the edit automatically occurs in the other place as well.
And from “the section’s own page” there’s something at the top that makes it clear that this entry is a subsidiary entry of a larger entry and people can click through to get back to the larger one. Maybe the “something at the top” would look vaguely like the headers of posts that are in sequences? Maybe then you could even, like with sequences, click an arrow to the right or left to go to the page corresponding to the previous or following section of the overarching entry?
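If it helps, here’s a rough sketch of the kind of data model I have in mind, with all names hypothetical (this is not how the Forum actually stores entries). The key design choice is that each section body is stored once, so an edit made from either page shows up in both places:

```typescript
// Hypothetical shapes; not how the Forum actually stores wiki entries.
interface Section {
  heading: string;   // e.g. "Careers in psychology"
  body: string;      // canonical text, shared by the entry page and the section's own page
  isTag: boolean;    // only some headings are promoted to tags
}

interface Entry {
  title: string;     // e.g. "Psychology"
  sections: Section[];
}

// Render the overarching entry: the title, then every section in order.
function renderEntry(entry: Entry): string {
  return [entry.title, ...entry.sections.map(s => `${s.heading}\n${s.body}`)].join("\n\n");
}

// Render a section's own page, with a pointer back to the parent entry at the top.
function renderSectionPage(entry: Entry, heading: string): string | undefined {
  const section = entry.sections.find(s => s.heading === heading && s.isTag);
  if (!section) return undefined;
  return `${section.heading} (part of ${entry.title})\n\n${section.body}`;
}
```

Because renderEntry and renderSectionPage both read the same body field, the automatic edit-syncing described above would come for free.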
Stepping back, this seems like just one example of a way we could move towards more explicitly having a nested hierarchy of entries where the different layers are in some ways linked together. I imagine there are other ways to do that too, though I haven’t brainstormed any yet.
I am considering turning a bunch of relevant lists into Wiki entries. Wikipedia allows for lists of this sort (see e.g. the list of utilitarians) and some (e.g. Julia Wise) have remarked that they find lists quite useful. The idea occurred to me after a friend suggested a few courses I may want to add to my list of effective altruism syllabi. It now seems to me that the Wiki might be a better place to collect this sort of information than some random blog. Thoughts?
Quick thoughts:
I think more lists/collections would be good
I think it’s better if they’re accessible via the Forum search function than if they’re elsewhere
I think it’s probably better if they’re EA wiki entries than EA Forum posts or shortforms because that makes it easier for them to be collaboratively built up
And this seems more important for and appropriate to a list than an average post
Posts are often much more like a particular author’s perspective, so editing beyond copyediting would typically be a bit odd (that said, a function for making suggestions could be cool—but that’s tangential to the main topic here)
I don’t think I see any other advantage of these lists being wiki entries rather than posts or shortforms
I think the only disadvantages of these lists being wiki entries are that we might end up with too many random or messy lists that have an air of official-ness, or that the original list creator gets less credit for their contributions (their name isn’t attached to the list)
But the former disadvantage can apply to entries in general, and so we already need sufficient policies, other editors, etc. to solve it, so it doesn’t seem a big deal for lists specifically
And the latter disadvantage can also apply to entries in general and so will hopefully be partially solved by things like edit counters, edit karma, “badges”, or the like
So overall this seems worth doing
Less important:
Various “collections” on my own shortform might be worth making into such entries
Though I think actually most of them are better fits for the bibliography pages of existing entries
(And ~ a month ago I added a link to those collections, or to all relevant items from the collections, to the associated entries that existed at the time)
Something like regulation
Intended to capture discussion of the Brussels effect, the California effect, and other ways regulation could be used for or affect things EAs care about.
Would overlap substantially with the entries on policy change and the European Union, as well as some other entries, but could perhaps be worth having anyway.
Update: I’ve now made this entry.
software engineering
Some relevant posts:
https://forum.effectivealtruism.org/posts/ChXZ2SZGaAzRqLM6D/ama-jp-addison-and-sam-deere-on-software-engineering-at-cea
https://forum.effectivealtruism.org/posts/bud2ssJLQ33pSemKH/my-current-impressions-on-career-choice-for-longtermists#Software_engineering_aptitude
https://forum.effectivealtruism.org/posts/Ejfaog2t72szuMFrp/conversation-on-forecasting-with-vaniver-and-ozzie-gooen#Importance_of_software_engineering_vs__other_kinds_of_infrastructure_
Related entries
artificial intelligence | public interest technology | SparkWave
[Though I think there was a discussion about how often we should include org tags in Related entries, and I can’t remember what was said, so maybe SparkWave should be excluded.]
Looks good to me.
Vetting constraints
Maybe this wouldn’t add sufficient value to be worth having, given that we already have scalably using labour and talent vs. funding constraints.
I think there should definitely be a place for discussing vetting constraints. My only uncertainty is whether this should be done in a separate article and, if so, whether talent vs. funding constraints should be split. Conditional on having an article on vetting constraints, it looks to me that we should also have articles on talent constraints and funding constraints. Alternatively, we could have a single article discussing all of these constraints.
I think I agree that we should either have three separate entries or one entry covering all three. I’m not sure which of those I lean towards, but maybe very weakly towards the latter?
Just discovered Vaidehi made a collection of discussions of constraints in EA, which could be helpful for populating whatever entries get created and maybe for deciding on scopes etc.
Mmh, upon looking at Vaidehi’s list more closely, it now seems to me that we should have a single article: people have proposed various other constraints besides the three mentioned, and I don’t think it would make sense to have separate articles for each of these, or to have an additional article for “other constraints”. So I propose renaming talent vs. funding constraints to constraints in effective altruism. Thoughts?
I think that that probably makes sense.
Done. (Though I used the name constraints on effective altruism, which seemed more accurate. I don’t have strong views on whether the preposition should be ‘in’ or ‘on’, however, so feel free to change it.)
The article should be substantially revised (it was imported from EA Concepts), I think, but at least its scope is now better defined.
Great. Let’s have three articles then. Feel free to split the existing one, otherwise I’ll do that tomorrow. [I know you like this kind of framing. ;) ]
Vetting constraints dovetails nicely with talent vs. funding constraints. I’m not totally convinced by the scalably using labour entry, though. One possibility would be to just replace it by a vetting constraints entry. Alternatively, it could be retained but renamed/reconceptualised.
Yeah, scalably using labor just doesn’t strike me as a natural topic for a Wiki entry, though I’m not sure exactly why. Maybe it’s because it looks like the topic was generated by considering an interesting question—”how should the EA community allocate its talent?”—and creating an entry around it, rather than by focusing on an existing field or concept.
I’d be weakly in favor of merging it with vetting constraints.
I’m currently in favour of keeping scalably using labour, though I also made the entry so this shouldn’t be much of an update (it’s not like a “second vote”, just a repeat of the first vote after hearing the new arguments).
One consideration I’d add is that maybe it’s a more natural topic for a tag than a wiki entry? It seems to me like having a tag for posts relevant to a (sufficiently) interesting and recurring question makes sense?
Fwiw, I think that “scalably using labour” doesn’t sound quite like a wiki entry. I find virtually no article titles including the term “using” on Wikipedia.
If one wants to retain the concept, I think that “Large-scale use of labour” or something similar would be better. There are many Wikipedia article titles including the term “use of [noun]”. (Potentially nouns are generally better than verbs in Wikipedia article titles? Not sure.)
Intelligence assessment or Intelligence (military and strategy) or Intelligence agencies or Intelligence community or Intelligence or something
I don’t really like any of those specific names. The first is what Wikipedia uses, but sounds 100% like it means IQ tests and similar. The second is my attempt to put a disambiguation in the name itself. The third and fourth are both too narrow, really—I’d want the entry to not just be about the agencies or community but also about the type of activity they undertake. The fifth is clearly even more ambiguous than the first, but is also the term that’s most commonly used, I think—it’s usually just clear from context, which doesn’t work as well for an entry/tag.
(Edit: I’ve now made this entry.)
Independent research
Proposed text:
Uncertainties about that text:
Should collaborative research definitely (rather than arguably) count as independent research? Should it definitely not count?
Should research conducted by people who are employed, but with the research being separate from their employment, count?
I think clearly yes if they’re e.g. employed as a bank clerk but doing AI safety research in their free time.
Not sure for e.g. the research I do that’s not exactly Rethink Priorities or FHI research, but that one or both orgs are still happy exists.
Examples of posts that would warrant this tag:
Various LTFF payout reports
Some text from the latest LTFF report that could be drawn on when discussing advantages and disadvantages within this entry:
Looks good, thanks!
Edit: I’ve now made this entry.
Longtermist Entrepreneurship Fellowship
I think this is only mentioned in three Forum posts so far[1], and I’m not sure how many (if any) would be added in future.
It’s also mentioned in this short Open Phil page: https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-jade-leung
I’m also not sure if the name is fully settled—different links seem to use different names, or to not even use a capitalised name.
[1] https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations#Longtermist_Entrepreneurship_Fellowship
https://forum.effectivealtruism.org/posts/6x2MjPXhpPpnatJFQ/some-promising-career-ideas-beyond-80-000-hours-priority#Nonprofit_entrepreneurship
https://forum.effectivealtruism.org/posts/SppupBEiPCAYA5nLW/cea-s-2020-annual-review#Internal
I’m in favor, though there’s so little public information at this stage that inevitably the entry won’t have any substantive content for the time being.
Weapons of mass destruction
Related entries
anthropogenic existential risk | armed conflict | biosecurity | global governance | Nuclear Threat Initiative | nuclear warfare | peace and conflict studies | terrorism
Looks good.
Cool—given that, I’ve now made this (though without adding body text or tagging things, for time reasons).
Make entries for many of the concepts featured on Conceptually?
I read the content on that site in 2019 and found it useful. I haven’t looked through what concepts are on there to see which ones we already have and which ones might be worth adding, but I expect it’d be useful for someone to do so. So I’m noting it here in case someone else can do that (that’d be my preferred outcome!), or to remind myself to do it in a while if I have time.
I like Conceptually, and during my early research I went through their list of concepts one by one, to decide which should be covered by the EA Wiki, though I may have missed some relevant entries. Thoughts on which ones we should include that aren’t already articles or listed in our list of projected entries?
Update: I’ve now made this entry.
Fermi estimation or Fermi estimates
Overlaps with some other things in the Decision Theory and Rationality cluster of the Tags Portal.
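(For anyone unfamiliar with the technique, here’s the classic worked example, using deliberately rough round numbers of my own rather than anything from this thread: estimating the number of piano tuners in Chicago.)

$$
\frac{3 \times 10^{6} \text{ people} \times \frac{1 \text{ piano}}{100 \text{ people}} \times \frac{1 \text{ tuning}}{\text{piano} \cdot \text{year}}}{1{,}000 \text{ tunings per tuner per year}} \approx 30 \text{ tuners}
$$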
I agree that this should be added. I weakly prefer ‘Fermi estimation’.
Demandingness objection
I’d guess there are at least a few Forum posts quite relevant to this, and having a place to collect them seems nice, but I could be wrong about either of those points.
I agree it’s relevant. But we already have an article: demandingness of morality.
(It’s likely you haven’t seen it because many of these articles were Wiki-only until very recently.)
Yeah, I just spotted that and the fact I had a new notification at the same time, and hoped it was anything other than a reply here so I could delete my shamefully redundant suggestion before anyone spotted it :D
(I think what happened is that I used command+f on the tags portal before the page had properly loaded, or something.)
Antimicrobial resistance or Antibiotic resistance
Not sure enough EAs care about this and/or have written about this on the Forum for it to warrant an entry/tag?
(I don’t personally have much interest in this topic, but I’m just one person.)
A couple relevant posts I stumbled upon:
https://forum.effectivealtruism.org/posts/8ERp3GbQ54Fw8ehuQ/antibiotic-resistance-and-meat-why-we-should-be-careful-in
https://forum.effectivealtruism.org/posts/2qXfME3Rrcd7mdnMr/
Update: I’ve now made this tag.
Something like Bayesianism
Arguments against having this entry/tag:
Maybe the topic is sufficiently covered by the entries on Epistemology and on Decision theory?
Yeah, perhaps name it Bayesian reasoning or Bayesian epistemology?
Cognitive biases/Cognitive bias, and/or entries for various specific cognitive biases (e.g. Scope neglect)
I feel unsure whether we should aim to have just a handful of entries for large categories of biases, vs one entry for each of the most relevant biases (even if this means having 5+ or 10+ entries of this type)
My sense is that it would be desirable to have both an overview article about cognitive bias, discussing the phenomenon in general (e.g. the degree to which humans can overcome cognitive bias, the debate over how desirable it is to overcome them, etc.) as well as articles about specific instances of it.
I think you mean it’d be desirable to have both a general article on cognitive bias and one article each for various specific instances of it?
Rather than having just one general article that covers both the topic as a whole and specific instances of it?
Given my assumed interpretation of what you meant, I’ve now made an entry for Cognitive biases and another for Scope neglect. People could later add more, or delete some, or whatever.
(I’ve now copied the content of this thread to the Discussion page on the Cognitive biases entry. If you or others would like to reply, please do so there.)
Update: I’ve now made this entry.
Instrumental vs. epistemic rationality
Some brief discussion here.
These terms may basically only be used in the LessWrong community, and may not be prominent or useful enough to warrant an entry here. Not sure.
I think this would be useful to have.
Metaethical uncertainty and/or Metanormative uncertainty
These concepts are explained here.
I think it’s probably best to instead have an entry on “Normative uncertainty” in general that has sections for each of those concepts, as well as sections that briefly describe (regular) Moral uncertainty and Decision-theoretic uncertainty and link to the existing tags on those concepts. (Also, the entry on Moral uncertainty could discuss the question of how to behave when uncertain what approach to moral uncertainty is best, which is metanormative uncertainty.) This is because I think there are relatively few posts specifically on Metaethical and Metanormative uncertainty, and some of the posts that do exist are also relevant to other types of normative uncertainty in a broad sense.
But it’s possible that “Normative uncertainty” is best defined as uncertainty just about regular normative ethics, such that it shouldn’t be seen as covering metaethical and metanormative uncertainty. And it’s also possible that, in any case, those concepts are important enough to warrant their own entries.
Subjective vs. objective normativity
See here and here.
Update: I’ve now made this entry.
Disentanglement research
Defined here: https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy
Off the top of my head, I’m not sure how many posts would get this tag. But I know at least that one would, and I’d guess we’d find several more if we looked.
And in any case, this seems to be a useful concept that’s frequently invoked in the EA community, so having a short wiki entry on it might be good (even ignoring tagging).
Related entries:
https://forum.effectivealtruism.org/tag/scalably-involving-people
https://forum.effectivealtruism.org/tag/research-methods
ETA: I’ve just seen this post: How would you define “disentanglement research”? The existence of that post updates me towards slightly more confidence that this entry would be worth having. And the content in that post could be useful for this entry.
Another suggestion: Research distillation or Research debt or similar
We could have:
an entry for this and another for disentanglement research (with links between them)
one entry covering both
one entry that’s mainly on one topic but briefly mentions/links to the other
neither
What I have in mind is what’s discussed here: https://distill.pub/2017/research-debt/
Off the top of my head, I’m not sure how many posts would get this tag. But maybe some would?
And in any case, this seems to me to be a useful concept that’s sometimes invoked in the EA community, so having a short wiki entry on it might be good (even ignoring tagging). But I’m less confident I’ve heard this mentioned a lot in EA than I am with disentanglement research.
This is obviously very similar to the idea of a research summary. But I think that these terms and the Distill article add some value. And the research summary tag is currently only for research summaries, not for discussion of the value of or best practices for distilling research or making summaries.
Related entries:
https://forum.effectivealtruism.org/tag/scalably-involving-people
https://forum.effectivealtruism.org/tag/research-methods
Tag portal question/suggestion:
Many tags are probably relevant for more than one of the categories/clusters used on the tag portal. For example, Economic growth is currently listed under global health & development, but it’s also relevant to Long-Term Risk and Flourishing and to Economics & Finance and probably some other things.
Currently, I think each tag is only shown in one place on the portal. That might be the best move.
But maybe they should instead be mentioned in every place where they’re (highly) relevant, and where people might expect to find them? E.g., if I checked Economics & Finance and saw no tag for Economic growth, I might assume that that tag doesn’t exist and so try to make it.
Maybe there’s some elegant third option to handle this?
Crypto or something like that
Some EAs are working on or interested in things like crypto and blockchain, either as investment opportunities or as tools that might be useful for accomplishing things EAs care about (e.g., mechanism design, solving coordination problems). Maybe there should be a tag for posts relevant to such things. I’d guess that there are at least 3 relevant Forum posts, though I haven’t checked.
There are also at least two 80,000 Hours episodes that I think are relevant:
Vitalik Buterin on better ways to fund public goods, the blockchain’s failures so far, & how it could yet change the world
Radical institutional reforms that make capitalism & democracy work better, and how to get them
I would prefer Blockchain, as it is more general than cryptocurrency and doesn’t confuse people with the field of cryptology
Good points. I’ve now created the tag and used the name Blockchain.
Reasonable, given the generality, though I think the cryptography ship has long, long since sailed.
😢
Seems good. Maybe we should crosspost one of the recent articles on Sam Bankman-Fried.
I’ve now created the tag. Feel free to make those crossposts and give them the tag, of course :)
(I won’t do it myself, as I have little knowledge about or personal interest in blockchain stuff myself.)
Update: I’ve now made this entry.
Non-Humans and the Long-Term Future
Why I propose this:
The following sorts of topics come up decently often:
Is longtermism focused only on humans?
Should longtermists focus on improving wellbeing for animals?
Should longtermists focus on improving wellbeing for artificial sentiences?
Are existential risks just about humans?
Topics like “Will most moral patients in the long-term future be humans? Other animals? Something else? By how large a margin?” also come up sometimes (though less often)
I think it’d be good to collect posts relevant to those things
Examples of posts that would warrant this tag:
https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals
https://forum.effectivealtruism.org/posts/XrRGKSvntGCZAajGk/longtermism-and-animal-advocacy
https://forum.effectivealtruism.org/posts/cEqBEeNrhKzDp25fH/the-importance-of-artificial-sentience
https://forum.effectivealtruism.org/posts/ocmEFL2uDSMzvwL8P/possible-misconceptions-about-strong-longtermism
Alternative tag name options:
Non-Humans and the Far Future
Longtermism and Non-Humans
Update: I’ve now made this entry.
Positive futures (or Utopias, or Ideal futures, or something like that)
Proposed description:
Or maybe the “reasons to care about this topic” part is too long/my-own-opinion-y to include in a tag description?
Here’s an example of a post that would warrant this tag: Characterising utopia. I think many/most posts tagged Fun Theory on LessWrong would also fit this tag.
There are two people I would’ve sent this tag page to this week, if it had existed and been populated with a few posts, and I think their upcoming work may warrant this tag. This is what prompted me to suggest it.
A bit more on my thinking on this, from a shortform post of mine:
Fermi Paradox
Arguments for having this tag:
Seems a potentially very important macrostrategy question
There are at least some posts relevant to it
Arguments against:
Not sure if there are more than a few posts highly relevant to this
Maybe this is not a prominent enough topic to get its own tag, rather than just being subsumed under the Space and Global Priorities Research tags
This is currently a wiki-only tag. I doubt many posts are relevant to this, and I suspect that “Space” should work for all of them, but we’re still in the process of figuring out how useful a tag has to be to be worth adding to the tagging menu.
EA fellowships
I think it might be useful to have a post on EA fellowships, meaning things like the EA Virtual Programs, which “are opportunities to engage intensively with the ideas of effective altruism through weekly readings and small group discussions over the course of eight weeks. These programs are open to anyone regardless of timezone, career stage, or anything else.” (And not meaning things like summer research fellowships, for which there’s the Research Training Programs tag.)
I think this’d be a subset of the Event strategy tag.
But I’m not sure if there are enough posts that are highly relevant to EA fellowships for it to be worth having this tag in addition to the Event strategy tag. And maybe a somewhat different scope would be better (i.e., maybe something else should be bundled in with this).
Update: I’ve now made this tag.
ITN
Proposed description:
I think this tag would be entirely a subset of Cause Prioritization, but it seems like an important subset with a bunch of posts in it, and one which people might sometimes want to seek out specifically.
Two posts that would fit in this tag, and that contain links to a bunch of other posts that’d fit, are [WIP] Summary Review of ITN Critiques and Factors other than ITN?
These posts would also fit:
https://forum.effectivealtruism.org/posts/Eav7tedvX96Gk2uKE/the-itn-framework-cost-effectiveness-and-cause
https://forum.effectivealtruism.org/posts/fR55cjoph2wwiSk8R/formalizing-the-cause-prioritization-framework
https://forum.effectivealtruism.org/posts/9CWNenpN3xGoMZpoN/understanding-and-evaluating-ea-s-cause-prioritisation
https://forum.effectivealtruism.org/posts/5zSeorTbiKxdmzJq4/introducing-the-stock-issues-framework-the-int-framework-s
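For context: the ITN framework scores causes on importance (scale), tractability, and neglectedness. In the 80,000 Hours version, as I understand it, each factor is put on a logarithmic scale and the three scores are summed, which is equivalent to multiplying the underlying quantities. A toy sketch (the causes and scores are entirely made up):

```python
# Toy ITN-style comparison. Scores are hypothetical log-scale ratings,
# so summing them corresponds to multiplying the underlying quantities.

causes = {
    # cause: (importance, tractability, neglectedness)
    "Cause A": (8, 4, 6),
    "Cause B": (10, 2, 3),
}

for cause, (i, t, n) in causes.items():
    print(f"{cause}: total score = {i + t + n}")
# Cause A: total score = 18
# Cause B: total score = 15
```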
Advanced Military Technology (or some other related name)
Proposed description:
Other tags that this overlaps with include: AI Governance, Atomically Precise Manufacturing, Biosecurity, Space, Scientific Progress, and Nuclear Weapons.
I think that this tag would be entirely a subset of Armed Conflict, but in my view an important subset. I think it would be a superset of Autonomous Weapons. I don’t think it would be a subset or superset of any of the other tags I mentioned (as each of those areas can but won’t always be about advanced military technologies).
One post that would fit here but might not fit any of the others of those tags except Armed Conflict is What’s the big deal about hypersonic missiles?
I agree with whoever upvoted the other of the two tags you proposed that day, but not this one. I would want to see more posts that form a natural cluster around this concept. The one example is good, but I can’t recall any others.
Yeah, that makes sense. I’ll hold off unless I encounter additional relevant posts.
(Update: I’ve now made this tag.)
Impact Assessment (or maybe something like Impact Measurement or Measuring Impact)
Proposed rough description:
A handful of the many posts that this tag would fit:
Rethink Priorities Impact Survey
Should surveys about the quality/impact of research outputs be more common?
Rethink Priorities 2019 Impact and Strategy
2017 LEAN Impact Assessment: Qualitative Findings and other posts in that series
Lessons from a full-time community builder. Part 1 of 4. Impact assessment
How the Giving Games Project Tracks Its Impact
Maybe Asking for advice
Maybe Do research organisations make theory of change diagrams? Should they?
Maybe Effective Altruism Foundation: Plans for 2020 for the section on “Review of 2019”
In addition to the three tags mentioned as “See also”, this tag would perhaps overlap a bit with the tags:
Forecasting
Org Update
Cause Prioritization
Community Projects
Criticism (EA Cause Areas)
Criticism (EA Movement)
Criticism (EA Orgs)
Data (EA Community)
EA Funding
Global Catastrophic Risk
Argument against:
Obviously very related to Existential Risk, and to various other tags like Civilizational Collapse & Recovery and Nuclear Weapons
Argument for:
Conceptually distinct from all of those other tags (but it’s still possible that, in practice, posts that are partly about GCRs specifically will always partly be about one of the related tag topics as well)
Very important topic in itself, in my view
Some posts that might fit this tag but not the Existential Risk tag:
Food Crisis—Cascading Events from COVID-19 & Locusts
*updated* CTA: Food Systems Handbook launch event
Nonlinear Fund
Maybe it’s too early to make a tag for that org?
“Economic Policy” or “Macroeconomic Stabilization”
Pros:
Macroeconomic stabilization is one of the areas that Open Phil works on, but it’s not frequently discussed in the EA community. This tag could be specific to macro stabilization or it could encompass all areas of economic policy (aside from economic growth, which already has a tag).
Land use reform already has a tag.
Cons:
Could already be encompassed by Policy Change or Less-discussed causes.
Simulation Argument
Arguments for having this tag:
Seems a potentially very important macrostrategy question
There are at least some posts relevant to it
Arguments against:
Not sure if there are more than a couple posts highly relevant to this
Maybe this is not a prominent enough topic to get its own tag, rather than just being subsumed under the Global Priorities Research tag
This is currently a wiki-only tag. I doubt many posts are relevant to this, but we might make it usable again — we’re still in the process of figuring out how useful a tag has to be to be worth adding to the tagging menu.
Can I create a tag called “EA Philippines”, for posts by people related to EA Philippines, such as about our progress or research? I’d like to easily see a page compiling posts related to EA Philippines. I could create a sequence for this, but a sequence usually implies things are in a sequential order and more related to each other. But our posts will likely be not that related to each other, so a tag would likely be better.
A counterargument is that I currently don’t see any tags for any EA chapter, except for EA London updates. But these aren’t about EA London specifically; they’re just the updates that group compiles on the EA movement. Adding in one tag for one chapter seems harmless, but if eventually 50-100 chapters do this, things might get disorganized. Curious to hear others’ thoughts on this!
Quick thoughts:
There’s been some discussion of “country-specific tags” (and region-specific tags) here
I think perhaps decisions about general principles for country-specific tags and general principles for EA-chapter-specific tags should be made in tandem
E.g., because it’d be a bit weird to have both a tag for the Philippines as a country (e.g., about the relevance of that country for EA cause areas) and a tag for EA Philippines
Maybe the best option would be to just have country- or region-specific tags that also serve sort of as EA-chapter-specific tags, unless there are e.g. more than 10 posts relevant to that EA chapter specifically, or more than 20 posts that’d be in the whole tag?
(This is just one possible, quickly thought up principle)
But I’m not actually sure what the principles should be
E.g., if something like the above principle is adopted, I’m not sure what numbers should be used (I chose 10 and 20 pretty randomly)
And I’m not sure how that sort of principle should interact with the option of region-specific tags
E.g., maybe it’d be best to just have a tag like Southeast Asia, and let that play roles similar to those that country-specific and EA-chapter-specific tags would otherwise play for each country in that region?
Or maybe if there’s a tag for Southeast Asia, that’s so broad that it then becomes useful to have an EA Philippines tag (but without there being need for a Philippines tag)?
I think it’s a good idea to go with a Philippines tag rather than an EA Philippines tag. The two are quite interchangeable, because 100% of past posts related to the Philippines (there are 5 of them) were also written by people in EA Philippines, and 100% of past posts by EA Philippines are related to the Philippines.
I think this will continue for quite a few years for ~80-100% of posts, since we expect only a few people who aren’t affiliated with EA Philippines to be writing about the Philippines. And I think that 90-100% of posts by EA Philippines will relate to the Philippines.
I also agree that for national EA groups, rather than have an EA-chapter-specific tag as well as a country-specific tag, we should just have the country-specific tag.
I don’t understand how a post related specifically to an EA chapter wouldn’t also be related to the country, so I think one country tag (rather than a country and a chapter tag) is enough.
I would prefer to just have a Philippines tag already rather than a Southeast Asia tag. This is because:
I think we’ll hit 10 posts soon, i.e. by the midpoint of 2021
We already have 5 past posts that could be tagged under Philippines
I have ~3 more posts coming up (likely this month) that would also be tagged under Philippines
Therefore, rather than tagging these posts under Southeast Asia and then having to move them to Philippines after we hit 10 posts, I’d rather we just tag them under Philippines now.
I think the principle should be like “If there are 5 or more posts already for a specific country or EA national chapter, and if you would want to create a tag for easier visibility of posts related to that country/chapter, then you should create a tag for that specific country already.” Let me know what you think of this principle!
That sounds good to me :)
(Though of course this is just one person’s thoughts—I have no official role in the EA Forum; I’m just a nerd for tags.)
Alright. I’ve gone ahead and made the Philippines tag here, along with a description for it. I’ve also tagged all 5 past posts on this topic. The description I wrote could be a template for what other country-specific tags should look like. I felt that the description you wrote for China didn’t apply as much to the Philippines tag.
If you or anyone else wants to let me know if the description is alright, or if I should change anything, let me know!
The description looks good to me!
And I agree that it seems like it could be a useful example/template for other country-specific tags to draw on.
Country-specific tags
I just saw “creation of country specific content” as an example among the higher-rated meta EA areas in the recent article What areas are the most promising to start new EA meta charities—A survey of 40 EAs. What do you think about introducing tags for specific countries? E.g., I already have a couple of articles in mind that would be specifically interesting to members of the German/Austrian/Swiss communities.
Personally, I think:
it probably makes sense to have at least some tags to mark that posts are relevant to particular countries/regions
but that this should probably be something like 2-20 tags, just in the cases where there are several posts for which the tag would be useful
Rather than e.g. a tag for every country (which I’m not saying you proposed!)
Relevant prior tags and discussion
There are already tags for China and the European Union. The tag description for the China one (which I wrote) could perhaps be used as a model/starting point for other such tags:
And when I proposed the China tag, I wrote:
Yes, I also had something like 5-15 tags in mind. Your proposal for China makes sense to me, though I had a more “internal” perspective in mind, where EAs from the US/UK/Australia/Germany/Canada/etc. could get an overview of articles that are relevant for their specific country and are maybe indirectly encouraged to add something. So I’d write it as
Looking at the EA Survey results on geographic distribution, I’d maybe do
US
UK
Australia-NZ
Germany-Austria-Switzerland
Canada
Netherlands
France
Scandinavia
Southeast Asia
Latin America
Should we have a tag for “Feedback Request”?
We in EA Philippines have already made 2 posts (and have another one upcoming) that were specifically for requesting feedback from the global EA community on an external document we wrote, before we post that document for the general public. See here and here for examples from EA PH, and this other example from a different author.
I think it happens quite often that EAs or EA orgs ask for feedback on an external document or on a writeup they have rough thoughts on, so I think it’s worth having this tag.
A potential counterargument to this being a tag is that lots of authors (or most authors) would want feedback on their posts anyway, and it’s hard to separate which posts are feedback requests and which aren’t. I guess this tag would ideally be for posts where authors specifically want answers to a few questions, or want feedback on an external document, rather than just general feedback on the post itself. Would appreciate any thoughts on this!
Another potential argument in favor of having a tag for Feedback Request is it might encourage EAs to share work with each other and get feedback more often, which is likely a good thing.
In my workplace at First Circle, we have a process called “Request for Comment” or “RFC”, where we write documents in a specific format and share them in an #rfc Slack channel, so that people know we want feedback on a proposal or writeup in order to move forward with our work. This was very effective in getting people to share work, to get feedback asynchronously rather than via a synchronous meeting, and to house all feedback requests in one place. Maybe a “Feedback Request” tag could streamline things similarly?
For example, if an EA wants to see what they could give feedback on, they could click this tag to check out things they could give feedback on.
It could also be good practice for authors of feedback requests to put a deadline on when they need feedback by, so that people reading later know whether they should still give feedback once the deadline has passed.
I made a tag for requests, which I think applies here if there is a specific request for feedback with a timeframe. I’ll write a short post about it now.
Yeah, I think I’d personally lean towards letting the thing Brian is describing be covered by the Requests (Open) tag. This is partly because, as Brian notes, “lots of authors (or most authors) would want feedback on their posts anyway, and it’s hard separating which ones are feedback requests and which ones aren’t.”
I’m also not really sure I understand the distinction, or the significance of the distinction, between wanting feedback on an external doc before sharing it beyond the EA community and wanting feedback on a post before it, or an adapted form of it, is shared beyond the EA community. (One part of my thoughts here is that I think a decent portion of posts may ultimately make their way into things shared beyond the EA community, and sometimes the authors won’t be sure in advance which posts those are. E.g., MacAskill’s hinge of history post is now an academic working paper.)
That said, I’ve also appreciated the existence of Slack channels where people can solicit feedback from colleagues. (I’ve appreciated that both as an author and as a person who enjoys being helpful by giving feedback.) And the EA Editing & Review facebook group seems to demonstrate some degree of demand for this sort of thing in EA. So maybe there’s a stronger case for the tag than I’m currently seeing.
(OTOH, maybe the need could be well-met just by using the Requests (Open) tag and posting in EA Editing & Review?)
If a Feedback Request tag is made, perhaps it’d be worth linking in the tag description to Giving and receiving feedback, Asking for advice, and/or Discussion Norms?
Oh cool, yeah I guess this works!
Sorry if off topic, but how do I remove a tag after wrongly using it?
If you mean un-tagging a page, you vote down the tag’s relevance by hovering over the tag on that page and clicking the < arrow. If the relevance score drops to 0 or below, the tag is removed.
If you mean deleting a tag entirely (not just from one page), I think you’d have to message the EA Forum team?
More info on tags here and here.