Humanities Research Ideas for Longtermists
Summary
This post lists 10 longtermism-relevant project ideas for people with humanities interests or backgrounds. Most of these ideas are for research projects, but some are for summaries, new outreach content, etc. (See below for what I mean by “humanities.”)
The ideas, in brief:
Study future-oriented beliefs in certain religions or groups
Study the ways in which incidental qualities become essential to institutions
Explore fiction as a tool for moral circle expansion
Study how longtermists use different forms of media and how this might be improved
Study how non-EAs actually view AI safety issues, and how we got here
Produce anthropological/ethnographic studies of unusually relevant groups
Apply insights from education, history, and development studies to creating a post-societal-collapse recovery plan
Study notions of utopias
Analyze social media (and online forums) in the context of longtermism
Use tools from non-history humanities fields to aid history-oriented projects relevant for longtermism
Why it might be helpful to produce lists of projects for people with humanities backgrounds (or interests) to work on
Deliberately looking for and studying topics that are humanities-oriented could be a way to discover longtermist interventions that are hard to notice or tackle from other angles (e.g., a STEM angle), improve our views on known causes and interventions, and find topics that are better fits for some people than existing (non-humanities) project ideas would be.
If it is relatively easy to produce such lists, it suggests that we are systematically missing humanities ideas and tools from our reasoning, and that this gap is not explainable by a natural disconnect between longtermist values or concerns and non-STEM areas.[1] (If we had exhausted humanities approaches to longtermism, it would probably be hard to find previously unnoticed topics that seem reasonable.) It seems valuable to have diversity in backgrounds and perspectives, and the existence of this gap suggests that supporting humanities projects might be a way to improve on that front.
Collections like this can consolidate existing ideas and resources in one place, making it easier to find projects and collaborate as a community.
I am aware of talented people who have been put off EA (and longtermism) due to their general sense that the humanities are considered worthless. My sense is that EAs do see value in the humanities, and it might be worth making this clearer.
(Personal note) This project was helpful for me as a way to explore longtermist research.
Scope and disclaimers
The focus of the post is on the humanities disciplines most neglected in EA and longtermism, so I didn’t focus on history, philosophy, and psychology. (Those might also be neglected in the community, but there has been at least some mention of how they could be relevant for longtermism in places like the Forum.)[2] My use of the word “humanities” is loose—for this project, I accepted some fields that might be considered social sciences instead. In practice, I think the ideas listed here are most related to anthropology, archival studies, area studies, art history, (comparative) literature, (comparative) religion studies/theology, education, and media studies.
The list is not meant to be exhaustive by any means; in particular, the selection of topics here is heavily influenced by my own academic background (literature, sort-of-history, art, math). Some of the ideas are ideas for bringing existing research into EA rather than ideas for producing totally new research. It is also important to note that I have very little background in most of the areas involved in this list, and I wouldn’t be surprised if deeper research discovered that some of these topics have been covered by EAs. Finally, my understanding of religion is biased towards Christianity/Islam/Judaism, and my model is not great for other religions.
Reviewers of a draft of this post suggested additional resources that might be relevant for different topics; I linked these resources but have not read them all myself.
I am interning at Rethink Priorities for the summer, and wrote this post as a starter project.
Further actions
I would be really excited to see other people make lists like this, or comment on this one with other ideas, links to work that might have already been done in the area, or notes on what you think could be most and least useful.
If someone actually takes up one of these projects, it would be good to note this in a comment on this post. (And maybe we’ll come up with a better system for coordinating soon.)
Related links
Some EA Forum Posts I’d like to write (the post that generated this project)
A central directory for open research questions (has many links to other research-topic lists)
Improving the EA-aligned research pipeline: Sequence introduction
I did not include all the topics I generated for this project in this post. You can see the full list of projects I brainstormed at this link. For the post itself, I took a sample that seemed relatively promising and spanned a broad array of disciplines and approaches, and wrote those ideas out more carefully.
The list
1. Study future-oriented beliefs in certain religions or groups
If we can identify such beliefs and practices, we can use them to inform outreach.
Some potential subtopics
What ideologies or practices have made people seriously care about and try to protect the welfare of future generations? (E.g., bloodline-based beliefs: what influence do they have on people’s actions with respect to the future?)
What ideologies or practices have made people care a lot about the preservation of humanity/society?
What ideologies or practices have made people not care about the future? (Fatalism? Individualist mindsets?)
Study institutions or practices that morally align with consequentialist ideas and consider how they affected or did not affect future-oriented action. (This could help to identify outreach or impact possibilities, give us a sense of the robustness of certain ideas, direct our work by borrowing from older traditions, etc.)[3]
What are the implications of the above for the outreach and/or value-spreading efforts we should engage in or support?
Fields: Religion studies, anthropology, history
2. Study the ways in which incidental qualities become essential to institutions
What can this tell us about the stabilization of norms or values within cultures, movements, institutions? This might be relevant for community-building and for discussions of patient philanthropy. It might also aid in forecasting if we notice real patterns that would let us predict that a certain institution was in the process of adopting an incidental practice as part of its identity.
Some potential subtopics
Case studies of long-running institutions, religions, or movements and their change over time; how they diverged from their foundations or original mission
Case studies of institutions that survive dramatic historical or political moments mostly unchanged.
How did core values of religions/movements/institutions change as they grew? (Is there an equivalent concern for a “philosophy” or a movement?) (Potential case study: the Church of Jesus Christ of Latter-day Saints, or Mormonism.)
How would we notice this happening in a movement like EA or a philosophy like longtermism?
Are there asymmetrical patterns in the value drift we can detect in certain movements and institutions? Can we expect our values to improve or deteriorate in certain ways? How would this inform our approach to patient philanthropy?
Fields: Theology, religion studies, anthropology, history
3. Explore fiction as a tool for moral circle expansion
A typical narrative (outside of EA) is that reading fiction helps develop empathy. If true, and if moral circle expansion is important, this suggests that we might want to dedicate more energy to fiction. There might also be lessons to learn from case studies of sympathetic portrayals of nonhuman beings (and the possible connections with anthropomorphization) in fiction.[4]
Some potential subtopics
Investigate the legitimacy of the claim that fiction helps develop empathy (are there reasonable studies on this?). How does empathy development correspond to moral circle expansion?
If fiction is reasonably a tool for general empathy/moral circle expansion, what sort of fiction is good at this? Are there particular case studies that stand out?
Is typical young-adult (YA) or narrative fiction good for this? (The idea is that the reader must empathize with someone unlike themselves.)
Are satires noticeably good at pointing out moral contradictions?
Are things different when people read foreign works in translation?
Collect and study media that might help people learn to empathize with future beings (both as an independent longtermist cause and as a way to study the mechanism of empathy development).
List and analyze depictions of future people that succeed at evoking empathy and prompting action. (Discussion of one possible example.)
Consider depictions of other beings that are outside the typical moral circle and see if some types of portrayals evoke actionable empathy
e.g., do Pixar films that anthropomorphize fish or ants lead to better treatment of them? (Or at least lead to pushes for better treatment, even if the pushes themselves were not effective?)
Alternatively, if we find that they do evoke empathy but it leads to ineffective changes, can we channel the empathy of such media productively?
Does anthropomorphization in media create harmful prejudices or poorly calibrated expectations about the future and the potential sentience of various beings?
Consider historical cases where creators of fiction or popular media have unexpectedly deep moral/progressive/political stances with respect to the moral status of beings. (Or, unexpectedly shallow stances?) Some possible examples[5]: Voltaire, Čapek, Tolstoy
Fields: (Comparative) literature, education, media studies, art history, psychology, anthropology, history
4. Study how longtermists use different forms of media and how this might be improved
Are we unhelpfully ignoring some forms of media? Or, using them in ways that are historical accidents but in practice less helpful? Should we encourage more creative media?
Some potential subtopics
Analyze images used in EA or longtermist discussions and outreach.[6]
Are there accidental patterns in the images EAs (or various EA organizations) use? It would be helpful to take any such patterns into account explicitly, in case they do not align with our interests. (My personal/anecdotal take is that they often fall into tropes and activate my knee-jerk reaction against socialist realism — if this is true, it may be bad for outreach. Alternatively, it might reinforce stereotypes of EA, longtermism, our causes, etc.)[7]
Do the images we use meaningfully inform our thinking on some subjects? (If we use certain images of specific concepts, will they inappropriately inform our models of things we should keep open minds about? As a silly illustration: if our images of conflict always show tanks, maybe that unnecessarily focuses our thinking on land war.)
Suggest or support ways to diversify media for EA outreach (moving beyond nerdy podcasts and academic or intellectual writing); consider the pros and cons of these forms of media for different functions. Different forms of media to consider:[8]
Animations/Vox-style short videos about EA/longtermism[9]
Comics about EA/longtermism
Board/video games, like the paperclips clicker game, an AI policy game (that I haven’t tried), and this vegan game (which I also haven’t tried)
Fiction, including fanfiction and interactive fiction
Discussions and work on this topic: When can Writing Fiction Change the World?, Please use art to convey EA!, Ben West’s comment on an EA Forum post (outreach to high schoolers), Effective altruism art and fiction
See if someone has set up a study of author/creator success that avoids survivorship bias, and set up better systems for tracking the impact and quality of nonstandard media (whose reach and truthfulness are harder to evaluate), or try to do this yourself. (Same thing with influencers? For why survivorship bias matters here, see the toy sketch at the end of this list.)
Consider downsides of using non-prose media more frequently (and specific downsides of certain media). Some examples:
Non-prose media might be lower-fidelity, i.e. it might be harder to make sure that the message one wants to convey is actually the message that gets conveyed.
Non-prose media are generally slower media; they’re harder to produce fast, and might take more resources.
Non-prose media might be worse for asymmetric weapons.
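To make the survivorship-bias concern above concrete, here is a minimal Python sketch. All the numbers in it are invented for illustration, not drawn from any real dataset: it simulates creators whose work only becomes visible to us with a probability that depends on their success, and shows how badly a naive estimate of the success rate computed from the visible sample can overshoot the truth.

```python
import random

random.seed(0)

TRUE_SUCCESS_RATE = 0.05   # hypothetical: 1 in 20 creators "succeeds"
N_CREATORS = 100_000

creators = []
for _ in range(N_CREATORS):
    success = random.random() < TRUE_SUCCESS_RATE
    # Visibility is much higher for successes: failed work rarely reaches
    # us, so the sample we can actually study over-represents successes.
    visible = random.random() < (0.9 if success else 0.02)
    creators.append((success, visible))

visible_sample = [success for success, visible in creators if visible]
naive_estimate = sum(visible_sample) / len(visible_sample)

print(f"True success rate:                  {TRUE_SUCCESS_RATE:.1%}")
print(f"Naive estimate from visible sample: {naive_estimate:.1%}")
# The naive estimate lands far above 5%, because the failures have
# quietly dropped out of the sample before we could count them.
```

Any real study would need some way to sample from the full population of attempts (e.g. submission logs or funding applications), not just from the work that happened to surface.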
Fields: Media studies, visual arts, comparative literature, market research
5. Study how non-EAs actually view AI safety issues, and how we got here
My sense is that how most people feel about AI is probably strongly shaped by various historical propaganda efforts and fictional media (e.g. blockbuster sci-fi) rather than by current reality or by scenario analysis that aims more at realism and usefulness than at entertainment. This might suggest that we should study popular media on AI more carefully. We could also study perceptions of other technology-driven risks as proxies for perceptions of AI. (However, I’m not sure whether the best approaches here are humanities-based or survey-based, or how much public opinion actually matters for e.g. AI governance questions.)
Some potential subtopics
Study the psychological and anthropological phenomena of fear of GMOs, vaccination, “unnaturalness,” alternative proteins, etc., both to compare and find patterns relevant for AI, and for more independent goals like animal welfare and catastrophe recovery planning.
Survey the history of the idea of AI in popular discourse. (How important has fiction been in shaping popular understanding of AI and its risks?)
Study representations of AI in contemporary media.
Fields: Literature, history, anthropology, media studies, psychology
6. Produce anthropological/ethnographic studies of unusually relevant groups
We might want to study the EA community itself as well as communities that are important for specific cause areas or interact with known x-risks and pathways for potential mitigation strategies (e.g. ML labs).
Some potential subtopics
Ethnographic studies of directly important communities, like ML labs. These could give us better models of risks and protections and of possible interventions (e.g. making sure that the people who work in these labs are aware of x-risks), and generally help us identify the levers we could pull.
Possible resources: Reducing long-term risks from malevolent actors, and Safety culture (Wikipedia)
Study our own community. (Relevant: I want an ethnography of EA[10])
A careful analysis of the EA community could reveal some of our epistemic biases, potential pitfalls, unhelpful practices (like jargon), and possible low-hanging fruit in terms of improvement, expansion, etc. An ethnography might also help us assess more specific criticisms of EA.
Compare the EA community to other groups and movements. (Or a smaller project: produce a list of EA-adjacent communities.)
Study groups that have specific qualities we want to emulate.
Some possible examples: open-source investigation groups like Bellingcat (which tend to rely on volunteers and on tools we might want to learn from), or Teach for America’s (movement-)scaling.
Fields: Anthropology/ethnography, history
7. Apply insights from education, history, and development studies to creating a post-societal-collapse recovery plan
Help produce civilization recovery materials in case of a catastrophe that doesn’t quite destroy everyone, but which brings down most of the institutions and systems humanity has developed. (Studying and planning for this seems incredibly hard, but the payoff could be big enough that it might be worth looking into.)
Some potential subtopics
Consider which important aspects of society are the least likely to re-emerge without specific planning. Why? What are the blockers/bottlenecks? How does that differ across collapse scenarios?
How can we support possible efforts to rebuild? (Consider things like scientific/cultural/moral recovery.)
Fields: Education, anthropology, archival studies, history
8. Study notions of utopias
If we have a better understanding of attitudes to utopias (and accounts of “flourishing futures”), we might be able to better direct our outreach and advocacy efforts.
Some potential subtopics
Is it helpful to have a clearer picture of utopia (“flourishing futures”) to work towards? In particular, does it help people become more future-oriented? Does it help motivate EAs? Does it distract people from reasonable interventions and/or more urgent issues?
Does creating and promoting pictures of flourishing futures harm (or help) outreach or our reputation in some ways (e.g. by appearing naive, outlandish, callously disregarding of present suffering, or reminiscent of totalitarian ideologies)?
What (if any) are helpful images/notions of utopia for any given concrete purpose?
What do notions of utopia across religions and (sub)cultures look like?[11]
If things are meaningfully different across cultures, we might want to shy away from concrete images of utopia.
Should we try to find compromises between these? What are ways of creating widely appealing visions of utopia?
Fields: Comparative literature, art history, religion studies, history, psychology
9. Analyze social media (and online forums) in the context of longtermism
We can consider social media both as a factor that shapes our modes of internal communication (context for longtermist discussions) and as a tool for outreach and influencing decisions and opinions.
Some potential subtopics
An analysis of our online forums’ architecture, in the interest of shaping culture and catching biases or gaps.
An analysis of the language of EAs/longtermists (along these lines) with the goal of consciously shaping outreach and discussion on e.g. longtermism. What are the discourse norms and common rhetorical strategies that have developed? Which, if any, seem to be unproductive?
Meme culture—is it helpful in EA? Is it a good outreach tool? How should we improve our modes of interacting with memes?[12]
Similar questions about the rationalist community
Study social media and advertising strategies to understand how susceptible humans are to certain persuasion strategies at different points in their lives. Is this a risk factor for AI-aided totalitarian state possibilities? Can we set up safeguards for pathways of persuasion (especially ones that target vulnerable people)?
Should longtermists use social media more often as an advocacy or advising tool? How useful are Twitter and other online platforms as tools for influencing high-stakes decision-making?
Did the actions of people with large Twitter followings who tweeted about pandemic interventions affect real (CDC, WHO, and US government) decisions in measurable ways? Some case studies here could be Nate Silver (e.g. vaccine side-effects), Matt Yglesias (mid-pandemic, vaccine prioritization), and Zeynep Tufekci (early on, masks); see also this study. (A minimal sketch of one way such an effect could start to be tested appears at the end of this list.)
To what extent should longtermists focus on producing and communicating research publicly instead of just circulating it internally or in a narrow academic sphere?
Which platforms are more conducive to this sort of influence?
How tractable is it to become influential (specifically for high-stakes decisions) on something like Twitter?
Does the existence of this form of influence pose risks; does this increase the chances of reactive decision-making? I can imagine that it might give unqualified “influencers” or viral pieces of media authority they do not deserve. Are there ways to improve this?
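As a hedged illustration of how the Twitter case studies above could be started, here is a minimal before/after comparison in Python (using SciPy’s two-sample t-test). Everything here is hypothetical: `before_after_test` is a name I made up, and the demo data is synthetic. A serious analysis would need interrupted time-series or synthetic-control designs, plus a defensible real-world metric of “policy attention.”

```python
from datetime import date, timedelta
import random

from scipy import stats  # SciPy's Welch t-test; any stats library would do

def before_after_test(dates, values, event_date, window_days=14):
    """Compare a metric in fixed windows before and after an event date.

    Deliberately crude: a real analysis would need to control for trends,
    seasonality, and confounding events (interrupted time-series or
    synthetic-control designs would be more appropriate).
    """
    before = [v for d, v in zip(dates, values)
              if event_date - timedelta(days=window_days) <= d < event_date]
    after = [v for d, v in zip(dates, values)
             if event_date <= d < event_date + timedelta(days=window_days)]
    return stats.ttest_ind(after, before, equal_var=False)

# Demo on synthetic data (purely hypothetical numbers): a daily
# "policy attention" metric that jumps after a widely shared tweet.
random.seed(1)
event = date(2021, 3, 1)
days = [event + timedelta(days=i) for i in range(-30, 30)]
metric = [random.gauss(10, 2) + (5 if d >= event else 0) for d in days]

result = before_after_test(days, metric, event)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

Even with real data, a significant before/after difference would only be suggestive; the qualitative work of tracing whether decision-makers actually saw and responded to the tweets seems at least as important.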
Fields: Comparative literature? Media studies, psychology, market research
10. Use tools from non-history humanities fields to aid history-oriented projects relevant for longtermism
This likely entails using cultural artefacts as sources of data.
Some potential subtopics
Study scientific or tech-oriented communities throughout history (to improve our understanding of the history of science and technology)
Consider places of cultural or intellectual exchange beyond a narrow academic focus—studies of such places exist, but might not have bled into EA/longtermist communities. (E.g. study the ways early modern European ship-designers spread their developments and interacted with academic communities.[13])
Produce a compilation of (good) analyses of knowledge diffusion systems (processes by which truths became “accepted”)
Philosophical models of such pathways
Specific historical moments of reflection on knowledge diffusion (e.g. Robert Boyle’s writing)
Comb for past verifiable prediction sets to see when long-term predictions were reasonable and whether there are noticeable patterns. (Pull predictions from broad and good document samples—this could include personal correspondence, fictional work, etc. Alternatively, try producing a complete analysis of the implicit forecasts in the writings of historical people who are relevant for EA.[14])
Consider whether it is possible to use cultural artefacts as a source of data for forecasting.
For example, if many literary works published in a time/place are suddenly more sympathetic to some kind of animal, does that correlate with or predict broad and measurable shifts in the social status of the animal?[15] Are there certain kinds of cultural artefacts that are more or less useful for this sort of thing? If there is some correlation, what are the causal directions? (A minimal sketch of what such a check could look like appears at the end of this list.)
An analysis of the hinge-of-history idea:
Formalize the thesis or question.
Attempt to make it less individualistic (i.e. are there ways to avoid defining the question in terms of the ease-of-impact of an individual actor?).
Produce a more careful outside-view analysis of the history of people and communities thinking that they were living at incredibly influential times. (E.g. The Great Horse Manure Crisis of 1894)
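Here is a minimal sketch of what the artefacts-as-forecasting-data check above could look like. All data below is invented toy data, `lagged_correlation` is a hypothetical helper, and a correlation found this way would say nothing on its own about causal direction.

```python
import statistics  # statistics.correlation requires Python 3.10+

def lagged_correlation(sympathy_by_year, status_by_year, lag=5):
    """Pearson correlation between artefact sympathy in year t and the
    social-status proxy in year t + lag (a hypothetical helper)."""
    pairs = [(sympathy_by_year[year], status_by_year[year + lag])
             for year in sympathy_by_year
             if year + lag in status_by_year]
    xs, ys = zip(*pairs)
    return statistics.correlation(xs, ys)

# Toy, invented inputs: sympathy scores (say, from hand-coding or sentiment
# analysis of digitized novels) and a status proxy (say, counts of welfare
# petitions or bills mentioning the animal), both indexed by year.
sympathy = {1880: 0.1, 1885: 0.2, 1890: 0.5, 1895: 0.6, 1900: 0.7}
status = {1885: 1, 1890: 2, 1895: 6, 1900: 8, 1905: 11}

print(f"5-year lagged correlation: {lagged_correlation(sympathy, status):.2f}")
```

The hard part would be the inputs, not the arithmetic: building a corpus and a coding scheme that avoid the selection biases flagged in footnote 15.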
Fields: Archival studies, anthropology, literature or media studies, history, history of science
A few final notes or reminders on further actions
The goal of this list is to provide a starting point and an example — not a complete classification of longtermist humanities topics to explore.
I am quite uncertain about most things here, and do not have a lot of training in the relevant fields, so you should not take this list as a strong statement about the value of doing serious research on these topics.
I would be really excited to see other people make lists like this, or comment on this one with other ideas, links to work that might have already been done in the area, or notes on what you think could be most and least useful.
If someone actually takes up one of these projects, it would be good to note this in a comment on this post. (And maybe we’ll come up with a better system for coordinating soon.)
Credits
This essay is a project of Rethink Priorities.
It was written by Lizka, an intern at Rethink Priorities. Thanks to Janique Behman, Neil Dullaghan, David Mathers, Peter Wildeford, and especially Michael Aird and my supervisor Linch Zhang for their helpful feedback. Any mistakes are the fault of Linch Zhang. If you like our work, please consider subscribing to our newsletter. You can see all our public work to date here.
Notes
[1] For context, it took me around 5 hours to generate the full list of ideas, although it took significantly longer to organize them, select and elaborate on my favorite ones, and edit them for clarity.

[2] As one example, the list of “Research questions organized by discipline” compiled by 80,000 Hours has history, philosophy, and psychology questions, but not really other humanities/social sciences questions. The 2019 EA Survey found that 14.2% of respondents had studied “Social Sciences”, 13.4% had studied Philosophy, 13.1% had studied Arts & Humanities, and 7.5% had studied Psychology (and 24% had studied Computer Science). I’m not entirely sure how to interpret this, given that survey-takers could select multiple responses and some of the categories are often loosely understood, but this does seem to imply that philosophy and psychology are fairly well represented. [Note: addition from 2022: here’s a great post on how to apply a background in psychology to AI safety.]

[3] Some writing on topics around this already exists: Long-Term Influence and Movement Growth: Two Historical Case Studies (considers state consequentialism, a.k.a. “Mohist consequentialism”), What are some historical examples of people and organizations who’ve influenced people to do more good?, and, off the Forum, Consequences of Compassion: An Interpretation and Defense of Buddhist Ethics (note that I have not read this book). Ideas from Buddhism (and presumably other religions) can inform philosophical frameworks of consequentialism (or theories of well-being), as in this paper. Also potentially relevant for this topic more broadly: Against moral advocacy.

[4] The Sentience Institute’s work is likely very relevant.

[5] It would probably be better to crowdsource the list of examples, though; the selection of examples I can come up with myself will be very biased.

[6] This post, which presents and discusses two different infographics based on The Precipice, could be relevant.

[7] A sketch of how this could be done: find a good sample of EA images, get someone with a background in visual arts/market research/art history/design/some other relevant field or skill to produce a list of trends or possible concerns, then test these specific questions on audiences in a more quantitative way. It’s also possible that it’s easier to just talk to people involved in putting images out there to figure out their main purposes (Facebook banners? etc.) and uses, and how they get created or selected.

[8] Individual EA examples of many of these media do exist, but it seems hard to find them, and my sense is that we could do more of this. I would appreciate any links or suggestions on how to improve on that!

[9] This episode of the Clearer Thinking podcast might be relevant, although I have not listened to it yet.

[10] It seems that there have been attempts to produce ethnographies, but it isn’t clear how successful they were. (Which might be an argument against thinking it will succeed in the future, but more details would be helpful.)

[11] Possible methodology: use cultural artefacts (e.g. fiction) to generate ideas and hypotheses, and then test those hypotheses/construct validity using survey construction tools (borrowing preference-determination tools from psychology). Holden Karnofsky discusses something like this in an 80,000 Hours podcast episode.

[12] Studying memes is also mentioned in this AI Governance research agenda.

[13] Relevant readings: Steven Shapin, “Pump and Circumstance: Robert Boyle’s Literary Technology,” Social Studies of Science 14, no. 4 (1984); Brian Ogilvie, The Science of Describing: Natural History in Renaissance Europe (University of Chicago Press, 2008); Pamela Long, Artisan/Practitioners and the Rise of the New Science, 1400-1600 (Oregon State University Press, 2011); and The Technology Trap (about the Industrial Revolution; I haven’t read this myself).

[14] Two such people could be Benjamin Franklin and Jeremy Bentham, although depending on the methodology, selection bias could be a concern.

[15] This would be susceptible to selection biases, so it would be necessary to make sure to include cases where this is not true. The first step might be to develop coherent inclusion criteria to produce a list of cases. The following links were suggested as sources of methodologies to emulate: link 1 and link 2.
:D Good line. I hope you snuck this in and Linch didn’t notice.
:D
Thanks for this post!
I agree with your points in the “Why it might be helpful to produce lists of projects for people with humanities backgrounds (or interests) to work on” section, and think each of those 10 ideas seems at least worth some people considering. (Obviously you and I discussed that earlier—I’m just saying it publicly too!)
I’ve now added this post to my central directory of open research questions, to hopefully increase the chance that people come across this collection later when looking for research ideas.
Just flagging that I would be excited to connect with anyone who is working on / considering working on 4) how longtermists use different forms of media and how this might be improved, 5) how non-EAs view AI safety issues, 8) notions of utopias, 9) social media in the context of longtermism; all of which relate to new projects at the Future of Life Institute. Feel free to reach out!
Hi Georgiana! Would love to chat (I think we overlapped digitally at SRF in FHI!). Proposed something similar here and delighted to see similar motivations / hopes, and would love to discuss support / co-creation / potential collaboration! https://mirror.xyz/qualiatinker.eth/6c4VLPaS3hqpuWT2iz4yEXRRMHFtMa4vilZvT5lKdmI
Let me know how best to reach out, or you can reach me at jasmine@verses.xyz!
Great list—even though the EA community certainly doesn’t exclude or devalue the humanities, I think it can be perceived that way. As someone with deep pulls toward narrative and cultural-change practitioners, I particularly like that you’ve included literature/media here—narrative change is a nascent field, but an oft-touted accomplishment is the legalization of gay marriage: Cultural change in acceptance of LGBT people: lessons from social marketing
If narrative can influence policy then this kind of work does seem important for building out institutions capable of governing for the long-term.
Quick note: I’m considering switching thesis topics to “Did the actions of people with large Twitter followings who tweeted about pandemic interventions affect real (CDC, WHO, and US gov) decisions in measurable ways? Some case studies here could be Nate Silver (e.g. vaccine side-effects), Matt Yglesias (mid-pandemic, vaccine prioritization), and Zeynep Tufekci (early on, masks).”
Not at all firm on this but just wanted to make a note here, as I would love to talk about how to make my Public Policy thesis EA-aligned!
seriously, send help.
Update: This article seems to be pretty relevant to the above question.
Unfortunately, I’m starting to think my interest is even more qualitative than the above. So I’m not sure how much I’ll be contributing to that research question.
Hi folks! Thank you so much for the warm reception this post has received so far. I’m actively trying to improve my EA-aligned research and writing skills, so I would really appreciate any constructive feedback you might be willing to send as a comment or a private message. (Negative feedback is especially appreciated.) If you are worried about wording criticism in a diplomatic way, Linch (my supervisor) has also offered to perform the role of a middleman.
Of course, we would also appreciate being informed if any of the proposed research ideas actually change your decisions (e.g. if you end up writing a paper or thesis based on an idea listed here). (And I would be really curious to see where that goes.)
On a different note, there are additional posts that I would have linked to this one if I had published later. In particular, the Vignettes Workshop (AI Impacts), Why EAs researching mainstream topics can be useful (note: Michael and I both work at Rethink Priorities), this post about a game on animal welfare that just came out (I haven’t tried the game), and this question about the language Matsés and signaling epistemic certainty.
Hi folks, I want to second Lizka in saying that if you have any feedback, feel free to do any of: comment here, PM me on this site or email me at linch@rethinkpriorities.org.
I’m especially excited for people to point out empirical or conceptual errors here, as the person at fault for all mistakes in this post. :)
Yeah, I think this is an interesting idea. This morning, I made a Slack workspace for “EA Creatives & Communicators”, to provide a space for interactions between people in the EA community who aim to do good through various types of creative or communications activities—e.g., by covering EA-relevant topics or important messages via documentaries, other types of videos, short stories, maybe journalism. These interactions could involve things like asking for advice/feedback, sharing tips and resources, and finding collaborators. If anyone else is interested in joining that Slack, send me a message.
There was a direct catalyst for me making that Slack other than this post, but it’s quite possible that this post primed me to respond to that catalyst by making that Slack, rather than just giving the one specific person I was talking to some suggested names and tips. (So in case this post did prime me for that, thanks again for your work on it!)
This is a great post. Further, existing basic research in the humanities/social sciences could provide useful insights for longtermism without the need to conduct any original research—for example, reading through some historical case studies and synthesizing potential takeaways for longtermism.
Notably, research for longtermism can easily overlap with other cause areas, such as reducing existential risk or catastrophes. There’s low-hanging fruit here.
I’m currently working (Summer 2021) with Effective Altruism for Christians on increasing research in theology/religion and EA, so I have a special interest in the first item on the list, “1. Study future-oriented beliefs in certain religions or groups”. Recommendations are welcome!