I think you’re right that some of the abundance ideas aren’t exactly new to EA folks, but I also think it’s true that: (1) packaging a diverse set of ideas/policies (re: housing, science, transportation) under the heading of abundance is smart and innovative, (2) there is newfound momentum around designing and implementing an abundance-related agenda (eg), and (3) the implementation of this agenda will create opportunities for further academic research (enabling people to, for instance, study some of those cruxes). All of this to say, if I were a smart, ambitious, EA-oriented grad student, I think I would find the intellectual opportunities in this space exciting and appealing to work on.
Currently, the online EA ecosystem doesn’t feel like a place full of exciting new ideas, in a way that’s attractive to smart and ambitious people.
I think one thing that has happened is that as EA has grown/professionalized, an increasing share of EA writing/discourse is occurring in more formal outlets (e.g., Works in Progress, Asterisk, the Ezra Klein podcast, academic journals, and so on). As an academic, it’s a better use of my time—both from the perspective of direct impact and my own professional advancement—to publish something in one of these venues than to write on the Forum. Practically speaking, what that means is that some of the people thinking most seriously about EA are spending less of their time engaging with online communities. While there are certainly tradeoffs here, I’m inclined to think this is overall a good thing—it subjects EA ideas to a higher level of scrutiny (since we now have editors, in addition to people weighing in on Twitter/the Forum/etc. about the merits of various articles) and it broadens exposure to EA ideas.
I also don’t really buy that the ideas being discussed in these more formal venues aren’t exciting or new; as just two recent examples, I think (1) the discourse/opportunities around abundance are exciting and new, as is (2) much of the discourse happening in The Argument. (While neither of these examples is explicitly EA-branded, they are both pretty EA-coded, and lots of EAs are working on/funding/engaging with them.)
Thanks for writing this. It feels like the implicit messaging, ideas, and infrastructure of the EA community have historically been targeted towards people in their 20s (i.e., people who can focus primarily on maximizing their impact). A lot of the EA writing (and EAs) I first encountered pushed for a level of commitment to EA that made more sense for people who had few competing obligations (like kids or aging parents). This resonated with me a decade ago—it made EA feel like an urgent mission—but today feels more unrealistic, and sometimes even alienating.
Given that the average age of the EA community is increasing, I wonder if it’d be good to rethink this messaging/set of ideas/infrastructure; to create a gentler, less hardheaded EA—one that takes more seriously the non-EA commitments we take on as we age, and provides us with a framework for reconciling them with our commitment to EA. (I get the sense that some orgs—like OP, which seems to employ older EAs on average—do a great job of this through, e.g., their generous parental leave policies, but I’d like to see the implicit philosophy connoted by these policies become part of EA’s explicit belief system and messaging to a greater extent.)
I watched this with a non-EA (omnivore) friend, and we both found it compelling, informative, and not preachy. Nice job!! With respect to the advocacy ask you make at the end: we would benefit from further guidance on how exactly to do this, and what practical steps (aside from changing diets) people who care about these issues should take. For instance, I don’t have a great sense of how to talk about factory farming, because it’s hard to broach these issues without implicitly condemning someone’s behavior (which often feels both socially inappropriate and counterproductive). It would be easier to broach this, I think, if there were specific positive actions I could recommend, or at least some concrete guidance on what to say versus not say when this comes up. I have been a vegetarian for many years, so this kind of conversation has come up organically many times (often over meals), and my natural inclination is always to quickly explain why I don’t eat meat and then change the subject, so I don’t make whoever asked uncomfortable (since often they’re eating meat). But presumably there’s a better way to approach this, so I’m curious if you/others have thoughts, or if there’s research on this.
One q: why are viewer minutes a metric we should care about? QAVMs seem importantly different from QALYs/DALYs, in that the latter matter intrinsically (i.e., they correspond to suffering associated with disease). But viewer minutes only seem to matter if they’re associated with some other, downstream outcome (Advocacy? Donating to AI safety causes? Pivoting to work on this?). By analogy, QAVMs seem akin to “number of bednets distributed” rather than something like “cases of malaria averted” or “QALYs.”
The fact that you adjust for quality of audience seems to suggest a ToC in the vein of advocacy or pivoting, but I think this is actually pretty important to specify, because I would guess the theory of change for these different types of media (e.g., TikToks vs. long-form content) is quite different, and one unit of QAVM might accordingly translate differently into impact.
I would also guess that the overwhelming majority (>95%) of highly impactful jobs are not at explicitly EA-aligned organizations, just because only a minuscule fraction of all jobs are at EA orgs. It can be harder to identify highly impactful roles outside of these specific orgs, but it’s worth trying to do this, especially if you’ve faced a lot of rejection from EA orgs.
Okay, so a simple gloss might be something like “better futures work is GHW for longtermists”?
In other words, I take it there’s an assumption that people doing standard EA GHW work are not acting in accordance with longtermist principles. But fwiw, I get the sense that plenty of people who work on GHW are sympathetic to longtermism, and perhaps think—rightly or wrongly—that doing things like facilitating the development of meat alternatives will, in expectation, do more to promote the flourishing of sentient creatures far into the future than, say, working on space governance.
I apologize because I’m a bit late to the party, haven’t read all the essays in the series yet, and haven’t read all the comments here. But with those caveats, I have a basic question about the project:
Why does better futures work look so different from traditional, short-termist EA work (i.e., GHW work)?
I take it that one of the things we’ve been trying to do by investing in egg-sexing technology, strep A vaccines, and so on is make the future as good as possible; plenty of these projects have long time horizons, and presumably the goal of investing in them today is to ensure that—contingent on making it to 2050—chickens live better lives and people no longer die of rheumatic heart disease. But the interventions recommended in the essay on how to make the future better look quite different from the ongoing GHW work.
Is there some premise baked into better futures work that explains this discrepancy, or is this project in some way a disavowal of current GHW priorities as a mechanism for creating a better future? Thanks, and I look forward to reading the rest of the essays in the series.
I’m not saying something in this realm is what’s happening here, but in terms of common reasons people identify as EA-adjacent, I think there are two potential kinds of brand confusion one may want to avoid:
1. Associations with a particular brand (what you describe)
2. Associations with brands in general:
I think EAs often want to be seen as relatively objective evaluators of the world, and this is especially true about the issues they care about. The second you identify as being part of a team/movement/brand, people stop seeing you as an objective arbiter of issues associated with that team/movement/brand. In other words, they discount your view because they see you as more biased. If you tell someone you’re a fan of the New York Yankees and then predict they’re going to win the World Series, they’ll discount your view more than if you had just said you follow baseball but aren’t on the Yankees bandwagon in particular. I suspect some people identify as politically independent for this same reason: they want to appraise issues objectively, and/or want to seem like they do. My guess is this second kind of brand confusion concern is the primary thing leading many EAs to identify as EA-adjacent; whether or not that’s reasonable is a separate question, but I think you could definitely make the case that it is.
It’s a tractability issue. In order for these interventions to be worth funding, they should reduce our chance of extinction not just now, but over the long term. And I just haven’t seen many examples of projects that seem likely to do that.
This is a cool idea! Will this be recorded for people who can’t attend live?
Edit: never mind, I think I’m confused; I take it this is all happening in writing/in the comments.
Without being able to comment on your specific situation, I would strongly discourage almost anyone who wants to have a highly impactful career from dropping out of college (assuming you don’t have an excellent outside option).
There is sometimes a tendency within EA and adjacent communities to critique the value of formal education, or to at least suggest that most of the value of a college education comes via its signaling power. I think this is mistaken, but I also suspect the signaling power of a college degree may increase—rather than decrease—as AI becomes more capable, and it may become harder to use things like work tests to assess differences in applicants’ abilities (because the floor will be higher).
This isn’t to dismiss your concerns about the relevance of the skills you will cultivate in college to a world dominated by AI; as someone who has spent the last several years doing a PhD that I suspect will soon be able to be done by AI, I sympathize. Rather, a few quick thoughts:
Reading the new 80k career guide, which touches on this to some extent (and seeking 80k advising, as I suspect they are fielding these concerns a lot).
Identifying skills at the intersection of your interests, abilities, and things that seem harder for AI to replace. For instance, if you were considering medicine, it might make more sense to pursue surgery rather than radiology.
Taking classes where professors are explicitly thinking about and engaging with these concerns, and thoughtfully designing syllabi accordingly.
In the past 30 years, HIV has gone from being a lethal disease to an increasingly treatable chronic illness.
Yeah, I think these are great ideas! I’d love to see the Forum prize come back; even if there were only a nominal amount of (or no) money attached, I think it would still be motivating; people like winning stuff.
Thanks for writing this! Re this:
Perhaps the most straightforward way you can help is by being more active on the Forum. I often see posts and comments that don’t receive enough upvotes (IMO), so even voting more is useful.
I’ve noticed that comments with more disagree than agree votes often have more karma votes than karma points (i.e., some people are downvoting them as well). Whether this is good or bad depends on the quality of the comment, but sometimes the comments are productive and helpful, and so the fact that people are downvoting them seems bad for a few reasons: first, it disincentivizes commenting; second, it incentivizes saying things that you think people will agree with, even at the expense of saying what is true. (Of course, it’s good to try to frame things more persuasively when this doesn’t come at the cost of speaking honestly.) The edit here provides an example of how I think this threatens to undermine epistemic and discursive norms on the Forum.
I’m not sure what the solution is here—I’ve suggested this previously, but am not sure it’d be helpful or effective. And it may turn out that this issue—how does the Forum incentivize and promote helpful comments that people disagree with?—is relatively intractable, or hard to solve without making sacrifices in other domains. (Another thought that occurred to me is doing what websites like the NYT do: having “NYT recommended comments” and “reader recommended comments,” but I assume the mods don’t want to be in the business of weighing in on the merits of particular comments.)
In developing countries, infectious diseases like visceral gout (kidney failure leading to poor appetite and uric acid build up on organs), coccidiosis (parasitic disease causing diarrhoea and vomiting), and colibacillosis (E. coli infection) are common.
I don’t think visceral gout is an infectious disease. I also don’t think chickens can vomit. Two inaccuracies in this one sentence just made me wonder if there were other inaccuracies in the article as well (though I appreciate how deeply researched this is and how much work went into writing it).
Thanks for your very thoughtful response. I’ll revise my initial comment to correct the point I made about funding; I apologize for portraying this inaccurately.
Your points about the broadening of the research agenda make sense. I think GPI is, in many ways, the academic cornerstone of EA, and it makes sense for GPI’s efforts to map onto the efforts of researchers working at other institutions and in a broader range of fields.
And thanks also for clarifying the purpose of the agenda; I had read it as a document describing GPI’s priorities for itself, but it makes more sense to read it as a statement of priorities for the field of Global Priorities Research writ large. (I wonder if, in future iterations of the document—or even just on the landing page—it might be helpful to clarify this latter point, because the documents themselves read to me as more internal-facing, e.g., “This document outlines some of the core research priorities for the economics team at GPI.” Researchers not affiliated with GPI might, perhaps, be more inclined to engage with these documents if they more explicitly laid out a research agenda for researchers in philosophy, economics, and psychology aiming to do impactful research.)
Thanks for sharing this! I think these kinds of documents are super useful, including for (e.g.) graduate students not affiliated with GPI who are looking for impactful projects to focus their dissertations on.
One thing I am struck by in the new agenda is that the scope seems substantially broader than it did in prior iterations of this document; e.g., the addition of psychology and of projects related to AI/philosophy of mind in the philosophy agenda. (This is perhaps somewhat offset by what seems to be a shift away from general cause prioritization research.)
I am wondering how to reconcile this apparent broadening of mission with what seems to be a decreasing budget (though maybe I am missing something)—it looks like OP granted ~$3 million to GPI approximately every six months between August 2022 and October 2023, but there are no OP grants documented in the past year; there was also no Global Priorities Fellowship this year, and my impression is that post-doc hiring is on hold. Am I right to view the new research agenda as a broadening of GPI’s scope, and could you shed some light on the feasibility of this in light of what (at least at first glance) looks like a more constrained funding environment?
EDIT: Eva, who currently runs GPI, notes that my comment paints a misleading picture of the funding environment. While she writes that “the funding environment is not as free as it was previously,” the evidence I cite doesn’t really bolster this claim, for reasons she elaborates on. I apologize for this.
No shade to the mods, but I’m just kind of bearish on mods’ ability to fairly determine what issues are “difficult to discuss rationally,” just because I think this is really hard and inevitably going to be subject to bias. (The lack of moderation around the Nonlinear posts, Manifest posts, Time article on sexual harassment, and so on makes me think this standard is hard to enforce consistently.) Accordingly, I would favor relying on community voting to determine what posts/comments are valuable and constructive, except in rare cases. (Obviously, this isn’t a perfect solution either, but it at least moves away from the arbitrariness of the “difficult to discuss rationally” standard.)
As you note, there are different ways to be cool. And I think the way in which EA is sometimes cool is when impressive people sympathetic to EA ideas do impactful things. Like, most of the general public probably wouldn’t say Ezra Klein is cool, but a lot of EAs (and potential EAs) probably would. I think EA’s path to greater coolness—to the extent we think this matters, which I’m not convinced it does—is probably via finding and supporting more smart, likable people who publicly buy into EA ideas while doing high-profile, impactful work.