+1 - Ecosystem services (and more generally, Earth systems) are infamously hard to pin down, which is why I often take any bottom-line analyses of climate change with gigantic grains of salt (in both directions). For example, there’s currently a gold rush on technology to quantify the value of soil sequestration, forest sequestration, etc, and as far as I can tell, experts are still bickering over the basics of how to calculate these values with any accuracy. Those are just a few small pieces of a very, very large pie that is difficult to value. Perhaps the modeling takes these massive uncertainties into consideration, but I’m skeptical (and will have to do some research of my own).
Lots of good stuff here! I work in the climate change field, so I have expertise here, although it’s crucial to note I haven’t spent my career comparing the risk that climate change poses relative to the other big topics that concern EAs.
It’s not surprising, given my biases, that I always grimace a little when EAs talk about climate. It’s an easy target—lots of attention, tons of media hubbub, plenty of misinformed opinions and outright grifters, and of course, the lack of a direct existential threat. Hey look, here’s an issue that most EAs care about that’s already getting attention and talent, and if you run the numbers, according to our values...that’s more than enough attention! So come work on an underserved issue like AI or pandemic risk! It makes sense to use climate as a point of contrast, and I’m glad that 80K Hours still takes it seriously. However, the framing could maybe be better; I’m not sure, and I need to think about it more.
One small qualm with the well-researched piece—the plastic bag bit is off. Disregarding the fact that plastic bag fees aren’t just about carbon reductions, that graph shows that as long as you don’t make reusable bags out of cotton, reusable bags do exactly what you want them to do. Now, that’s not to say those policies are great—there are plenty of issues with them—but I don’t find the example to be compelling evidence, especially because no policy demands cotton bags, nor do most people use them. I don’t remember that Danish LCA being particularly good either.
Nick—absolutely! Making relocation more effective is imperative whether it be international or domestic. I believe that domestic migration is wildly underserved but the work done on that topic can and should be expanded to help facilitate immigration.
Thanks for sharing, Chris! I’ve been meaning to reach out to Teleport for a while to learn about their offerings. They’ve put together some decent data, but the UI lacks something integral. I do like their intake survey as a way to narrow choices (a la @evelynciara’s comment). The entire platform feels… abandoned? Could be a good partner down the line on the data side.
Fantastic! Yeah, the basic idea isn’t novel—I’ve heard it a dozen times over the years. However, as far as I can tell, no one has delivered on it, probably because it’s not particularly monetizable. Ultimately I think this kind of product is best suited to be a loss leader or public good.
Adding this as a separate comment to maintain some organization—I’ve mentioned this in comments on other posts, but I really think that there’s room for an organization or mechanism that identifies and rewards undervalued EA-related work that’s already being done at existing non-EA institutions. In the context of your post, it would further normalize the idea that plenty of good EA work happens outside of EA.
Great post/suggestions—I especially agree with targeted outreach. I want to amplify something that’s touched on but not explicitly stated:
EA is simply a lens/framework—you can apply these principles anywhere, and the impact may be significant! I work in environmental sustainability / climate change mitigation and notice that the movements closely mirror each other because:
Maximizing impact is the overarching principle (at least theoretically...)
It’s a rapidly growing and trendy field.
Until now, amateurs/volunteers/hobbyists have done a lot of the work.
In both EA and sustainability, people clamor for high-profile direct impact roles, but they’re incredibly competitive, the roles may lack the imagined leverage, and candidates spend an outsized amount of time trying to get them. It’s difficult to quantify, but many (most?) people will be more impactful applying an EA framework to non-EA-specific work. The EA movement is still nascent enough that it makes sense to encourage people to apply to EA-specific roles or start new organizations, but eventually the messaging will transition to how you can apply EA to any job you take, not how you can become an EA superstar.
I’ve been thinking about this lately, especially since I’ve started to apply to EA-specific opportunities. It does seem like EA orgs use intelligence as a main filter for hiring, which makes sense given the work (and is far better than plain-old credentialism), but I sometimes wonder if they’re filtering out valuable candidates whose strengths are cleverness, empathy, or doggedness rather than raw IQ. Most EA organizations are small, so I expect this will change as the community scales and becomes more inclusive of the full spectrum of skillsets. Note that this is a perspective from the outside looking in and is completely anecdotal. I could be mistaken.
Great idea! One way that I could see an org like this staying busy when not responding to emergencies is that it could train other more specialized organizations on how to… put together a team to respond to emergencies. This could amplify its impact and help with networking. ALERT could even train PMs to deploy to other organizations in emergency situations. A lot of institutions are already optimally positioned to do good but lack the capacity in emergencies.
My recent idea on the Future Funds’ Project Ideas post may be relevant? https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=qeeCrLXA5dJCAkjTQ
Basically, there should be some mechanism for rewarding undervalued EA-related work. My idea focused on financial rewards, but it could be extended to include some prestige. I’m not exactly sure how to confer social rewards—how people gather and socialize doesn’t necessarily correlate with achievement (or maybe it does in the EA community… I wouldn’t really know).
Peter—great idea, I’ve been doing some thinking on this as well, will probably send you an email!
Bonuses/prizes/support for critically situated or talented workers
Empowering Exceptional People
Work that advances society should be rewarded and compensated at fair market value. Unfortunately, rewards are often incommensurate, delayed, or altogether unrealized. We’d be excited to see a funding process that 1) identifies work that’s underappreciated by or insulated from the market and 2) provides incentives for workers/teams to stay put and complete said work.
EA often focuses on building new organizations to solve problems, but talented people are already situated within organizations that can foster real change. In government, academia, large legacy companies, and non-profits, incentives usually take the form of slowly accrued assets like prestige, job security, or future private sector paydays. Unfortunately, these are also the organizations tasked with addressing urgent matters such as climate change, pandemics, housing shortages, etc.
How do we incentivize important work outside of the market’s reach? How do we incentivize talented but poorly compensated workers to stay at essential but bureaucratic organizations that are optimally positioned to foster change?
Challenge Prizes: Small- to medium-sized prizes or donations for the completion of work that’s going too slowly. This provides a market signal that stresses urgency in no uncertain terms. It’s similar to moonshots but more immediate/focused/localized.
Bonuses: Set up externally funded performance bonuses for well-placed individuals at low-paying but important organizations, or external signing bonuses for obtaining high-leverage roles in these institutions.
Coddling Services: Basically, personal assistant services for identified high-performing individuals that could use more time focusing (this is similar to an idea already posted by @JanBrauner).