Several years ago, 12 self-identified women and people of color in EA wrote a collaborative article that directly addresses what it’s like to be part of groups and spaces where conversation topics like this come up. It’s worth a read: “Making discussions in EA groups inclusive.”
I’ll bite on the invitation to nominate my own content. This short piece of mine spent little time on the front page and didn’t seem to capture much attention, either positive or negative. I’m not sure why, but I’d love for the ideas in it to get a second look, especially by people who know more about the topic than I do.
Title: Leveraging labor shortages as a pathway to career impact? [note: question mark was added today to better reflect the intended vibe of the post]
Author: Ian David Moss
Why it’s good: I think it surfaces an important and rarely-discussed point that could have significant implications for norms and practices around EA community-building and career guidance if it were determined to be valid.
Hi David, thanks for your interest in our work! I need to preface this by emphasizing that the primary purpose of the quantitative model was to help us assess the relative importance of, and the promise of engaging with, different institutions implicated in various existential risk scenarios. We gave less attention to the challenge of nailing the right absolute numbers, so those should be taken with a super-extra-giant grain of salt.
With that said, the right way to understand the numbers in the model is that the estimates were about the impact over 100 years from a single one-time $100M commitment (perhaps distributed over multiple years) focusing on a single institution. The comment in the summary about $100 million/year was assuming that the funder(s) would focus on multiple institutions. Thus, the 100 basis points per billion figure is the “correct” one provided our per-institution estimates are in the right order of magnitude.
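To spell the arithmetic out (this is my gloss, not a number from the model itself, and it holds only if impact scales roughly linearly with funding):

$$\frac{100\ \text{basis points}}{\$1{,}000\text{M}} \times \$100\text{M} \approx 10\ \text{basis points over 100 years}$$

That is, under linear scaling, each one-time $100M commitment to a single institution would correspond to roughly 10 basis points of existential risk reduction, provided the per-institution estimates are in the right order of magnitude.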
We’re about to get started on our second iteration of this work and will have more capacity to devote to the cost-effectiveness estimates this time around, so hopefully that will result in less speculative outputs.
Dustin & Cari were also among the largest donors in 2020: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valleys
Wow, I didn’t see it at the time but this was really well written and documented. I’m sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.
I think it would have been very easy for Jonas to communicate the same thing in less confrontational language. E.g., “FWIW, a source of mine who seems to have some inside knowledge told me that the picture presented here is too pessimistic.” This would have addressed JP’s first point and been received very differently, I expect.
I understood the heart of the post to be in the first sentence: “what should be of greater importance to effective altruists anyway is how the impacts of all [Musk’s] various decisions are, for lack of better terms, high-variance, bordering on volatile.” While Evan doesn’t provide examples of what decisions he’s talking about, I think his point is a valid one: Musk is someone who is exceptionally powerful, increasingly interested in how he can use his power to shape the world, and seemingly operating without the kinds of epistemic guardrails that EA leaders try to operate with. This seems like an important development, if for no other reason than that Musk’s and EA’s paths seem more likely to collide than diverge as time goes on.
I agree this is an important point, but also think identifying top-ranked paths and problems is one of 80K’s core added values, so don’t want to throw out the baby with the bathwater here.
One less extreme intervention that could help would be to keep the list of top recommendations, but not rank them. Instead, 80K could list them as “particularly promising pathways” or something like that, emphasizing in the first paragraphs of text that personal fit should play a large role in career choice and that the identification of a top tier of careers is intended to help readers judge where they might fit.
Another possibility (I don’t know if you all have thought of this) would be to offer something like a wizard interface: the user inputs or checks boxes covering their strengths and weaknesses, where they’re authorized to work, core beliefs or moral preferences, etc., and the program spits back a few options along the lines of “you might want to consider careers x, y, and z—for more, sign up for a session with one of our advisors.” You could then promote that as the primary draw for the website, more so than the career guides. Just a thought!
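For concreteness, here’s a minimal sketch of the matching logic I have in mind (everything here is invented for illustration: the career paths, the strength categories, and the scoring rule are all hypothetical, not based on 80K’s actual data or methodology):

```python
# Toy sketch of a career-recommendation "wizard" (illustrative only; all
# career data and the scoring rule are made up, not 80K's methodology).
from dataclasses import dataclass


@dataclass
class CareerPath:
    name: str
    useful_strengths: set[str]  # strengths that suggest good personal fit
    work_regions: set[str]      # where someone can realistically pursue it


def recommend(strengths: set[str], region: str,
              paths: list[CareerPath], top_n: int = 3) -> list[str]:
    """Filter by where the user can work, then rank the remaining paths
    by overlap with the user's self-reported strengths."""
    eligible = [p for p in paths if region in p.work_regions]
    ranked = sorted(eligible,
                    key=lambda p: len(p.useful_strengths & strengths),
                    reverse=True)
    return [p.name for p in ranked[:top_n]]


if __name__ == "__main__":
    paths = [
        CareerPath("AI policy", {"writing", "networking", "research"},
                   {"US", "UK"}),
        CareerPath("Biosecurity research", {"biology", "research"},
                   {"US", "EU"}),
        CareerPath("Nonprofit operations", {"organization", "communication"},
                   {"US", "UK", "EU"}),
    ]
    # "You might want to consider careers x, y, and z..."
    print(recommend({"writing", "research"}, "US", paths))
```

The real version would obviously need richer inputs (moral preferences, experience level, etc.) and more careful weighting, but even a simple filter-and-rank flow like this could be enough to route people toward an advising call.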
I was also going to say that it’s pretty confusing that this list is not the same as either the top problem areas listed elsewhere on the site or the top-priority career paths, although it seems derived from the latter. Maybe there are some version control issues here?
I feel like this proposal conflates two ideas that are not necessarily that related:
Lots of people who want to do good in the world aren’t easily able to earn-to-give or do direct work at an EA organization.
Starting altruistically-motivated independent projects is plausibly good for the world.
I agree with both of these premises, but focusing on their intersection feels pretty narrow and impact-limiting to me. As an example of an alternative way of addressing the first problem, you might instead (or in addition) consider having on guests who work in high(ish)-impact jobs where there are currently labor shortages.
Overall, I think it would be better if you picked which of the two premises you’re most excited about and then went all-in on making the best podcast you could focused on that one.
Hmm, I guess I’m more optimistic about 3 than you are. Billionaires are both very competitive and often care a lot about how they’re perceived, and if a scaled-up and properly framed version of this evaluation were to gain sufficient currency (e.g. via the billionaires who score well on it), you might well see at least some incremental movement. I’d put the chances of that around 5%.
I thought this was great! With a good illustrator and some decent connections I think you could totally get it published as a picture book. A couple of feedback notes:
The transition from helping people in Johnny’s life to helping people far away via the internet felt a bit forced. If Johnny is supposed to be a student in primary school like the intended reader, it wasn’t clear where he gets his donation budget from and I wonder how relatable that would be (a donation of $25 is mentioned, which I guess could come from allowance/gift money, but it’s implied that it’s only one of many donations). It might be better and more realistic to depict Johnny fundraising for these charities from his parents and other people in his life or community.
One great thing about the first part is that you see the impact of Johnny’s help on his teacher, the bullied kid, etc., whereas that becomes more obscure once he transitions to the internet. I wonder if you could fix that by temporarily switching the focus of the story to the person who got their eyes fixed because of Johnny, showing how meaningful it was for them. I think it’s really critical in a story like this to demonstrate that far-away people are just as real and lead just as worthwhile lives as those close to us.
I’m not aware of anyone working on it really seriously!
It’s possible there’s a more comprehensive writeup somewhere, but I can offer two data points regarding the removal of $30B in pandemic preparedness funding that was originally part of Biden’s Build Back Better initiative (which ultimately evolved into the Inflation Reduction Act):
I had an opportunity to speak earlier this summer with a former senior official in the Biden administration who was one of the main liaisons between the White House and Congress in 2021 when these negotiations were taking place. According to this person, they couldn’t fight effectively for the pandemic preparedness funding because it was not something that representatives’ constituents were demanding.
During his presentation at EA Global DC a few weeks ago, Gabe Bankman-Fried from Guarding Against Pandemics said that Democratic leaders in Congress had polled Senators and Representatives about their top three issues as Build Back Better was being negotiated in order to get a sense for what could be cut without incurring political backlash. Apparently few to no members named pandemic preparedness as one of their top three. (I’m paraphrasing from memory here, so may have gotten a detail or two wrong.)
The obvious takeaway here is that there wasn’t enough attention to motivating grassroots support for this funding, but to be clear I don’t think that is always the bottleneck—it just seems to have been in this particular case.
I also think it’s true that if the administration had wanted to, it probably could have put a bigger thumb on the scale to pressure Congressional leaders to keep the funding. Which suggests that the pro-preparedness lobby was well-connected enough within the administration to get the funding on the agenda, but not powerful enough to protect it from competing interests.
I have some sympathy for the second view, although I’m skeptical that sane advisors have significant real impact. I’d love a way to test it as decisively as we’ve tested the “government (in its current form) responds appropriately to warning shots” hypotheses.
On my own models, the “don’t worry, people will wake up as the cliff-edge comes more clearly into view” hypothesis has quite a lot of work to do. In particular, I don’t think it’s a very defensible position in isolation anymore....if you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.
(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form “surely my political party will become popular, claim power, and implement policies I like”.)
I think the second view is basically correct for policy in general, although I don’t have a strong view yet of how it applies to AI governance specifically. One thing that’s become clear to me as I’ve gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that’s possible in those settings. The more optimistic among us tend to get too excited about isolated interventions (e.g., electing a committed EA to Congress, getting a voting reform passed in one jurisdiction) that, even if successful, would only address a small part of the problem. On the other hand, skeptics see the inherent complexity and failures of past efforts and conclude that policy/advocacy/improving institutions is fundamentally hopeless, neglecting to appreciate that critical decisions by governments are, at the end of the day, made by real people with friends and colleagues and reading habits just like anyone else.
Viewed through that lens, my opinion, and one that I think is shared by people with experience in this domain, is that the reason we have not seen more success influencing large-scale bureaucratic systems is that we have been under-resourcing this work as a community. By “under-resourcing it” I don’t just mean in terms of money, because as the Flynn campaign showed us it’s easy to throw millions of dollars at a solution that hits rapidly diminishing returns. I mean that we have not been investing enough in strategic clarity, in a broad diversity of approaches that complement one another and collectively increase the chances of success, and in the patience to see those approaches through. In the policy world outside of EA, activists consider it normal to have a 6-10 year timeline to get significant legislation or reforms enacted, with the full expectation that there will be many failed efforts along the way. But reforms do happen—just look at the success of the YIMBY movement, which Matt Yglesias wrote about today, or the recent legislation allowing Medicare to negotiate prescription drug prices, which was in no small part the result of an 8-year, $100M campaign by Arnold Ventures.
Progress in the institutional sphere is not linear. It is indeed disappointing that the United States was not able to get a pandemic preparedness bill passed in the wake of COVID, or that the NIH is still funding ill-advised research. But we should not mistake this for evidence that we’ve been able to do “approximately nothing.” The overall trend of EA and longtermist ideas being taken seriously at increasingly senior levels over the past couple of years is strongly positive. Some of the diverse contributing factors include the launch of the Future Fund and the emergence of SBF as a key political donor; the publication of Will’s book and the resulting book tour; the networking among high-placed government officials by EA-focused or -influenced organizations such as Open Philanthropy, CSET, CLTR, the Simon Institute, Metaculus, fp21, Schmidt Futures, and more; and the natural emergence of the initial cohort of EA leaders into the middle third of their careers. Just recently, I had one senior person tell me that Longview Philanthropy’s hiring of Carl Robichaud, a nuclear security grantmaker with 20 years of experience, is what got them to pay attention to EA for the first time. Each of these, by itself, is not enough to make a difference, and judged on its own terms will look like a failure. But all of it combined is what creates the possibility that more can be accomplished the next time around, and in all the time in between.
Amazing resource, thanks so much! I’ll add that the Effective Institutions Project is in the process of setting up an innovation fund to support initiatives like these, and we are planning to make our first recommendations and disbursements later this year. So if anyone’s interested in supporting this work generally but doesn’t have the time/interest to do their own vetting, let us know and we can get you set up as a participant in our pooled fund (you can reach me via PM on the Forum or write to email@example.com).
Also worth noting that you can be influential on Twitter without necessarily having a large audience (e.g., by interacting strategically with elites and frequently enough that they get to know you).
It seems worth noting that you can get famous on Twitter for tweeting, or you can happen to be famous on Twitter as a result of becoming famous some other way. The two pathways imply very different promotional strategies and theories of impact. But my sense is that it’s pretty hard to grow an audience on Twitter through tweeting alone, no matter how good your content is.
He seems like a natural fit for the American economist-public intellectual cluster (Yglesias/Cowen/WaitButWhy/etc.) that’s already pretty sympathetic to EA. The Twitter content is basically “EA in depth,” but it retains the normie, socially responsible brand they’ve come to expect and are comfortable with. Max Roser would be another obvious candidate to promote Peter. I’d start there and see where it goes.