Amazing resource, thanks so much! I’ll add that the Effective Institutions Project is in the process of setting up an innovation fund to support initiatives like these, and we are planning to make our first recommendations and disbursements later this year. So if anyone’s interested in supporting this work generally but doesn’t have the time/interest to do their own vetting, let us know and we can get you set up as a participant in our pooled fund (you can reach me via PM on the Forum or write info@effectiveinstitutionsproject.org).
Also worth noting that you can be influential on Twitter without necessarily having a large audience (e.g., by interacting strategically with elites and frequently enough that they get to know you).
It seems worth noting that you can get famous on Twitter for tweeting, or you can happen to be famous on Twitter as a result of becoming famous some other way. The two pathways imply very different promotional strategies and theories of impact. But my sense is that it’s pretty hard to grow an audience on Twitter through tweeting alone, no matter how good your content is.
He seems like a natural fit for the American economist-public intellectual cluster (Yglesias/Cowen/WaitButWhy/etc.) that’s already pretty sympathetic to EA. The Twitter content is basically “EA in depth,” but retaining the normie socially responsible brand they’ve come to expect and are comfortable with. Max Roser would be another obvious candidate to promote Peter. I’d start there and see where it goes.
I’m curious how this applies to infohazards specifically. Without actually spilling any infohazards, could you comment on how one could do a good job applying this model in such a situation?
I’m a little surprised that Rob Wiblin doesn’t have more followers, but he’s already high-profile enough that it wouldn’t take that big of a push to get him into another tier. He’s also the most logical person to leverage 80K’s broader content on social media given his existing profile and activity. (ETA: although Habiba could do this too, per your suggestion.)
Amanda Askell consistently has thoughtful and underrated takes on Twitter.
Peter Wildeford is an A+ follow on Twitter IMHO. I think it’s realistic to get him a bunch more followers if that’s something he wanted.
I assume you’re being modest in not suggesting “Nathan Young,” so I’ll do it for you.
Do we know that he doesn’t already have a social media manager? He’s had a lot of help to promote the book.
In light of the two-factor voting, I’m unclear what you mean by “upvote.” I would suggest using the “agree/disagree” box as the scoring, with “upvote/downvote” reserved for your wisdom in suggesting the person and/or the analysis you provided. But I think you should clarify which one you intend to actually pay attention to.
I think raising one’s own kids is often significantly more rewarding than raising adopted kids, just because one’s own kids will share so much more of one’s cognitive traits, personality traits, quirks, etc., that you can empathize better with them.
I’m extremely skeptical of this claim. Many parents I know with multiple biological children report that their children have immensely different personalities, and it seems intuitively obvious that any genetically driven correlations of such traits between child and parent will be overwhelmed by statistical noise in a family with an n of, say, 3 or fewer children. Speaking as someone with two biological children, IMHO almost all of the rewarding aspects of being a parent come from the experience of watching them grow up on a daily basis and directly contributing to that growth, not from picking out physical or other characteristics that happen to remind me of myself.
Haha, well it would depend a lot on the specifics but we’d probably at least be up for having a conversation about it :)
Maybe indirectly? Addressing talent gaps within the EA community isn’t a primary focus of ours, but it does seem that our outreach is helping to increase the pool of mid-career and senior people out in the world who take EA seriously.
Effective Institutions Project here. As of now I’d say our number is more like $150-200K, assuming we’re talking about an annual commitment. The number is lower because our networks give us access to a large talent pool and I’m fairly optimistic that we can fill openings easily once we have the budget for them.
Thanks for the response!
I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside.
That’s fair, and I should also be clear that I’m less familiar with LTFF’s grantmaking than with that of some other funders in the EA universe.
It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
Oh, I totally agree that the kind of risk analysis I mentioned is not costless, and for EA Funds in particular it seems like too much to expect. My main point is that in its absence, defaulting to an extreme version of the precautionary principle is not necessarily the optimal strategy.
Overall, I agree that judging policy/institution-focused projects primarily based on upside makes sense.
I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.
To back this up a bit, let’s take a closer look at the risk factors Asya cited in the comment above.
Pushing policies that are harmful. In any institutional context where policy decisions matter, there is a huge ecosystem of existing players, ranging from industry lobbyists to funders to media outlets to think tanks to agency staff to policymakers themselves, who are also trying to influence the outcomes of legislation/regulation/etc. in their preferred direction. As a result, getting a policy actually enacted is inherently quite hard and nearly impossible without achieving buy-in from a diverse range of stakeholders. While that process can be frustrating and often results in watering down really good ideas to something less inspiring, it is actually quite good for mitigating the downside risks from bad policies! It’s understandable to think of such a volatile mix of influences as scary and something to be avoided, but we should also consider the possibility that it is a productive way to stress-test ideas coming out of EA/longtermist communities by exposing them to audiences with different interests and perspectives. After all, these interests do, at least in part, reflect the landscape of competing motivations and goals in the public more generally, and thus are often relevant to whether a policy idea will be successful or not.
Making key issues partisan. My view is that this is much more likely to happen by way of involvement in electoral politics than traditional policy-advocacy work. Importantly, though, we just had a high-profile test of this idea in the form of Carrick Flynn’s bid for Congress. By the logic of EA grantmakers worried about partisan politicization, my sense is that the Flynn campaign was one of the riskiest things this community has ever taken on (and remember, we only saw the primary; if he had won and run in the general, many Republican politicians’ and campaign strategists’ first exposure to EA and longtermism would have been by way of seeing a Democrat supported by two of the largest Democratic donors running on EA themes in a competitive race against one of their own). And yet, as it turned out, it barely resulted in longtermism being politicized at all. So while the jury is still out, perhaps a reasonable working hypothesis based on what we’ve seen thus far is that “try to do good and help people” is just not a very polarizing POV for most people, and therefore we should stress out about it a little less.
Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with. I think this one is pretty easily avoided. If the person leading a policy initiative is any of those things, they probably aren’t going to make much progress, and their work thus won’t cause much harm (other than wasting the grantmaker’s money). Furthermore, the increasing media coverage of longtermism and the fact that longtermism has credible allies in society (multiple billionaires, an increasing number of public intellectuals, etc.) both significantly mitigate the concern expressed here, as those factors are much more likely to influence a broad set of policymakers’ opinions and actions.
“Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project. This seems to be more of a general concern about grantmaking to early-stage organizations and doesn’t strike me as unique to the policy space at all. If anything, it seems to rest on a questionable premise that there is only one channel for communicating with policymakers and that only one organization or individual can occupy that channel at a time. As I stated earlier, policymakers already have huge ecosystems of people trying to influence policy outcomes; another entrant into the mix isn’t going to take up much space at all. But also, policymakers themselves are part of a huge bureaucratic apparatus, and there are many, many potential levers and points of access that can’t all possibly be covered by a single organization. I do agree that coordination is important and desirable, but we shouldn’t let that in itself be a barrier to policy entrepreneurship, IMHO.
To be clear, I do think these risks are all real and worth thinking about! But to my reasonably well-informed understanding of at least three EA grantmakers’ processes, most of these projects are not judged by way of a sober risk analysis that clearly articulates specific threat models, assigns probabilities to each, and weighs the resulting estimates of harm against a similarly detailed model of the potential benefits. Instead, the risks are assessed on a holistic and qualitative basis, with the result that many things that seem potentially risky are not invested in even if the upside of them working out could really be quite valuable. Furthermore, the risks of not acting are almost never assessed—if you aren’t trying to get the policymaker’s attention tomorrow, who’s going to get their ear instead, and how likely might it be that it’s someone you’d really prefer they didn’t listen to?
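To make concrete what that kind of analysis could look like, here is a toy sketch in Python. Every threat model, probability, and dollar figure below is hypothetical and purely illustrative; the point is only the structure: name the specific threats, assign probabilities, and weigh the expected harms against the expected benefits, including the cost of not acting.

```python
# A toy sketch of the quantified risk analysis described above.
# All threat models, probabilities, and dollar values are hypothetical,
# chosen only to illustrate the structure of the comparison.

threat_models = [
    # (description, probability of occurring, estimated harm if it occurs)
    ("policy is enacted in a harmful form", 0.05, 2_000_000),
    ("issue becomes partisan", 0.02, 5_000_000),
    ("advocates damage the community's reputation", 0.10, 500_000),
]

benefit_scenarios = [
    # (description, probability, estimated benefit if realized)
    ("policy is enacted roughly as intended", 0.15, 50_000_000),
    ("partial win: issue gains credible allies", 0.30, 5_000_000),
]

# The counterfactual risk of NOT acting: someone you'd prefer the
# policymaker didn't listen to gets their ear instead. Treating the
# grant as fully averting this risk is itself a simplification.
inaction_risks = [
    ("a less careful actor fills the vacuum", 0.20, 3_000_000),
]

expected_harm = sum(p * harm for _, p, harm in threat_models)
expected_benefit = sum(p * benefit for _, p, benefit in benefit_scenarios)
avoided_harm = sum(p * harm for _, p, harm in inaction_risks)

net = expected_benefit - expected_harm + avoided_harm
print(f"Expected harm if funded:     ${expected_harm:,.0f}")
print(f"Expected benefit if funded:  ${expected_benefit:,.0f}")
print(f"Avoided harm from inaction:  ${avoided_harm:,.0f}")
print(f"Net expected value of grant: ${net:,.0f}")
```

Even a back-of-the-envelope version like this forces the grantmaker to articulate which threat models are actually doing the work, and makes it visible when a project that feels risky holistically in fact has an expected upside that dwarfs its expected harm.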
While there are always going to be applications that are not worth funding in any grantmaking process, I think when it comes to policy and related work we are too ready to let perfect be the enemy of the good.
[1] Important to note that the observations here are most relevant to policymaking in Western democracies; the considerations in other contexts are very different.
Re: “Why haven’t I heard of OR?”, I think your comments on the fragmentation and branding challenges are extremely on point. Last year Effective Institutions Project did a scoping exercise looking at different fields and academic disciplines that intersect with institutional decision-making, and it was amazing to see the variety of names and frames for what is ultimately a collection of pretty similar ideas. With that said, I think the directions that have been explored under the OR banner are particularly interesting and impressive, and am really glad to have someone in the community who knows that field well!
I think the second view is basically correct for policy in general, although I don’t have a strong view yet of how it applies to AI governance specifically. One thing that’s become clear to me as I’ve gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that’s possible in those settings. The more optimistic among us tend to get too excited about isolated interventions (e.g., electing a committed EA to Congress, getting a voting reform passed in one jurisdiction) that, even if successful, would only address a small part of the problem. On the other hand, skeptics see the inherent complexity and failures of past efforts and conclude that policy/advocacy/improving institutions is fundamentally hopeless, neglecting to appreciate that critical decisions by governments are, at the end of the day, made by real people with friends and colleagues and reading habits just like anyone else.
Viewed through that lens, my opinion, and one that I think you will find is shared by people with experience in this domain, is that the reason we have not seen more success influencing large-scale bureaucratic systems is that we have been under-resourcing it as a community. By “under-resourcing it” I don’t just mean in terms of money, because as the Flynn campaign showed us it’s easy to throw millions of dollars at a solution that hits rapidly diminishing returns. I mean that we have not been investing enough in strategic clarity, in a broad diversity of approaches that complement one another and collectively increase the chances of success, and in the patience to see those approaches through. In the policy world outside of EA, activists consider it normal to have a 6-10 year timeline to get significant legislation or reforms enacted, with the full expectation that there will be many failed efforts along the way. But reforms do happen: just look at the success of the YIMBY movement, which Matt Yglesias wrote about today, or the recent legislation allowing Medicare to negotiate prescription drug prices, which was in no small part the result of an 8-year, $100M campaign by Arnold Ventures.
Progress in the institutional sphere is not linear. It is indeed disappointing that the United States was not able to get a pandemic preparedness bill passed in the wake of COVID, or that the NIH is still funding ill-advised research. But we should not mistake this for the claim that we’ve been able to do “approximately nothing.” The overall trend for EA and longtermist ideas being taken seriously at increasingly senior levels over the past couple of years is strongly positive. The diverse contributing factors include the launch of the Future Fund and the emergence of SBF as a key political donor; the publication of Will’s book and the resulting book tour; the networking among high-placed government officials by EA-focused or -influenced organizations such as Open Philanthropy, CSET, CLTR, the Simon Institute, Metaculus, fp21, Schmidt Futures, and more; and the natural emergence of the initial cohort of EA leaders into the middle third of their careers. Just recently, I had one senior person tell me that Longview Philanthropy’s hiring of Carl Robichaud, a nuclear security grantmaker with 20 years of experience, is what got them to pay attention to EA for the first time. Each of these, by itself, is not enough to make a difference, and judged on its own terms will look like a failure. But all of them combined are what create the possibility that more can be accomplished the next time around, and at every point in between.