Thanks for the post! Much of it resonated with me.
A few quick thoughts:
1. I could see some reads of this being something like, “EA researchers are doing a bad job and should feel bad.” I wouldn’t agree with this (mainly the latter bit) and assume the author wouldn’t either. Lots of EAs I know seem to be doing about the best that they know of and have a lot of challenges they are working to overcome.
2. I’ve had some similar frustrations over the last few years. I think that there is a fair bit of obvious cause prioritization research to be done that’s getting relatively little attention. I’m not as confident as you seem to be about this, but agree it seems to be an issue.
3. I would categorize many of the issues as systemic, cutting across different sectors. Significant progress in these areas would require bold efforts with significant human and financial capital, and such clusters of talent and money are rare. Right now the funding situation is still quite messy for ventures outside the core OpenPhil cause areas.
I could see an academic initiative taking some of them on, but that would be a significant undertaking from at least one senior academic who may have to take a major risk to do so. Right now we have a few senior academics who led/created the existing main academic/EA clusters, and these projects were very tied to the circumstances of the senior people.
If you want a job in Academia, it’s risky to do things outside the common tracks, and if you want one outside of Academia, it’s often riskier. One in-between option is starting new small nonprofits, though this is also a significant undertaking. The funding situation for small ongoing efforts is currently quite messy; these are often too small for OpenPhil but too big for EA Funds.
4. One reason funding is messy is that it’s thought groups doing a bad job at these topics could be net negative. Thus, few people are trusted to lead important research in new areas that are core to EA. This could probably be improved with significantly more vetting, but that takes a lot of time. Now that I think about it, OpenPhil has very intensive vetting for their hires, and these are just hires; after they are hired they get managers and can be closely worked with. If a funder funds a totally new research initiative, it will have vastly less control over (or understanding of) it than organizations have over their employees. Right now we don’t have organizations that can apply near hiring-level vetting to funding small initiatives; perhaps we should.
5. We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding. Right now a whole lot of great ones are focused on AI (this often requires many years of grad school or training) and Animals. My impression is that on the margin, moving some people from these fields to other fields (cause prioritization or experimental new things) could be good, though it would be a big change for the individuals involved.
6. It seems really difficult to convince committed researchers to change fields. They have often taken years to develop expertise, connections, and citations, so changing that completely is very costly. An alternative is to focus on young, new people, but those people take a while to mature as researchers.
In EA we just don’t have many “great generic researchers” who we can reassign from one topic to something very different on short notice. More of this seems great to me, but it’s tricky to set up and attract talent for.
7. I think it’s possible that older/experienced researchers don’t want to change careers, and new ones aren’t trusted with funding. Looking back, I’m quite happy that Elie and Holden started GiveWell without feeling like they needed to work in an existing org for 4 years first. I’m not sure what to do here, but would like to see more bets on smart young people.
8. I think there are several interesting “gaps” in EA and am sure that most others would agree. Solving them is quite challenging; it could require a mix of coordination, effort, networking, and thinking. I’d love to see some senior people try to do work like this full-time. In general I’d love to see more “EA researcher/funding coordination”; that seems like the root of a lot of our problems.
9. I think Rethink Priorities has a pretty great model and could be well suited to these kinds of problems. My impression is funding has been a bottleneck for them. I think Peter may respond to this himself, so he can speak to it directly. If there are funders out there who are excited to fund any of the kinds of work described in this article, I’d suggest reaching out to Rethink Priorities and seeing if they could facilitate that. They would be my best bet for that kind of arrangement at the moment.
10. Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I’m working on), but they will take some time, and obviously aren’t direct work on the issue.
Thank you Ozzie. Very very helpful. To respond:
1. EA researchers are doing a great job. Much kudos to them. Fully agree with you on that. I think this is mostly a coordination issue.
3. Agree a messy funding situation is a problem. Not so sure there is that big a gap between groups funded by EA Funds and groups funded by OpenPhil.
4. Maybe we should worry less about “groups doing a bad job at these topics could be net negative”. I am not a big donor so find this hard to judge well. Also, I am all for funding well-evidenced projects (see my skepticism below about funding “smart young people”). But I am not convinced that we should be that worried that research on this will lead to harm, except in a few very specific cases. Poor research will likely just be ignored. Also, most foundations vet staff more carefully than they vet projects they fund.
5-6. Agree research leaders are rare (hopefully this post inspires them). Disagree that junior researchers are rare. You said: “We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding” and “It seems really difficult to convince committed researchers to change fields”. Very good points. That said, I think Rethink Priorities have been positively surprised at how many very high quality applicants they have had for research roles. So maybe the junior researchers are there. My hope is that this post inspires some people to set up more organisations working in this space.
7. Not so sure about “more bets on smart young people” — not sure I agree. I tend to prefer giving to or hiring people with experience or evidence of traction. But I don’t have a strong view and would change my mind if there was good evidence on this. There might also be ways to test less experienced people before funding them, like through a “Charity Entrepreneurship”-type fellowship scheme.
8. I’d love to hear more of your views on what “EA researcher/funding coordination” looks like, as I could maybe help make it happen. I am a Trustee of EA London. EA London is already doing a lot of global coordination of EA work (especially under COVID). I have been thinking and talking to David (EA London staff) about scaling this up, hiring a second person, etc. If you have a clear vision of what this might look like or what it could add, I would consider pushing more on this.
9. Rethink Priorities is OK. I have donated to them in the past but might stop, as I am not sure they are making much headway on the issues listed here. Peter said: “I think we definitely do ‘Beyond speculation (practical longtermism)’ … So far we’ve mainly been favoring within-cause intervention prioritization.”
10. Good luck with your work on forecasting efforts.
Thanks for the response!
Quick responses:
4. I haven’t investigated this much myself; I was relaying what I know from donors (I don’t donate myself). I’ve heard a few times that OpenPhil and some of the donors behind EA Funds are quite worried about negative effects. My impression is that the reason for some of this is simple, but there are more complicated reasons that go into the thinking here that haven’t been written up fully. I think Oliver Habryka has a bunch of views here.
5-6. I didn’t mean to imply that junior researchers are “rare”, just that they are limited in number (which is obvious). My impression is that there’s currently a bottleneck in giving very junior researchers experience and reputability, which is unfortunate. This is evidenced by Rethink’s round. I think there may be a fair amount of variation among these researchers though; only a few are really the kinds who could pioneer a new area (this requires a lot of skill and taking on unusual career risks).
7. I’m also really unsure about this. Though to be fair, I’m unsure about a lot of things. To be clear though, I think that there are probably rather few people this would be a good fit for.
I’m really curious just how impressive the original EA founders were compared to all the new EAs. There are way more young EAs now than there were in the early days, so theoretically we should expect that some will be in many ways more competent than the original EA founders, except in experience, of course.
Part of me wonders: if we don’t see a few obvious candidates for young EA researchers as influential as the founders were, in the next few years, maybe something is going quite wrong. My guess is that we should aim to resemble other groups that are very meritocratic in terms of general leadership and research.
8. Happy to discuss in person; my thoughts here would take a while to organize and write up.
The very simple thing here is that to me, we really could use “funding work” of all types. OpenPhil still employs a very limited headcount given their resources, and EA Funds is mostly made up of volunteers. Distributing money well is a lot of work, and there currently aren’t many resources going into this.
One big challenge is that not many people are trusted to do this work, in part because of the expected negative impacts of funding bad things. So there’s a small group trusted to do this work, and a smaller subset of them interested in spending time doing it.
I would love to see more groups help coordinate, especially if they could be accepted by the major donors and community. I think there’s a high bar here, but if you can be over it, it can be very valuable.
I’d also recommend talking to the team at EA Funds, which is currently growing.
9. This could be worth discussing further. RP is still quite early and developing. If you have suggestions about how it could improve, I’d be excited to discuss them. I could imagine us helping change it in positive directions going forward.
10. Thanks!
I think that there is a fair bit of obvious cause prioritization research to be done that’s getting relatively little attention.
Do you have a list of the top research areas you’d like to see that aren’t getting done?
Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I’m working on)
I agree. Forecasting is a common good to many causes, so you’d expect it not to be neglected. But in practice, it seems the only people working on forecasting are EA or EA-adjacent (I’d count Tetlock as adjacent). Recently I’ve had many empirical questions about the future that I thought could use good forecasts, e.g., for this essay I wrote, I made some Metaculus questions and used those to help inform the essay. It would be really helpful if it were easier to get good forecasts.
Do you have a list of the top research areas you’d like to see that aren’t getting done?
Oh boy. I’ve had a bunch of things in the back of my mind. Some of this is kind of personal (specific to my own high-level beliefs, and wouldn’t apply to many others).

I’m a longtermist and believe that most of the expected value will happen in the far future. Because of that, many of the existing global poverty, animal welfare, and criminal justice reform interventions don’t seem particularly exciting to me. I’m unsure what to think of AI Risk, but “unsure” is much, much better than “seems highly unlikely.” I think it’s safe to have some great people here, but I currently get the impression that a huge number of EAs are getting into this field, and this seems like too many to me on the margin.
What I’m getting to is: when you exclude most of poverty, animal welfare, criminal justice reform, and AI, there’s not a huge amount getting worked on in EA at the moment.
I don’t quite buy the argument that the only long-term interventions worth considering are ones that address X-risks in the next ~30 years, nor the argument that the only interventions worth considering are ones that address X-risks at all. I think it’s fairly likely (>20%) that sentient life will survive for at least billions of years, and that there may be a fair amount of lock-in, so changing the trajectory of things could be great.
I like the idea of building “resilience” instead of going after specific causes. For instance, if we spend all of our attention on bio risks, AI risks, and nuclear risks, it’s possible that something else weird will cause catastrophe in 15 years. So experimenting with broad interventions that seem “good no matter what” seems interesting. For example, if we could have effective government infrastructure, or general disaster response, or a more powerful EA movement, those would all be generally useful things.
I like Phil’s work (above comment) and think it should get more attention, quickly. Figuring out and implementing an actual plan that optimizes for the long term future seems like a ton of work to me.
I really would like to see more “weird stuff.” 10 years ago many of the original EA ideas seemed bizarre; like treating AI risk as highly important. I would hope that with 10-100x as many people, we’d have another few multiples of weird but exciting ideas. I’m seeing a few of them now but would like more.
Better estimation, high-level investigation, prioritization, data infrastructure, etc. seem great to me.
Maybe one way to put it would be something like, imagine clusters of ideas as unique as those of Center on Long-Term Risk, Qualia Computing, the Center for Election Science, etc. I want to see a lot more clusters like these.
Some quick ideas:
- Political action for all long term things still seems very neglected and new to me, as mentioned in this post.
- A lot of the prioritization work, even of, “Let’s just estimate a lot of things to get expected values.”
- I’d like to see research in ways AI could make the world much better/safer; the most exciting part to me is how it could help us reason in better ways, pre-AGI, and what that could lead to.
- Most EA organizations wouldn’t upset anyone (are net positives for everyone), but many things we may want would. For instance, political action, or potential action to prevent bio or AI companies from doing specific things. I could imagine groups like “slightly-secretive strategic agencies” that go around doing valuable things having a lot of possible benefit (but of course significant downsides if done poorly).
- This is close to me, but I’m curious if open source technologies could be exciting philanthropic investments. I think the donation to Roam may have gone extremely well, and I am continually impressed and surprised by how little money there is in incredible but very early or experimental efforts online. Ideally this kind of work would include getting lots of money from non-EAs.
- In general, trying to encourage EA-style thinking in non-EA ventures could be great. There’s tons of philanthropic money being spent outside EA. The top few tech billionaires just dramatically increased their net worths in the last few months, and many will likely spend those eventually.
- I really care about growing the size and improving the average experience of the EA community. I think there’s a ton of work to be done here of many shapes and forms.
- I think many important problems that feel like they should be done in Academia aren’t, due to various systematic reasons. If we could produce researchers who do “the useful things, very well”, either in Academia or outside, that could be valuable, even in seemingly unrelated fields like anthropology, political science, or targeted medicine (fixing RSI, for instance). “Elephant and the Brain” style work comes to mind.
- On that note, having 1-2 community members do nothing but work on RSI, back, and related physical health problems for EAs/rationalists could be highly worthwhile at this point. We already have a few specific psychologists and a productivity coach. Maybe eventually there could be 10-40+ people doing a mini-industry of services tailored to these communities.
- Unlikely idea: insect farms. Breed and experiment with insects or other small animals in ways that seem to produce the most well-being for the lowest cost. Almost definitely not that productive, but good for diversification, and possibly reasonably cheap to try for a few years.
- Much better EA funding infrastructure, in part for long-term funding.
- Investigation and action to reform/improve the UN and other global leadership structures.
- I’m curious about using extensive Facebook ads, memes, Youtube sponsorship, and similar, to both encourage Effective Altruism and to encourage ideas we think are net valuable. These things can be highly scalable.
Also, I’d be curious to get the suggestions of yourself and others here.
A lot of the prioritization work, even of, “Let’s just estimate a lot of things to get expected values.”
I would like to see more of this, and I would also like to see people be less uniformly critical of this sort of work. I’ve written a few things like this, and I inevitably get a few comments along the lines of, “This estimate isn’t actually accurate, you can’t know the true expected value, this research is a waste of time.” IME I get much more strongly negative comments when I write anything quantitative than when I don’t. But I might just be noticing that type of criticism more than other types.
Much better EA funding infrastructure, in part for long-term funding.
The rate of institutional value drift is something like 0.5%. Halving this would be extremely beneficial for anyone who wants to invest their money for future generations. It seems likely that if we put more effort into designing stable institutions, we could create EA investment funds that last for much longer.
The rate of individual value drift is even higher, something around 5%. That’s really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?
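To get a rough sense of scale (my own arithmetic, and assuming these are annual rates): treating value drift as exponential decay at rate v gives a half-life of

$$t_{1/2} = \frac{\ln 2}{v}: \qquad \frac{\ln 2}{0.005} \approx 139 \text{ years (institutions)}, \qquad \frac{\ln 2}{0.05} \approx 14 \text{ years (individuals)}$$

So an institution drifting at 0.5%/year keeps its values for generations, while at 5%/year half of individuals have drifted away within roughly 14 years.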
Some other neglected problems (with some shameless references to my own writings):
I like GPI’s research agenda. Right now there are only about half a dozen people working on these problems.
What is the correct “philosophy of priors”? The choice of prior distribution heavily affects how we should behave in areas of high uncertainty. For example, see Will MacAskill’s post and Toby Ord’s reply. (edit: see also this relevant post)
With a simple model, I calculated that improving our estimate of the discount rate could matter more than any particular cause. The rationale is that we should spend our resources at some optimal rate, which is largely determined by the philanthropic discount rate. Moving our spending schedule slightly closer to the optimal rate substantially increases expected utility. This is just based on a simple model, but I’d like to see more work on this.
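As a toy illustration of this dynamic (a minimal sketch of my own, not the model from the essay; it assumes log utility, a constant investment return, a constant discount rate, and a fixed annual spending fraction, all with made-up numbers):

```python
import math

def discounted_utility(spend_frac, ret=0.05, disc=0.02, years=300):
    """Total discounted log-utility from spending a fixed fraction of capital each year."""
    capital, total = 1.0, 0.0
    for t in range(years):
        spending = spend_frac * capital
        total += math.log(spending) / (1 + disc) ** t   # discount each year's utility
        capital = (capital - spending) * (1 + ret)      # invest what remains
    return total

# Sweep candidate spending fractions; with log utility the optimum lands near
# the discount rate, showing how strongly that one parameter drives the schedule.
for frac in [0.005, 0.01, 0.02, 0.05, 0.10]:
    print(f"spend {frac:.1%}/yr -> discounted utility {discounted_utility(frac):.1f}")
```

Even in this crude version, a spending fraction far from the optimum loses a lot of utility, which is the essay’s point as I read it.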
In the conclusion of the same essay, I gave a list of relevant ideas for potential top causes with my rough guesses on their importance/neglectedness/tractability. The ideas not mentioned so far are: improving the ability of individuals to delegate their income to value-stable institutions; and making expropriation and value drift less threatening by spreading altruistic funds more evenly across actors and countries.
IMO there are some relatively straightforward ways that EAs could invest better, which I wrote about here. Improving EAs’ investments could be pretty valuable, especially for “give later”-leaning EAs.
Reducing the long-term probability of extinction, rather than just the probability over the next few decades. (I’m currently writing something about this.)
If you accept that improving the long-term value of the future is more important than reducing x-risk, is there anything you should do now, or should you mainly invest to give later? Does movement building count as investing? What about cause prioritization research? When is it better to work on movement building/cause prioritization rather than simply investing your money in financial assets?
IME I get much more strongly negative comments when I write anything quantitative than when I don’t. But I might just be noticing that type of criticism more than other types.
I haven’t seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kinds of estimates at all. At the extreme end are people who don’t even make clear statements; they just speak in vague metaphors or business jargon that are easy to defend but don’t actually convey any information. Needless to say, I think this is an anti-pattern. I’d be curious if anyone reading this would argue otherwise.
The rate of individual value drift is even higher, something around 5%. That’s really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?
It seems to me like some modeling here would be highly useful, though it can get kind of awkward. I imagine many decent attempts would include numbers like, “total expected benefit of one member”. Our culture often finds some of these calculations too “cold and calculating.” It could be worth it for someone to do a decent job at some of this, and just publicly write up the main takeaways.
I find the ideas you presented quite interesting and reasonable; I’d love to see more work along those lines.
I’d be curious if anyone reading this would argue otherwise.
I think it would depend a lot on how we operationalise the stance you’re arguing in favour of.
Overall, at the margin, I’m in favour of:
- less use of vague-yet-defensible language
- EAs/people in general making and using more explicit, quantitative estimates (including probability estimates)

(I’m in favour of these things both in general and when it comes to cause prioritisation work.)
But I’m somewhat tentative/moderate in those views. For the sake of conversation, I’ll skip stating the arguments in favour of those views, and just focus on the arguments against (or the arguments for tentativeness/moderation).
Essentially, as I outlined in this post (which I know you already read and left useful comments on), I think making, using, and making public quantitative estimates might sometimes:
- Cost more time and effort than alternative approaches (such as more qualitative, “all-things-considered” assessments/discussions)
- Exclude some of the estimators’ knowledge (which could’ve been leveraged by alternative approaches)
- Cause overconfidence and/or cause underestimations of the value of information
- Succumb to the optimizer’s curse
- Cause anchoring
- Cause reputational issues
(These downsides won’t always occur, can sometimes occur more strongly if we use approaches other than quantitative estimates, and can be outweighed by the benefits of quantitative estimates. But here I’m just focusing on “arguments against”.)
As a result:
- I don’t think we should always aim for or require quantitative estimates (including in cause prioritisation work)
- I think it may often be wise to combine use of quantitative estimates, formal models, etc. with more intuitive / all-things-considered / “black-box” approaches (see also)
- I definitely think some statements/work from EAs and rationalists have used quantitative estimates in an overconfident way (sometimes wildly so), and/or have been treated by others as more certain than they are
  - It’s plausible to me that this overconfidence problem has not merely co-occurred or correlated with use of quantitative estimates, but that it tends to be exacerbated by that
  - But I’m not at all certain of that. Using quantitative estimates can sometimes help us see our uncertainty, critique people’s stances, have reality clearly prove us wrong (well, poorly calibrated), etc.
- Relatedly, I think people using quantitative estimates should be very careful to remember how uncertain they are and communicate this clearly
  - But I’d say the same for most qualitative work in domains like longtermism
- It’s plausible to me that the anchoring and/or reputational issues of making one’s quantitative estimates public outweigh the benefits of doing so (relative to just making more qualitative conclusions and considerations public)
  - But I’m not at all certain of that (as demonstrated by me making this database)
  - And I think this’ll depend a lot on how well thought-out one’s estimates are, how well one can communicate uncertainty, what one’s target audiences are, etc.
  - And it could still be worth making the estimates and not communicating them, or communicating them less publicly
I don’t think this position strongly contrasts with your or Michael’s positions. And indeed I’m a fan of what I’ve seen of both your work, and overall I favour more work like that. But these do seem like nuances/caveats worth noting.
I’m not advocating for “poorly done quantitative estimates.” I think anyone reasonable would admit that it’s possible to bungle them.
I’m definitely not happy with a local optimum of “not having estimates”. It’s possible that “having a few estimates” can be worse, but I imagine we’ll want to get to the point of “having lots of estimates, and becoming mature enough to handle them” at some point, so that’s the direction to aim for.
I think the “local vs global optima” framing is an interesting way of looking at it.
That reminds me of some of my thinking when I was trying to work out whether it’d be net positive to make that database of existential risk estimates (vs it being net negative due to anchoring, reputational issues to EA/longtermists, etc.). In particular, a big part of my reasoning was something like:
It’s plausible that it’s worse for this database to exist than for there to be no public existential risk estimates. But what really matters is whether it’s better that this database exist than that there be a small handful of existential risk estimates, scattered in various different places, and with people often referring to only one set in a given instance (e.g., the 2008 FHI survey), sometimes as if it’s the ‘final word’ on the matter.
That situation seems probably even worse from an anchoring and reputational perspective than there being a database. This is because seeing a larger set of estimates side by side could help people see how much disagreement there is and thus have a more appropriate level of uncertainty and humility.
With your comment in mind, I’d now add:
But all of that is just about how good various different present-day situations would be. We should also consider what position we ultimately want to reach.
It seems plausible that we could end up with a larger set of more trustworthy and more independently-made existential risk estimates. And it seems likely that this would be better than the situation we’re in now.
Furthermore, it seems plausible that making this database moves us a step towards that destination. This could be a reason to make the database, even if doing so was slightly counterproductive in the short term.
I haven’t seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kinds of estimates at all...
Reminds me of the thing where corporations don’t want to implement internal prediction markets because implementing a market isn’t in the self-interest of any individual decision-maker.
I imagine many decent attempts would include numbers like, “total expected benefit of one member”. Our culture often finds some of these calculations too “cold and calculating.”
I think this is a good point. A three-factor model of community building comes to mind as a prior post that had to tackle and communicate about this sort of tricky thing, and that did a good job of that, in my opinion. That post might be useful reading for other people who have to tackle and communicate about this sort of tricky issue in future. (E.g., I quoted it in a recent post of mine.)
The most relevant parts of that post are the section on “Elitism vs. egalitarianism”, and the following paragraph:
[Variation in the factors this post focuses on] often rests on things outside of people’s control. Luck, life circumstance, and existing skills may make a big difference to how much someone can offer, so that even people who care very much can end up having very different impacts. This is uncomfortable, because it pushes against egalitarian norms that we value. [...] We also do not think that these ideas should be used to devalue or dismiss certain people, or that they should be used to idolize others. The reason we are considering this question is to help us understand how we should prioritize our resources in carrying out our programs, not to judge people.
It seems to me like some modeling here would be highly useful
The basic model is really easy. The total number of community members at time t is e^((r−v)t), where r is the movement growth rate and v is the value drift rate. So if the value of the EA community is proportional to the number of members, then increasing r by some number of percentage points is exactly as good as decreasing v by the same amount.
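A quick numerical check of that equivalence (the 5%/yr drift rate is from above; the 10%/yr growth rate is just an assumption for illustration):

```python
import math

def members(t, r, v, m0=1.0):
    """Community size after t years: growth at rate r, value drift at rate v."""
    return m0 * math.exp((r - v) * t)

print(members(10, r=0.10, v=0.05))  # baseline after 10 years: ~1.65x
print(members(10, r=0.11, v=0.05))  # +1pp growth:             ~1.82x
print(members(10, r=0.10, v=0.04))  # -1pp drift:              ~1.82x (identical)
```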
It’s less obvious how to model the tractability of changing r and v.
If you accept that improving the long-term value of the future is more important than reducing x-risk
Do you mean “If you accept that improving the long-term value of the future is more important than reducing extinction risk” (as distinct from existential risk more broadly, which already includes other ways of improving the value of the future)?
Or “If you accept that improving the long-term value of the future is more important than reducing the risk of existential catastrophe in the relatively near future?”
I meant to distinguish between long-term efforts and reducing x-risk in the relatively near future (the second case on your list), sorry that was unclear.
- Import more of Silicon Valley’s “pay it forward” culture
- Less reputation management / more psychological safety
- Less sniping
- OAK, Bay Area group houses, EA Hotel
- Again, building out (non-dominating) ways to audit & collect data from the object-level projects
- Less scrupulosity
  - Ties into the above but deserves its own bullet given how our collective psychology skews
  - Compassionate fighting against the thought-pattern Scott Alexander describes here
- Make EA sexier
  - Market to retail donors / the broader public (e.g. Future Perfect, e.g. 80k, e.g. GiveWell running ads on Vox podcasts)
  - Market to impact investors (e.g. Lionheart) and big philanthropy
  - Cultivating more “I want to be like that” energy
  - Seems easy to walk back if it isn’t working because so many interest groups are competing for mindshare
- Support EA physical health
  - Propagate effective treatments for RSI & back problems, as above
  - Take the mind-body connection seriously
  - Propagate best practices for nutrition, sleep, exercise; make the case that attending to these is prerequisite to having impact (rather than trading off against having impact)
- Advance our frontier of knowledge
  - e.g. GPI’s research agenda, e.g. the stuff Michael Dickens laid out in his comment
  - More work on how to solve coordination problems
  - More work on governance (e.g. Vitalik’s stuff, e.g. the stuff Palladium is exploring)
- Fund many moonshots / speculative projects
  - Fund projects that can be walked back if they aren’t working out (which is most projects, though some tech projects may be hard-to-reverse)
That’s an interesting list, especially for 30 minutes :) (Makes me wonder what you or others could do with more time.)
Much of it focused on EA community stuff. I kind of wonder if funders are extra resistant to some of this because it seems like they’re just “giving money to their friends”, which in some ways, they are. I could see some of it feeling odd and looking bad, but I think if done well it could be highly effective.
Many religious and ethnic groups spend a lot of attention helping each other, and it seems to have very positive effects. Right now EA (and the subcommunities I know of in EA) seem fairly far from that still.
A semi-related point on that topic; I’ve noticed that for many intelligent EAs, it feels like EA is a competition, not a collaboration. Individuals at social events will be trying to one-up each other with their cleverness. I’m sure I’ve contributed to this. I’ve noticed myself becoming jealous when I hear of others who are similar in some ways doing well, which really should make no sense at all. I think in the anonymous surveys 80K did a while back a bunch of people complained that there was a lot of signaling going on and that status was a big deal.
Many companies and open source projects live or die depending on their cultural health. Investments in the cultural health of EA may be difficult to measure, but could pay off heavily in the long run.
100% agree that cultural health is very important, and that EA is under-investing in it. (The “we don’t want to just give money to our friends” point resonates, and other scrupulosity-related stuff is probably at play here as well.)
Individuals at social events will be trying to one-up each other with their cleverness. I’m sure I’ve contributed to this. I’ve noticed myself becoming jealous when I hear of others who are similar in some ways doing well, which really should make no sense at all.
Thank you for talking about this!
I’ve noticed similar patterns in my own mind, especially around how I engage with this Forum. (I’ve been stepping back from it more this year because I’ve noticed that a lot of my engagement wasn’t coming from a loving place.)
These dynamics may not make any sense, but there are deep biological & psychological forces giving rise to them. [insert Robin Hanson’s “everything you do is signaling” rant here]
… I think in the anonymous surveys 80K did a while back a bunch of people complained that there was a lot of signaling going on and that status was a big deal.
Right. Last year concerns about status generated a lot of heat on the Forum (1, 2, 3), but as far as I know nothing has really changed since then, perhaps other than more folks acknowledging that status is a thing.
(A bunch of those ideas seem interesting, but I’ll just comment on the one where I have something to say)
Seems easy to walk back if it isn’t working because so many interest groups are competing for mindshare
This does seem to me like it makes it easy to walk back efforts to make EA sexier, but it doesn’t seem like it makes it easy to do it again later in a different way (without the odds of success being impaired by the first attempt).
Essentially:
- I think we could make EA relatively small/non-prominent/whatever again if we wanted to
- But it also seems plausible to me that EA can only make “one big first impression”, and that that’ll colour a lot of people’s perceptions of EA if it tries to make a splash again later (even perhaps 10-30 years later).
Put another way:
- They might stop thinking about EA if we stop actively reminding them
- But then if we start competing for their attention again later they’ll be like “Wait, aren’t those the people who [whatever impression they got of us the first time]?”
Forecasting is a common good to many causes, so you’d expect it not to be neglected. But in practice, it seems the only people working on forecasting are EA or EA-adjacent (I’d count Tetlock as adjacent)
I think I’ve become somewhat convinced that incentive and coordination problems are bad enough that many “common goods” are surprisingly neglected. The history of the slow development and proliferation of Bayesian techniques in general (up to around 20 years ago maybe, but even now I think the foundations can be improved a lot) seems quite awful.
Also, at this point, I feel quite strongly positive about much of the EA community; like we’ve gathered up many of the most [intelligent + pragmatic + agentic + high-level-optimizing] people in the world. As such I think we can compete and do a good job in many areas we may choose to focus on. So it could be that we could move up from “absolutely, incredibly neglected” to “just somewhat neglected”, which could open up a whole bunch of fields.
like we’ve gathered up many of the most [intelligent + pragmatic + agentic + high-level-optimizing] people in the world
It seems like I routinely learn about some smart and insightful person through non-EA channels and then later find out they’re involved in EA or at least subscribe to EA principles—most recent example for me is Gordon Irlam, who I originally learned about through his writings on portfolio selection.
I’ve been thinking a lot about the lack of non-EA interest or focus on forecasting or related tools. I was very surprised when I made Guesstimate and there was both excitement from several people, but not that much excitement from most businesses or governments.
I think that forecasting of the GJP sort is still highly niche. Almost no one knows of it or understands the value. You can look at this as similar to specific advances in, say, type theory or information theory.
The really smart groups that have an interest in improving their long term judgement seem to be financial institutions and similar. These tend to be highly secretive, and not interested in spending extra effort helping outside groups.
So to really advance a field like judgemental forecasting would require a combination of expertise, funding, and interest in helping the broad public, and this is a highly unusual combination. I imagine that if IARPA hadn’t been around in time to both be interested in and able to fund GJP’s efforts, much less would have happened there. I’d also personally note that I’d expect IARPA’s funding of it was somewhere between 1/3rd and 1/20th as efficient, in terms of global benefit, as it would have been if OpenPhil had organized a more directed effort.
This makes me think that there are probably many other very specific technology and research efforts that would also be exciting for us to focus on, but we don’t have the expertise to recognize them. We may have gotten lucky with forecasting/estimation tech, as that was something we had to get close to anyway for other reasons.
Also worth noting that the managing director of IARPA’s forecasting program was Jason Matheny, who previously founded New Harvest (which does cultured meat research, and was the first such org AFAIK) and did x-risk research at FHI.
Thanks for the post! Much of it resonated with me.
A few quick thoughts:
1. I could see some reads of this being something like, “EA researchers are doing a bad job and should feel bad.” I wouldn’t agree with this (mainly the latter bit) and assume the author wouldn’t either. Lots of EAs I know seem to be doing about the best that they know of and have a lot of challenges they are working to overcome.
2. I’ve had some similar frustrations over the last few years. I think that there is a fair bit of obvious cause prioritization research to be done that’s getting relatively little attention. I’m not as confident as you seem to be about this, but agree it seems to be an issue.
3. I would categorize many of the issues as being systematic between different sectors. I think significant effort in these areas would require bold efforts with significant human and financial capital, and these clusters are rare. Right now the funding situation is still quite messy for ventures outside the core OpenPhil cause areas.
I could see an academic initiative taking some of them on, but that would be a significant undertaking from at least one senior academic who may have to take a major risk to do so. Right now we have a few senior academics who led/created the existing main academic/EA clusters, and these projects were very tied to the circumstances of the senior people.
If you want a job in Academia, it’s risky to do things outside the common tracks, and if you want one outside of Academia, it’s often riskier. One in-between is making new small nonprofits. This is also a significant undertaking however. The funding situation for small ongoing efforts is currently quite messy; these are often too small for OpenPhil but too big for EA funds.
4. One reason why funding is messy is because it’s thought that groups doing a bad job at these topics could be net negative. Thus, few people are trusted to lead important research in new areas that are core to EA. This could probably be improved with significantly more vetting, but this takes a lot of time. Now that I think about it, OpenPhil has very intensive vetting for their hires, and these are just hires; after they are hired they get managers and can be closely worked with. If a funder funds a totally new research initiative, they will have a vastly lower amount of control (or understanding) over it than organizations do over their employees. Right now we don’t have organizations around who can do near hiring-level amounts of funding for small initiatives, perhaps we should though.
5. We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding. Right now a whole lot of great ones are focused on AI (this often requires many years of grad school or training) and Animals. My impression is that on the margin, moving some people from these fields to other fields (cause prioritization or experimental new things) could be good, though a big change to several individuals.
6. It seems really difficult to convince committed researchers to change fields. They often have taken years to develop expertise, connections, and citations, so changing that completely is very costly. An alternative is to focus on young, new people, but those people take a while to mature as researchers.
In EA we just don’t have many “great generic researchers” who we can reassign from one topic to something very different on short notice. More of this seems great to me, but it’s tricky to setup and attract talent for.
7. I think it’s possible that older/experienced researchers don’t want to change careers, and new ones aren’t trusted with funding. Looking back I’m quite happy that Ellie and Holden started GiveWell without feeling like they needed to work in an existing org for 4 years first. I’m not sure what to do here, but would like to see more bets on smart young people.
8. I think there are several interesting “gaps” in EA and am sure that most others would agree. Solving them is quite challenging, it could require a mix of coordination, effort, networking, and thinking. I’d love to see some senior people try to do work like this full-time. In general I’d love for see more “EA researcher/funding coordination”, that seems like the root of a lot of our problems.
9. I think Rethink Priorities has a pretty great model and could be well suited to these kinds of problems. My impression is funding has been a bottleneck for them. I think that Peter may respond to this, so can do so directly. If there are funders out there who are excited to fund any of the kinds of work described in this article, I’d suggest reaching out to Rethink Priorities and seeing if they could facilitate that. They would be my best bet for that kind of arrangement at the moment.
10. Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I’m working on), but it will take some time, and obviously aren’t direct work on the issue.
Tank you Ozzie. Very very helpful. To respond.
1. EA researchers are doing a great job. Much kudos to them. Fully agree with you on that. I think this is mostly a coordination issue.
3. Agree a messy funding situation is a problem. Not so sure there is that big huge gap between groups funded by EA Funds and groups funded by OpenPhil.
4. Maybe we should worry less about “groups doing a bad job at these topics could be net negative”. I am not a big donor so find this hard to judge this well. Also I am all for funding well evidenced projects (see my skepticism below about funding “smart young people”). But I am not convinced that we should be that worried that research on this will lead to harm, except in a few very specific cases. Poor research will likely just be ignored. Also most Foundations vet staff more carefully than they vet projects they fund.
5-6. Agree research leaders are rare (hopefully this inspires them). Disagree that junior researchers are rare. You said: “We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding.” + “It seems really difficult to convince committed researchers to change fields” Very good points. That said I think Rethink Priories have been positively surprised at how many very high quality applicants they have had for research roles. So maybe junior researchers are there. My hope this post inspires some people to set up more organisations working in this space.
7. Not so sure about “more bets on smart young people”. Not sure I agree. I tend to prefer giving to or hiring people with experience or evidence of traction. But I don’t have a strong view and would change my mind if there was good evidence on this. There might also be ways to test less experienced people before funding the, like through a “Charity Entrepreneurship” type fellowship scheme.
8. I’d love to have more of your views on what an “EA researcher/funding coordination” looks like as I could maybe make it happen. I am a Trustee of EA London. EA London is already doing a lot of global coordination of EA work (especially under COVID). I have been thinking and talking to David (EA London staff) about scaling this up, hiring a second person etc. If you have a clear vision of what this might look like or what it could add I would consider pushing more on this.
9. Rethink Priorities is OK. I have donated to them in the past but might stop as not sure they are making much headway on the issue listed here. Peter said “I think we definitely do “Beyond speculation (practical longtermism) … So far we’ve mainly been favoring within-cause intervention prioritization”.
10. Good luck with your work on forecasting efforts.
Thanks for the response!
Quick responses:
4. I haven’t investigated this much myself, I was relaying what I know from donors (I don’t donate myself). I’ve heard a few times that OpenPhil and some of the donors behind EA Funds are quite worried about negative effects. My impression is that the reason for some of this is simple, but there are some more complicated reasons that go into the thinking here that haven’t been written up fully. I think Oliver Habryka has a bunch of views here.
5-6. I didn’t mean to imply that junior researchers are “rare”, just that they are limited in number (which is obvious). My impression is that there’s currently a bottleneck to give the very junior researchers experience and reputability, which is unfortunate. This is evidenced by Rethink’s round. I think there may be a fair amount of variation in these researchers though; that only a few are really the kinds who could pioneer a new area (this requires a lot of skills and special career risks).
7. I’m also really unsure about this. Though to be fair, I’m unsure about a lot of things. To be clear though, I think that there are probably rather few people this would be a good fit for.
I’m really curious just how impressive the original EA founders were compared to all the new EAs. There are way more young EAs now than there were in the early days, so theoretically we should expect that some will be in many ways more competent than the original EA founders, minus in experience of course.
Part of me wonders: if we don’t see a few obvious candidates for young EA researchers as influential as the founders were, in the next few years, maybe something is going quite wrong. My guess is that we should aim to resemble other groups that are very meritocratic in terms of general leadership and research.
8. Happy to discuss in person. They would take a while to organize and write up.
The very simple thing here is that to me, we really could use “funding work” of all types. OpenPhil still employs a very limited headcount given their resources, and EA Funds is mostly made up of volunteers. Distributing money well is a lot of work, and there currently aren’t many resources going into this.
One big challenge is that not many people are trusted to do this work, in part because of the expected negative impacts of funding bad things. So there’s a small group trusted to do this work, and a smaller subset of them interested in spending time doing it.
I would love to see more groups help coordinate, especially if they could be accepted by the major donors and community. I think there’s a high bar here, but if you can be over it, it can be very valuable.
I’d also recommend talking to the team at EA Funds, which is currently growing.
9. This could be worth discussing more further. RP is still quite early and developing. If you have suggestions about how it could improve, I’d be excited to have discussions on that. I could imagine us helping change it in positive directions going forward.
10. Thanks!
Excellent comment.
Do you have a list of the top research areas you’d like to see that aren’t getting done?
I agree. Forecasting is a common good to many causes, so you’d expect it not to be neglected. But in practice, it seems the only people working on forecasting are EA or EA-adjacent (I’d count Tetlock as adjacent). Recently I’ve had many empirical questions about the future that I thought could use good forecasts, e.g., for this essay I wrote, I made some Metaculus questions and used those to help inform the essay. It would be really helpful if it were easier to get good forecasts.
Oh boy. I’ve had a bunch of things in the back of my mind. Some of this is kind of personal (specific to my own high level beliefs, but wouldn’t apply to many others).
I’m a longtermist and believe that most of the expected value will happen in the far future. Because of that, many of the existing global poverty, animal welfare, and criminal justice reform interventions don’t seem particularly exciting to me. I’m unsure what to think of AI Risk, but “unsure” is much, much better than “seems highly unlikely.” I think it’s safe to have some great people here; but currently get the impression that a huge number of EAs are getting into this field, and this seems like too many to me on the margin.
What I’m getting to is: when you exclude most of poverty, animal welfare, criminal justice reform, and AI, there’s not a huge amount getting worked on in EA at the moment.
I think I don’t quite buy the argument that the only long-term interventions to consider are ones that will cause X-risks in the next ~30 years, nor the argument that the only interventions are ones that will cause X-risks. I think it’s fairly likely(>20%) that sentient life will survive for at least billions of years; and that there may be a fair amount of lock-in, so changing the trajectory of things could be great.
I like the idea of building “resilience” instead of going after specific causes. For instance, if we spend all of our attention on bio risks, AI risks, and nuclear risks, it’s possible that something else weird will cause catastrophe in 15 years. So experimenting with broad interventions that seem “good no matter what” seems interesting. For example, if we could have effective government infrastructure, or general disaster response, or a more powerful EA movement, those would all be generally useful things.
I like Phil’s work (above comment) and think it should get more attention, quickly. Figuring out and implementing an actual plan that optimizes for the long term future seems like a ton of work to me.
I really would like to see more “weird stuff.” 10 years ago many of the original EA ideas seemed bizarre; like treating AI risk as highly important. I would hope that with 10-100x as many people, we’d have another few multiples of weird but exciting ideas. I’m seeing a few of them now but would like more.
Better estimation, high-level investigation, prioritization, data infrastructure, etc. seem great to me.
Maybe one way to put it would be something like, imagine clusters of ideas as unique as those of Center on Long-Term Risk, Qualia Computing, the Center for Election Science, etc. I want to see a lot more clusters like these.
Some quick ideas:
- Political action for all long term things still seems very neglected and new to me, as mentioned in this post.
- A lot of the prioritization work, even of, “Let’s just estimate a lot of things to get expected values.”
- I’d like to see research in ways AI could make the world much better/safer; the most exciting part to me is how it could help us reason in better ways, pre-AGI, and what that could lead to.
- Most EA organizations wouldn’t upset anyone (are net positives for everyone), but many things we may want would. For instance, political action, or potential action to prevent bio or ai companies from doing specific things. I could imagine groups like, “slightly-secretive strategic agencies” that go around doing valuable things, to have a lot of possible benefit (but of course significant downsides if done poorly).
- This is close to me, but I’m curious if open source technologies could be exciting philanthropic investments. I think the donation to Roam may have gone extremely well, and am continually impressed and surprised by how little money there is in incredible but very early or experimental efforts online. Ideally this kind of work would include getting lots of money from non-EAs.
- In general, trying to encourage EA style thinking in non-EA ventures could be great. There’s tons of philanthropic money being spent outside EA. The top few tech billionaires just dramatically increased their net worths in the last few months, many will likely spend those eventually.
- I really care about growing the size and improve the average experience of the EA community. I think there’s a ton of work to be done here of many shapes and forms.
- I think many important problems that feel like they should be done in Academia aren’t due to various systematic reasons. If we could produce researchers who do “the useful things, very well”, either in Academia or outside, that could be valuable, even in seemingly unrelated fields like anthropology, political science, or targeted medicine (fixing RSI, for instance). “Elephant and the Brain” style work comes to mind.
- On that note, having 1-2 community members do nothing but work on RSI, back, and related physical health problems for EAs/rationalists, could be highly worthwhile at this point. We already have a few specific psychologists and a productivity coach. Maybe eventually there could be 10-40+ people doing a mini-industry of services tailored to these communities.
- Unlikely idea: insect farms. Breed and experiment with insects or other small animals in ways that seem to produce the most well-being for the lowest cost. Almost definitely not that productive, but good for diversification, and possibly reasonably cheap to try for a few years.
- Much better EA funding infrastructure, in part for long-term funding.
- Investigation and action to reform/improve the UN and other global leadership structures.
- I’m curious about using extensive Facebook ads, memes, Youtube sponsorship, and similar, to both encourage Effective Altruism, and to encourage ideas we think are net valuable. These things can be highly scalable.
Also, I’d be curious to get the suggestions of yourself and others here.
This is a really good comment.
I would like to see more of this, and I would also like to see people be less uniformly critical of this sort of work. I’ve written a few things like this, and I inevitably get a few comments along the lines of, “This estimate isn’t actually accurate, you can’t know the true expected value, this research is a waste of time.” IME I get much more strongly negative comments when I write anything quantitative than when I don’t. But I might just be noticing that type of criticism more than other types.
The rate of institutional value drift is something like 0.5% per year. Halving this would be extremely beneficial for anyone who wants to invest their money for future generations. It seems likely that if we put more effort into designing stable institutions, we could create EA investment funds that last much longer.
The rate of individual value drift is even higher, something around 5% per year. That’s really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?
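To give a rough sense of why halving institutional drift would matter so much, here’s a quick back-of-the-envelope on those figures, treating drift as a constant, independent annual hazard (a simplification, and the rates are just the rough numbers above):

```python
# Back-of-the-envelope on the drift figures above, treating value drift
# as an independent annual hazard and asking how likely values are to
# stay intact over 100 years. Rates are the rough figures from the text.
for label, annual_drift in [
    ("institutional drift, 0.5%/yr", 0.005),
    ("institutional drift halved, 0.25%/yr", 0.0025),
    ("individual drift, 5%/yr", 0.05),
]:
    survival = (1 - annual_drift) ** 100
    print(f"{label}: P(values intact after 100 years) ~ {survival:.0%}")

# institutional drift, 0.5%/yr: ~61%
# institutional drift halved, 0.25%/yr: ~78%
# individual drift, 5%/yr: ~1%
```

On these (simplified) assumptions, halving institutional drift takes a century-long fund from roughly 61% to roughly 78% odds of staying on mission, while 5% annual individual drift makes century-scale individual commitment essentially hopeless.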
Some other neglected problems (with some shameless references to my own writings):
I like GPI’s research agenda. Right now there are only about half a dozen people working on these problems.
What is the correct “philosophy of priors”? The choice of prior distribution heavily affects how we should behave in areas of high uncertainty. For example, see Will MacAskill’s post and Toby Ord’s reply. (edit: see also this relevant post)
With a simple model, I calculated that improving our estimate of the discount rate could matter more than any particular cause. The rationale is that we should spend our resources at some optimal rate, which is largely determined by the philanthropic discount rate. Moving our spending schedule slightly closer to the optimal rate substantially increases expected utility. This is just based on a simple model, but I’d like to see more work on this. (A toy illustration of this sensitivity follows after this list.)
In the conclusion of the same essay, I gave a list of relevant ideas for potential top causes with my rough guesses on their importance/neglectedness/tractability. The ideas not mentioned so far are: improving the ability of individuals to delegate their income to value-stable institutions; and making expropriation and value drift less threatening by spreading altruistic funds more evenly across actors and countries.
IMO there are some relatively straightforward ways that EAs could invest better, which I wrote about here. Improving EAs’ investments could be pretty valuable, especially for “give later”-leaning EAs.
Reducing the long-term probability of extinction, rather than just the probability over the next few decades. (I’m currently writing something about this.)
If you accept that improving the long-term value of the future is more important than reducing x-risk, is there anything you should do now, or should you mainly invest to give later? Does movement building count as investing? What about cause prioritization research? When is it better to work on movement building/cause prioritization rather than simply investing your money in financial assets?
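As promised above, here’s a minimal toy model of the discount-rate point. To be clear, this is my own sketch, not the model from the essay: a fund earns a fixed investment return, spends a constant fraction of capital each year, and the good done has diminishing returns (square root, chosen arbitrarily). All parameter values are illustrative assumptions.

```python
# Toy model (my own sketch, not the essay's actual model). A fund earns a
# fixed return, spends a constant fraction of capital each year, and good
# done has diminishing returns (sqrt). All parameter values are made up.
import numpy as np

def total_discounted_good(spend_rate, discount_rate,
                          investment_return=0.05, horizon=300):
    capital, total = 1.0, 0.0
    for t in range(horizon):
        spending = spend_rate * capital
        capital = (capital - spending) * (1 + investment_return)
        total += np.sqrt(spending) * np.exp(-discount_rate * t)
    return total

spend_rates = np.linspace(0.001, 0.2, 500)
for d in [0.005, 0.01, 0.03]:
    best = spend_rates[np.argmax([total_discounted_good(s, d)
                                  for s in spend_rates])]
    print(f"discount rate {d:.1%} -> optimal spending rate ~{best:.1%}")
```

Even in this crude setup, the optimal spending rate moves substantially with the assumed discount rate, which is the sense in which pinning down the discount rate could matter more than any particular cause.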
I haven’t seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kind of estimate at all. At the extreme end are people who don’t even make clear statements; they just speak in vague metaphors or business jargon that are easy to defend but don’t actually convey any information. Needless to say, I think this is an anti-pattern. I’d be curious whether anyone reading this would argue otherwise.
It seems to me like some modeling here would be highly useful, though it can get kind of awkward. I imagine many decent attempts would include numbers like, “total expected benefit of one member”. Our culture often finds some of these calculations too “cold and calculating.” It could be worth it for someone to do a decent job at some of this, and just publicly write up the main takeaways.
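For instance, a sketch of the kind of “cold” calculation being described might look like the following; every number here is a made-up placeholder, purely to show the shape of the model:

```python
# Hypothetical back-of-the-envelope of the kind described above.
# Every number is a made-up placeholder, not an actual estimate.
expected_years_engaged = 5               # before drop-out / value drift
counterfactual_impact_per_year = 10_000  # $-equivalent of extra impact
cost_per_new_engaged_member = 2_000      # $ of community-building spend

expected_benefit = expected_years_engaged * counterfactual_impact_per_year
print(f"total expected benefit of one member: ${expected_benefit:,}")
print(f"benefit/cost: {expected_benefit / cost_per_new_engaged_member:.0f}x")
```

The point isn’t the specific numbers; it’s that even a crude, clearly-labeled model like this would let the community argue about inputs rather than vague impressions.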
I find the ideas you presented quite interesting and reasonable, I’d love to see more work along those lines.
I think it would depend a lot on how we operationalise the stance you’re arguing in favour of.
Overall, at the margin, I’m in favour of:
less use of vague-yet-defensible language
EAs/people in general making and using more explicit, quantitative estimates (including probability estimates)
(I’m in favour of these things both in general and when it comes to cause prioritisation work.)
But I’m somewhat tentative/moderate in those views. For the sake of conversation, I’ll skip stating the arguments in favour of those views, and just focus on the arguments against (or the arguments for tentativeness/moderation).
Essentially, as I outlined in this post (which I know you already read and left useful comments on), I think making, using, and making public quantitative estimates might sometimes:
Cost more time and effort than alternative approaches (such as more qualitative, “all-things-considered” assessments/discussions)
Exclude some of the estimators’ knowledge (which could’ve been leveraged by alternative approaches)
Cause overconfidence and/or cause underestimations of the value of information
Succumb to the optimizer’s curse
Cause anchoring
Cause reputational issues
(These downsides won’t always occur, can sometimes occur more strongly if we use approaches other than quantitative estimates, and can be outweighed by the benefits of quantitative estimates. But here I’m just focusing on “arguments against”.)
As a result:
I don’t think we should always aim for or require quantitative estimates (including in cause prioritisation work)
I think it may often be wise to combine use of quantitative estimates, formal models, etc. with more intuitive / all-things-considered / “black-box” approaches (see also)
I definitely think some statements/work from EAs and rationalists have used quantitative estimates in an overconfident way (sometimes wildly so), and/or have been treated by others as more certain than they are
It’s plausible to me that this overconfidence problem has not merely co-occurred or correlated with use of quantitative estimates, but that it tends to be exacerbated by that
But I’m not at all certain of that. Using quantitative estimates can sometimes help us see our uncertainty, critique people’s stances, have reality clearly prove us wrong (well, poorly calibrated), etc.
Relatedly, I think people using quantitative estimates should be very careful to remember how uncertain they are and communicate this clearly
But I’d say the same for most qualitative work in domains like longtermism
It’s plausible to me that the anchoring and/or reputational issues of making one’s quantitative estimates public outweigh the benefits of doing so (relative to just making more qualitative conclusions and considerations public)
But I’m not at all certain of that (as demonstrated by me making this database)
And I think this’ll depend a lot on how well thought-out one’s estimates are, how well one can communicate uncertainty, what one’s target audiences are, etc.
And it could still be worth making the estimates and not communicating them, or communicating them less publicly
I don’t think this position strongly contrasts with your or Michael’s positions. And indeed I’m a fan of what I’ve seen of both your work, and overall I favour more work like that. But these do seem like nuances/caveats worth noting.
Nice post. I think I agree with all of that.
I’m not advocating for “poorly done quantitative estimates.” I think anyone reasonable would admit that it’s possible to bungle them.
I’m definitely not happy with a local optimum of “not having estimates”. It’s possible that “having a few estimates” can be worse, but I imagine we’ll eventually want to get to the point of “having lots of estimates, and being mature enough to handle them”, so that’s the direction to aim for.
I think the “local vs global optima” framing is an interesting way of looking at it.
That reminds me of some of my thinking when I was trying to work out whether it’d be net positive to make that database of existential risk estimates (vs it being net negative due to anchoring, reputational issues to EA/longtermists, etc.). In particular, a big part of my reasoning was something like:
With your comment in mind, I’d now add:
Reminds me of the thing where corporations don’t want to implement internal prediction markets because implementing a market isn’t in the self-interest of any individual decision-maker.
Yea, I think there are similar incentives at play in both cases
I think this is a good point. A three-factor model of community building comes to mind as a prior post that had to tackle and communicate about this sort of tricky thing, and that did a good job of that, in my opinion. That post might be useful reading for other people who have to tackle and communicate about this sort of tricky issue in future. (E.g., I quoted it in a recent post of mine.)
The most relevant parts of that post are the section on “Elitism vs. egalitarianism”, and the following paragraph:
Thanks!
The basic model is really easy. The total number of community members at time t is e^((r−v)t), where r is the movement growth rate and v is the value drift rate. So if the value of the EA community is proportional to the number of members, then increasing r by some number of percentage points is exactly as good as decreasing v by the same amount. It’s less obvious how to model the tractability of changing r and v.
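A minimal sketch of that equivalence (the rates below are illustrative, not estimates):

```python
# members(t) = e^((r - v) * t): a one-point increase in the growth rate r
# has exactly the same effect as a one-point decrease in the drift rate v.
# The rates below are illustrative, not estimates.
import math

def members(r, v, t):
    return math.exp((r - v) * t)

t = 20  # years
print(members(0.10, 0.05, t))  # baseline
print(members(0.11, 0.05, t))  # r up one percentage point
print(members(0.10, 0.04, t))  # v down one point: identical result
```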
I liked this comment.
Do you mean “If you accept that improving the long-term value of the future is more important than reducing extinction risk” (as distinct from existential risk more broadly, which already includes other ways of improving the value of the future)?
Or “If you accept that improving the long-term value of the future is more important than reducing the risk of existential catastrophe in the relatively near future?”
Or something else (e.g., about smaller trajectory changes)?
I meant to distinguish between long-term efforts and reducing x-risk in the relatively near future (the second case on your list); sorry, that was unclear.
Here’s a list I came up with from thinking about this for ~30 minutes:
Better ways of measuring what matters
Better neuroimaging tech to parse out the neurological basis of desirable & undesirable subjective states
Better measures of subjective well-being
Help EAs see more clearly, unpack + resolve personal traumas, and boost their efficacy + motivation
Emotional healing as a prerequisite to rationality
CFAR, OAK, Leverage, etc.
Plus building methods to audit which projects are working, which are failing, which are stagnating
Perhaps also a data collection project that vacuums up outcomes from the object-level projects?
Strengthen EA community ties / our sense of fellowship
More honesty about how weird effective research methods can be
More acknowledgement of the interdependent causal complex that gives rise to good research (e.g. Alex Flint’s introduction here)
More Ben Franklin-esque Juntos
Import more of Silicon Valley’s “pay it forward” culture
Less reputation management / more psychological safety
Less sniping
OAK, Bay Area group houses, EA Hotel
Again, building out (non-dominating) ways to audit & collect data from the object-level projects
Less scrupulosity
Ties into the above but deserves its own bullet given how our collective psychology skews
Compassionate fighting against the thought-pattern Scott Alexander describes here
Make EA sexier
Market to retail donors / the broader public (e.g. Future Perfect, e.g. 80k, e.g. GiveWell running ads on Vox podcasts)
Market to impact investors (e.g. Lionheart) and big philanthropy
Cultivating more “I want to be like that” energy
Seems easy to walk back if it isn’t working because so many interest groups are competing for mindshare
Support EA physical health
Propagate effective treatments for RSI & back problems, as above
Take the mind-body connection seriously
Propagate best practices for nutrition, sleep, exercise; make the case that attending to these is prerequisite to having impact (rather than trading off against having impact)
Advance our frontier of knowledge
e.g. GPI’s research agenda, e.g. the stuff Michael Dickens laid out in his comment
More work on how to solve coordination problems
More work on governance (e.g. Vitalik’s stuff, e.g. the stuff Palladium is exploring)
Fund many moonshots / speculative projects
Fund projects that can be walked back if they aren’t working out (which is most projects, though some tech projects may be hard-to-reverse)
Worry less about brand management
That’s an interesting list, especially for 30 minutes :) (Makes me wonder what you or others could do with more time.)
Much of it is focused on EA community stuff. I kind of wonder if funders are extra resistant to some of this because it seems like they’re just “giving money to their friends”, which, in some ways, they are. I could see some of it feeling odd and looking bad, but I think if done well it could be highly effective.
Many religious and ethnic groups spend a lot of attention helping each other, and it seems to have very positive effects. Right now EA (and the subcommunities I know of in EA) seem fairly far from that still.
https://www.nationalgeographic.com/culture/2018/09/south-asia-america-motels-immigration/
A semi-related point on that topic: I’ve noticed that for many intelligent EAs, it feels like EA is a competition, not a collaboration. Individuals at social events will be trying to one-up each other with their cleverness. I’m sure I’ve contributed to this. I’ve noticed myself becoming jealous when I hear of others who are similar to me in some ways doing well, which really should make no sense at all. In the anonymous surveys 80K did a while back, I think a bunch of people complained that there was a lot of signaling going on and that status was a big deal.
Many companies and open source projects live or die depending on their cultural health. Investments in the cultural health of EA may be difficult to measure, but could pay off heavily in the long run.
Thanks!
100% agree that cultural health is very important, and that EA is under-investing in it. (The “we don’t want to just give money to our friends” point resonates, and other scrupulosity-related stuff is probably at play here as well.)
Thank you for talking about this!
I’ve noticed similar patterns in my own mind, especially around how I engage with this Forum. (I’ve been stepping back from it more this year because I’ve noticed that a lot of my engagement wasn’t coming from a loving place.)
These dynamics may not make any sense, but there are deep biological & psychological forces giving rise to them. [insert Robin Hanson’s “everything you do is signaling” rant here]
Right. Last year, concerns about status generated a lot of heat on the Forum (1, 2, 3), but as far as I know nothing has really changed since then, perhaps other than more folks acknowledging that status is a thing.
(Status seems closely related to scrupulosity & to EA being vetting-constrained; I haven’t unpacked this yet.)
(A bunch of those ideas seem interesting, but I’ll just comment on the one where I have something to say)
This does seem to me like it makes it easy to walk back efforts to make EA sexier, but it doesn’t seem like it makes it easy to do it again later in a different way (without the odds of success being impaired by the first attempt).
Essentially:
I think we could make EA relatively small/non-prominent/whatever again if we wanted to
But it also seems plausible to me that EA can only make “one big first impression”, and that that’ll colour a lot of people’s perceptions of EA if it tries to make a splash again later (even perhaps 10-30 years later).
Put another way:
They might stop thinking about EA if we stop actively reminding them
But then if we start competing for their attention again later they’ll be like “Wait, aren’t those the people who [whatever impression they got of us the first time]?”
Posts that informed my thinking here:
Hard-to-reverse decisions destroy option value (which I see you also referenced yourself)
The fidelity model of spreading ideas
How valuable is movement growth?
Why not to rush to translate effective altruism into other languages
Your list reminds me of this thread: What EA Forum posts do you want someone to write?
I think I’ve become somewhat convinced that incentive and coordination problems are so severe that many “common goods” are surprisingly neglected. The history of the slow development and proliferation of Bayesian techniques in general (up to around 20 years ago maybe, though even now I think the foundations can be improved a lot) seems quite awful.
Also, at this point, I feel quite strongly about much of the EA community; it’s like we’ve gathered up many of the most [intelligent + pragmatic + agentic + high-level-optimizing] people in the world. As such, I think we can compete and do a good job in many areas we may choose to focus on. So we might be able to move fields up from “absolutely, incredibly neglected” to “just somewhat neglected”, which could open up a whole bunch of fields.
It seems like I routinely learn about some smart and insightful person through non-EA channels and then later find out they’re involved in EA or at least subscribe to EA principles—most recent example for me is Gordon Irlam, who I originally learned about through his writings on portfolio selection.
I’ve been thinking a lot about the lack of non-EA interest in or focus on forecasting and related tools. I was very surprised when I made Guesstimate: there was excitement from several people, but not that much excitement from most businesses or governments.
I think that forecasting of the GJP sort is still highly niche. Almost no one knows of it or understands the value. You can look at this as similar to specific advances in, say, type theory or information theory.
The really smart groups that have interests in improving their long term judgement seem to be financial institutions and similar. These are both highly secretive, and not interested in spending extra effort helping outside groups.
So to really advance a field like judgemental forecasting would require a combination of expertise, funding, and interest in helping the broad public, and this is a highly unusual combination. I imagine that if IARPA hadn’t been around in time, both interested in and able to fund GJP’s efforts, much less would have happened there. I’d also personally point out that I’d expect IARPA’s funding of it was around 1/3rd, or maybe 1/20th, as efficient as it would have been if OpenPhil had organized a more directed effort, in terms of global benefit.
This makes me think that there are probably many other very specific technology and research efforts that would also be exciting for us to focus on, but we don’t have the expertise to recognize them. We may have gotten lucky with forecasting/estimation tech, as that was something we had to get close to anyway for other reasons.
Also worth noting that the managing director of IARPA’s forecasting program was Jason Matheny, who previously founded New Harvest (which does cultured meat research, and was the first such org AFAIK) and did x-risk research at FHI.
Yep, and a few others at IARPA who worked around the forecasting stuff were also EAs or close.