I’m Aaron, I’ve done Uni group organizing at the Claremont Colleges for a bit. Current cause prioritization is AI Alignment.
The short answer to your question is “yes, if major changes happen in the world fairly quickly, then career advice which does not take such changes into account will be flawed”
I would also point to the example of the advice “most of the impact in your career is likely to come later on in your life, like age 36+” (paraphrase from here and other places). I happen to believe there’s a decent chance we have TAI/AGI by the time I’m 36 (maybe I’m >50% on this), which would make the advice less likely to be true.
Other things to consider: if timelines are longer, then impact-oriented altruists might have a higher chance of being able to shape the world positively, especially if you personally are early in your career. A calculus you might run is (probability of year X being crunch time for AI Safety research) * (expected value of me helping in year X). And it might be that you should focus on being impactful in 2038 even if you expect we are more likely to get AGI in 2024, because in 2024 you wouldn’t be very useful.
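To make that calculus concrete, here is a toy version in code; every probability and value in it is made up purely for illustration, not a claim about the actual numbers:

```python
# Toy version of the calculus above: expected impact of aiming at year X
# = P(year X is crunch time) * (value of my help in year X).
# Every number here is made up purely for illustration.

p_crunch = {2024: 0.10, 2030: 0.25, 2038: 0.30}  # P(year X is crunch time for AI safety)
my_value = {2024: 1.0, 2030: 5.0, 2038: 20.0}    # relative value of my help in year X
                                                 # (low early in my career, higher later)

expected_impact = {year: p_crunch[year] * my_value[year] for year in p_crunch}
print(expected_impact)                                 # {2024: 0.1, 2030: 1.25, 2038: 6.0}
print(max(expected_impact, key=expected_impact.get))   # 2038
```

On these made-up inputs, aiming to be useful in 2038 wins even though crunch time in 2024 is possible, which is the point above.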
You will probably find the discussion here useful. See also here which I think is relevant, though indirectly. There is plenty of other writing on this topic but I can’t link it off the top of my head; you may be able to find it by searching around the EA Forum and LessWrong.
I like this comment and think it answers the question at the right level of analysis.
To try and summarize it back: EA’s big assumption is that you should purchase utilons, rather than fuzzies, with charity. This is very different from how many people think about the world and their relationship to charity. To claim that somebody’s way of “doing good” is not as good as they think is often interpreted by them as an attack on their character and identity, thus met with emotional defensiveness and counterattack.
EA ideas aim to change how people act and think (and for some core parts of their identity); such pressure is by default met with resistance.
There is some non-prose discussion of arguments around AI safety. Might be worth checking out: https://www.lesswrong.com/posts/brFGvPqo8sKpb9mZf/the-basics-of-agi-policy-flowchart Some of the stuff linked here: https://www.lesswrong.com/posts/4az2cFrJp3ya4y6Wx/resources-for-ai-alignment-cartography Including: https://www.lesswrong.com/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment
I have only skimmed this, but it seems quite good and I want more things like it on the forum. Positive feedback!
My phrasing below is more blunt and rude than I endorse, sorry. I’m writing quickly on my phone. I strong downvoted this post after reading the first 25% of it. Here are some reasons:
“Bayesianism purports that if we find enough confirming evidence we can at some point believe to have found “truth”.” Seems like a mischaracterization, given that sufficient new evidence should be able to change a Bayesian’s mind (tho I don’t know much about the topic).
“We cannot guess what knowledge people will create into the future” This is literally false: we can guess at this, and with a significant degree of accuracy. E.g., I predict that there will be a winner in the 2020 US presidential election, even though I don’t know who it will be. I can guess that there will be computer chips which use energy more efficiently than the current state of the art, even though I do not know what such chips will look like (heck, I don’t understand current chips).
“We can’t predict the knowledge we will have in the future. If we could, we would implement that knowledge today” Still obviously false. Engineers often know approximately what the final product will look like without figuring out all the details along the way.
“To achieve AGI we will need to program the following: knowledge creating processes, emotions, creativity, free will, consciousness.” This is a strong claim which is not obviously true and which you do not defend. I think it is false, as do many readers. I don’t know how to define free will, but it doesn’t seem necessary, as you can get the important behavior from just following complex decision processes. Consciousness, likewise, seems hard to define but not necessary for any particular behavior (besides maybe complex introspection, which you could define as part of consciousness).
“Reality has no boundary, it is infinite. Remembering this, people and AGI have the potential to solve an infinite number of problems” This doesn’t make much sense to me. There is no rule that says people can solve an infinite number of problems. Again, the claim is not obviously true but is undefended.
Maybe you won’t care about my disagreements given that I didn’t finish reading. I had a hard time parsing the arguments (I’m confused about the distinction between Bayesian reasoning and fallibilism, and it doesn’t line up with my prior understanding of Bayesianism), and many of the claims I could understand seem false, or at least debatable, yet you assume they’re true.
This post is quite long and doesn’t feature a summary, making it difficult to critique without significant time investment.
I liked this post and would like to see more of people thinking for themselves about cause prioritization and doing BOTECs.
Some scattered thoughts below, also in the spirit of draft amnesty.
I had a little trouble understanding your calculations/logic, so I’m going to write them out in sentence form: GiveWell’s current giving recommendations correspond to spending about $0.50 to give a person an additional day of life. A 10% chance of extinction from misaligned AI means that postponing misaligned AI by a day buys, in expectation, 10% * (current population) person-days, on the order of a billion person-days. If we take GiveWell’s willingness to spend and extrapolate it to the scenario of postponing misaligned AI, we get that GiveWell might be willing to spend roughly $500 million to postpone misaligned AI by a day.
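Spelled out as a back-of-the-envelope calculation (my own rounded inputs: roughly 8 billion people and the $0.50-per-person-day figure above; not the post’s exact numbers):

```python
# Back-of-the-envelope version of the extrapolation above.
# All inputs are rough, illustrative values, not GiveWell's official numbers.

population = 8e9                # roughly the current world population
p_doom = 0.10                   # assumed chance that misaligned AI kills everyone
cost_per_person_day = 0.50      # $ per additional person-day of life at the GiveWell bar

# Expected person-days gained per day that misaligned AI is postponed:
person_days = p_doom * population            # ~800 million person-days

# Implied willingness to pay, extrapolating the $0.50/person-day bar:
print(f"${person_days * cost_per_person_day:,.0f} per day of delay")   # ~$400 million
```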
I think it’s important that these are different domains, and the number of people who would be just as happy to see their donation buy a bednet as lobby for tech regulation (assuming similar EV) is unfortunately small. Many donors care about much more than some cause-neutral measure of how much good their donation does. For instance, I seem to care (I’m confused about this) that some of my donations help extremely poor people.
You point out that maybe regulation doesn’t work, but there’s a broader problem: we don’t have shovel-ready projects that can turn $500 million into postponing misaligned AI by a day. I suspect there are many interventions which can do this for much cheaper, but they are not interventions which can just absorb money and save lives like many global health charities can (perhaps they need projects to be founded and take years to develop).
The above problems point to another important idea: the GiveWell bar is able to be where it is because of what projects to improve the world can actually be funded by GiveWell dollars — not because of some fact about the value of extending a life by a day. You might think about the GiveWell bar as: the cheapest scalable ways to save lives in the global health and development space can provide an additional day of life for $0.50. If you ask individuals in the US how much they would pay to extend their own life or that of a loved one by a day, you will get numbers much higher than this; if you look at spending on healthcare in the developed world, my guess is that it is very normal to spend thousands of dollars to extend a life by a day. GiveWell’s bar for funding would be higher if there were other great opportunities for saving lives scalably for cheap (at least in global health and development).
An abstraction I notice I’m using is thinking about $0.50/person/day as the current market price. However, this is not an efficient market, for a number of reasons. This post draws the parallel of “hey look, at that price we should be willing to spend $0.5b on postponing misaligned AI by a day”. However, if we actually had many opportunities to spend $0.5b on postponing misaligned AI by a day, the funding bar would increase, because there isn’t enough money in the cause-neutral altruism bucket.
Some implications: cause-neutral donors who put above negligible probability on existential risks from AI will probably get much more bang for their buck trying to reduce existential risks or buy time, at least contingent on there being projects that can absorb money in that space. More importantly, those working on reducing AI x-risk have a lot of work to do in terms of closing the cost-effectiveness gap between themselves and global health. By closing the gap I mean getting enough projects to exist in the space such that we can consistently take in more money and turn it into x-risk reduction or buying time.
If you haven’t read Astronomical Waste, you might like it.
I am a bit confused by the key question / claim. It seems to be some variant of “Powerful AI may allow the development of technology which could be used to destroy the world. While the AI Alignment problem is about getting advanced AIs to do what their human operator wants, this could still lead to an existential catastrophe if we live in such a vulnerable world where unilateral actors can deploy destructive technology. Thus actual safety looks like not just having Aligned AGI, but also ensuring that the world doesn’t get destroyed by bad or careless or unilateral actors”
If this is the claim, it seems about right, and it has been discussed a lot both online and offline. Powerful AI itself might be that destructive technology, hence discussion of Deployment and Coordination problems. See here. Some other relevant resources: here, here.
As you asked for feedback:
If I am not mistaken, “AI Alignment” seems to mean getting AI to do what we want without harmful side effects, but “AI Safety” seems to imply keeping AI from harming or destroying humanity.
I would say the distinction isn’t so clear and the semantics don’t seem too important; what matters is that the fields of AI Alignment and AI Safety are both broadly aimed at getting good outcomes for humanity.
I guess your claim might actually be “Powerful AI may be a precipitating factor for other risks as it allows the development of many other, potentially unsafe, technologies.” This seems technically true but is unlikely to be how the world goes. Mainly I expect one of two outcomes:
1. humanity is disempowered or dead from misaligned AI;
2. we successfully align AGIs and solve the deployment problem, which results in a world where no single actor can cause an existential catastrophe.
The reason I think that no single actor can cause existential catastrophe in 2 is that this seems to be a likely precursor to avoiding dying to misaligned AGI. I would recommend the above links for understanding this intuition. I may be wrong here because it may be that the way we avoid misaligned AGI is by democratizing aligned-AGI-creation tech (all the open source libraries include Alignment properties including preventing misuse); but maybe the filters for preventing misuse are not sufficient for stopping people from developing civilization-destroying tech in the future (but probably given such a filter we would already be dead from misaligned AGI that somebody made by stress-testing the filters).
Sorry for scattered thoughts
Personally I didn’t put much weight on this sentence because the more-important-to-me evidence is many EAs being on the political left (which feels sufficient for the claim that EA is not a generally conservative set of ideas, as is sometimes claimed). See the 2019 EA Survey in which 72% of respondents identified as Left or Center Left.
“There are also strong selection effects on retreat attendees vs. intro fellows”
I wonder what these selection effects are. I imagine you get a higher proportion of people who think they are very excited about EA. But also, many of the wicked smart, high achieving people I know are quite busy and don’t think they have time for a retreat like this, so I wonder if you’re somewhat selecting against these people?
Similarly, people who are very thoughtful about opportunity costs and how they spend their time might feel like a commitment like this is too big given that they don’t know much about EA yet and don’t know how much they agree/want to be involved.
Thanks for making this. I expect that after you make edits based on comments and such this will be the most up to date and accurate public look at this question (the current size of the field). I look forward to linking people to it!
I disagree with a couple specific points as well as the overall thrust of this post. Thank you for writing it!
A maximizing viewpoint can say that we need to be cautious lest we do something wonderful but not maximally so. But in practice, embracing a pragmatic viewpoint, saving money while searching for the maximum seems bad.
I think I strongly disagree with this, because opportunities for impact appear heavy-tailed. Funding 2 interventions in the 90th percentile is likely less good than funding 1 intervention in the 99th percentile. Given this state of the world, spending much of our resources trying to identify the maximum is worthwhile. I think the default of the world is that I donate to a charity in the 50th percentile. If I adopt a weak mandate to do lots of good (a non-maximizing frame, or an early EA movement), I will probably identify and donate to a charity in the 90th percentile. It is only when I take a maximizing stance and a strong mandate to do lots of good (or when many thousands of hours have been spent on global priorities research) that I will find and donate to the very best charities. The ratios matter, of course: if I were faced with donating $1,000 to 90th-percentile charities or $1 to a 99th-percentile charity, I would probably donate to the 90th-percentile charities, but if the numbers were $2 and $1, I should donate to the 99th-percentile charity. I am claiming:
1. the distribution of altruistic opportunities is roughly heavy-tailed;
2. the best (and maybe only) way to end up in the heavy tail is to take a maximizing approach;
3. the “wonderful” thing that we would do without maximizing is, as measured ex post (looking at the results in retrospect), significantly worse than the best thing;
4. (the weakest of these claims) we can differentiate between the “wonderful” and the “maximal available” opportunities ex ante (beforehand), given research and reflection;
5. the thing I care about is impact, and the EA movement is good insofar as it creates positive impact in the world (including for members of the EA community, but they are a small piece of the universe).
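To illustrate the heavy-tail claim with a toy simulation (the lognormal shape and its parameters are arbitrary assumptions, chosen only to show how percentile gaps behave under a heavy tail, not an estimate of the real distribution):

```python
# Toy illustration of why percentile gaps matter a lot under a heavy-tailed
# distribution of opportunity quality. The lognormal parameters are arbitrary;
# the point is the ratios between percentiles, not the absolute numbers.
import numpy as np

rng = np.random.default_rng(0)
impact_per_dollar = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

p50, p90, p99 = np.percentile(impact_per_dollar, [50, 90, 99])
print(f"90th percentile is {p90 / p50:.0f}x the median")
print(f"99th percentile is {p99 / p90:.1f}x the 90th percentile")

# Under these made-up parameters, one 99th-percentile donation beats
# two 90th-percentile donations by a wide margin:
print(p99, "vs", 2 * p90)
```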
There are presumably people who would have pursued PhDs in computer science, and would have been EA-aligned tenure track professors now, but who instead decided to earn-to-give back in 2014. Whoops!
To me this seems like it doesn’t support the rest of your argument. I agree that the correct allocation of EA labor is not all doing AI Safety research, and we need to have outreach and career-related resources to support people with various skills, but to me this is more a claim that we are not maximizing well enough — we are not properly seeking the optimal labor allocation because we’re a relatively uncoordinated set of individuals. If we were better at maximizing at a high level, and doing a good job of it, the problem you are describing would not happen, and I think it’s extremely likely that we can solve this problem.
With regard to the thrust of your post: I cannot honestly tell a story about how the non-maximizing strategy wins. That is, when I think about all the problems in the world: pandemics, climate change, existential threats from advanced AI, malaria, mass suffering of animals, unjust political imprisonment, etc., I can’t imagine that we solve these problems if we approach them like exercise or saving for retirement. If I actually cared about exercise or saving for retirement, I would treat them very differently than I currently do (and I have had periods in my life where I cared more about exercise and thus spent 12 hours a week in the gym). I actually care about the suffering and happiness in the world, and I actually care that everybody I know and love doesn’t die from unaligned AI or a pandemic or a nuclear war. I actually care, so I should try really hard to make sure we win. I should maximize my chances of winning, and practically this means maximizing for some of the proxy goals I have along the way. And yes, it’s really easy to mess up this maximize thing and to neglect something important (like our own mental health), but that is an issue with the implementation, not with the method.
Perhaps my disagreement here is not a disagreement about what EA descriptively is and more a claim about what I think a good EA movement should be. I want a community that’s not a binary in / out, that’s inclusive and can bring joy and purpose to many people’s lives, but what I want more than those things is for the problems in the world to be solved — for kids to never go hungry or die from horrible diseases, for the existence of humanity a hundred years from now to not be an open research question, for billions+ of sentient beings around the world to not live lives of intense suffering. To the extent that many in the EA community share this common goal, perhaps we differ in how to get there, but the strategy of maximizing seems to me like it will do a lot better than treating EA like I do exercise or saving for retirement.
You write:
Another possible reason to argue for a zero-discount rate is that the intrinsic value of humanity increases at a rate greater than the long-run catastrophe rate[19]. This is wrong for (at least) 2 reasons.
Your footnote is to The Precipice. To quote from Appendix E of The Precipice:
by many measures the value of humanity has increased substantially over the centuries. This progress has been very uneven over short periods, but remarkably robust over the long run. We live long lives filled with cultural and material riches that would have seemed like wild fantasy to our ancestors thousands of years ago. And the scale of our civilization may also matter: the fact that there are thousands of times as many people enjoying these richer lives seems to magnify this value. If the intrinsic value of each century increases at a rate higher than r, this can substantially increase the value of protecting humanity (even if this rate of increase is not sustained forever). [Footnote here]
Regarding your first reason: you first cite that this would imply a negative discount rate that rules in favor of future people; I’m confused about why this is bad. You mention “radical conclusions” – I mean, sure, there are many radical conclusions in the world. For instance, I believe that factory farming is a moral atrocity being committed by almost all of current society – that’s a radical view. Being a radical view doesn’t make it wrong (although I think we should be healthily skeptical of views that seem weird). Another radical conclusion I hold is that all people around the world are morally valuable, and enslaving them would be terrible; this view would have appeared radical to most people at various points in history, and is not radical in most of the world now.
Regarding your second reason:
while it is true that lives lived today are much better than lives lived in the past (longer, healthier, richer), and the same may apply to the future, this logic leads to some deeply immoral places. The life of a person a who will live a long, healthy, and rich life, is worth no more than the life of the poorest, sickest, person alive. While some lives may be lived better, all lives are worth the same. Longtermism should accept this applies across time too.
I would pose to you the question: would you rather give birth to somebody who would be tortured their entire life, or somebody who would be quite happy throughout their life (though they experience ups and downs)? Perhaps you are indifferent between these, but I doubt it (they both are one life being born, however, so taking the “all lives are worth the same” line literally here implies they are equally good). I think a future where everybody is being tortured is quite bad and probably worse than extinction, whereas a flourishing future where people are very happy and have their needs met would be awesome!
I agree that there are some pretty unintuitive conclusions of this kind of thinking, but there are also unintuitive conclusions if you reject it! I think the value of an average life today, to the person living it, is probably higher than the value of an average life in 1700 CE, to the person living it. In the above Precipice passage, Ord discusses some reasons why this might be so.
Welcome to the forum! I am glad that you posted this! And also I disagree with much of it. Carl Shulman already responded explaining why he thinks the extinction rate approaches zero fairly soon, reasoning I agree with.
Under a stable future population, where people produce (on average) only enough offspring to replace themselves, a person’s expected number of descendants is equal to the expected length of human existence, divided by the average lifespan. I estimate this figure is 93[22].
To be consistent, when comparing lives saved in present day interventions with (expected) lives saved from reduced existential risk, present day lives saved should be multiplied by this constant, to account for the longtermist implications of saving each person. This suggests priorities such as global health and development may be undervalued at present.
I think the assumption about a stable future population is inconsistent with your calculation of the value of the average life. I think of two different possible worlds:
World 1: People have exactly enough children to replace themselves, regardless of the size of the population. The population is 7 billion in the first generation; a billion extra people (not accounted for in the ~2.1 kids-per-couple replacement rate) die before being able to reproduce. The population then goes on to be 6 billion for the rest of the time until humanity perishes. Each person who died cost humanity 93 future people, making their death much worse than without this consideration.
World 2: People have more children than needed to replace themselves, up to the point where the population hits the carrying capacity (say it’s 7 billion). The population is 7 billion in the first generation; a billion extra people (not accounted for in the ~2.1 kids-per-couple replacement rate) die before being able to reproduce. The population then goes on to be 6 billion for one generation, but the people in that generation realize that they can have more than 2.1 kids. Maybe they have 2.2 kids, and each successive generation does this until the population is back to 7 billion (the amount of time this takes depends on the numbers, but shouldn’t be more than a few generations).
World 2 seems much more realistic to me. While in World 1, each death cost the universe 1 life and 93 potential lives, in World 2 each death cost the universe something like 1 life and 0-2 potential lives.
It seems like using an average number of descendants isn’t the important factor if we live in a world like World 2, because as long as the population isn’t too small, it will be able to jumpstart the future population again. Thus follows the belief that (100% of people dying vs. 99% of people dying) is a greater difference than (0% of people dying vs. 99% of people dying), assuming the surviving 1% of people would eventually be able to grow the population back.
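A minimal sketch of the World 2 recovery dynamic, using the made-up numbers from above (2.2 kids per couple, 7 billion carrying capacity); this is an illustration, not a demographic model:

```python
# Minimal sketch of the World 2 recovery dynamic described above.
# Fertility and carrying-capacity numbers are the made-up ones from the example.

carrying_capacity = 7e9
population = 6e9            # after a billion extra deaths
replacement_kids = 2.1      # kids per couple needed for replacement
actual_kids = 2.2           # slightly above replacement while below capacity

generations = 0
while population < carrying_capacity:
    population *= actual_kids / replacement_kids   # per-generation growth factor
    generations += 1

print(generations)   # 4 -- back to 7 billion within a few generations
```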
I read this post around the beginning of March this year (~6 months ago). I think reading this post was probably net-negative for my life plans. Here are some thoughts about why I think reading this post was bad for me, or at least not very good. I have not re-read the post since then, so maybe some of my ideas are dumb for obvious reasons.
I think the broad emphasis on general skill and capacity building often comes at the expense of directly pursuing your goals. In many ways, the post says “Skill up in an aptitude because in the future this might be instrumentally useful for making the future go well.” I think this is worse than “Identify what skills might help the future go well, then skill up in those skills, then cause impact.” The aptitudes framework is what I might say if I knew a bunch of un-exceptional people were listening to me and taking my words as gospel, but it is not what I would advise to an exceptional person who wants to change the world for the better (I would try to instill a sense of specifically aiming at the thing they want and pursuing it more directly). This distinction is important. To flesh this out: if only geniuses were reading my post, I might advise that they try high-variance, high-EV things which have a large chance of ending up in the tails (e.g., startups, where most people will fail). But I would not recommend startups to a broader crowd, because more of them would fail, and then the community I was trying to create to help the future go well would largely be made up of people who took long-shot bets and failed, making them not so useful, and making my community less useful when it’s crunch time (although I am currently unsure what we need at crunch time; having a bunch of people who pursued aptitude growth is probably good). Therefore, I think I understand and somewhat endorse safer, aptitudes-based advice at a community scale, but I don’t want it to get in the way of people who are willing to take greater risks and do whacky career stuff actually doing so.
My personal experience is that reading this post gave me the idea that I could sorta continue life as normal, but with a slight focus on developing particular aptitudes like building organizational success, research on core longtermist topics, and maybe communication. I currently think that plan was bad and, if adopted more broadly, has a very bad chance of working (i.e., of AI alignment getting solved). However, I also suspect that my current path is suboptimal – I am not investing in my career capital or human capital for the long run as much as I should be.
So I guess my overall take is something like: people should consider the aptitudes framework, but they should also think about what needs to happen in the world in order to get the things they care about. Taking a safer, aptitudes-based approach is likely the right path for many people, but not for everybody. If you take seriously the career advice that you read, it seems pretty unlikely that doing so would leave you taking roughly the same actions you were planning on taking before reading – you should be suspicious of this surprising convergence.
The Unilateralist’s Curse, An Explanation
This is great and I’m glad you wrote it. For what it’s worth, the evidence from global health does not appear to me strong enough to justify high credence (>90%) in the claim “some ways of doing good are much better than others” (maybe operationalized as “the top 1% of charities are >50x more cost-effective than the median”, but I made up these numbers).
The DCP2 (2006) data (cited by Ord, 2013) gives the distribution of the cost-effectiveness of global health interventions. This is not the distribution of the cost-effectiveness of possible donations you can make. The data tells us that treatment of Kaposi Sarcoma is much less cost-effective than antiretroviral therapy in terms of avoiding HIV-related DALYs, but it tells us nothing about the distribution of charities, and therefore does not actually answer the relevant question: of the options available to me, how much better are the best than the others?
If there is one charity focused on each of the health interventions in the DCP2 (and they are roughly equally good at turning money into the interventions) – and therefore one action corresponding to each intervention – then it is true that the very best ways of doing good available to me are better than average.
The other extreme is that the most cost-effective interventions were funded first (or people only set up charities to do the most cost-effective interventions), and therefore the best opportunities still available are very close to average cost-effectiveness. I expect we live somewhere between these two extremes, and that there are more charities set up for antiretroviral therapy than for Kaposi Sarcoma.
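A toy simulation of these two extremes (the lognormal distribution of intervention cost-effectiveness and the 60th-percentile funding cutoff are arbitrary assumptions, just to show how different the best available option looks in each case):

```python
# Toy model of the two extremes described above: draw a heavy-tailed
# distribution of intervention cost-effectiveness (arbitrary parameters),
# then compare what a donor can still fund under each extreme.
import numpy as np

rng = np.random.default_rng(0)
interventions = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)  # e.g. DALYs averted per $

# Extreme 1: one equally competent charity per intervention, so the donor
# faces the full distribution and the best option is far above the median.
ratio_1 = interventions.max() / np.median(interventions)

# Extreme 2: the most cost-effective interventions are already fully funded
# (here, everything above the 60th percentile), so the best remaining option
# is close to the average of what's left.
remaining = interventions[interventions <= np.percentile(interventions, 60)]
ratio_2 = remaining.max() / np.median(remaining)

print(f"Extreme 1: best available is {ratio_1:.0f}x the median option")
print(f"Extreme 2: best available is {ratio_2:.1f}x the median option")
```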
The evidence that would change my mind is if somebody publicly analyzed the cost-effectiveness of all (or many) charities focused on global health interventions. I have been meaning to look into this, but haven’t yet gotten around to it. It’s a great opportunity for the Red Teaming Contest, and others should try to do this before me. My sense is that GiveWell has done some of this but only publishes the analysis for their recommended charities; and probably they already look at charities they expect to be better than average – so they wouldn’t have a representative data set.
The edit is key here. I would consider running an AI-safety arguments competition in order to do better outreach to graduate-and-above level researchers to be a form of movement building and one for which crunch time could be in the last 5 years before AGI (although probably earlier is better for norm changes).
One value add from compiling good arguments is that if there is a period of panic following advanced capabilities (some form of fire alarm), then it will be really helpful to have existing, high-quality arguments and resources on hand to help direct this panic into positive actions.
This all said, I don’t think Chris’s advice applies here:
I would be especially excited to see people who are engaged in general EA movement building to pass that onto a successor (if someone competent is available) and transition towards AI Safety specific movement building.
I think this advice likely doesn’t apply because the models/strategies for this sort of AI Safety field building are very different from that of general EA community building (e.g., University groups), the background knowledge is quite different, the target population is different, the end goal is different, etc. If you are a community builder reading this and you want to transition to AI Safety community building but don’t know much about it, probably learning about AI Safety for >20 hours is the best thing you can do. The AGISF curriculums are pretty great.
I’m a bit confused by this post. I’m going to summarize the main idea back, and I would appreciate it if you could correct me where I’m misinterpreting.
Human psychology is flawed in such a way that we consistently estimate the probability of existential risk from each cause to be ~10% by default. In reality, the probability of existential risk from particular causes is generally less than 10% [this feels like an implicit assumption], so finding more information about the risks causes us to decrease our worry about those risks. We can get more information about easier-to-analyze risks, so we update our probabilities downward after getting this correcting information, but for hard-to-analyze risk we do not get such correcting information so we remain quite worried. AI risk is currently hard-to-analyze, so we remain in this state of prior belief (although the 10% part varies by individual, could be 50% or 2%).
I’m also confused about this part specifically:
initially assign something on the order of a 10% credence to the hypothesis that it will by default lead to existentially bad outcomes. In each case, if we can gain much greater clarity about the risk, then we should think there’s about a 90% chance this clarity will make us less worried about it
– why is there a 90% chance that more information leads to less worry? Is this assuming that for 90% of risks, they have P(Doom) < 10%, and for the other 10% of risks P(Doom) ≥ 10%?
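A toy numerical version of the interpretation I’m asking about (all numbers made up purely to state the question):

```python
# Toy version of the interpretation I'm asking about: suppose 90% of risks have
# a true P(doom) below the ~10% prior and 10% have a true P(doom) at or above it.
# If "gaining clarity" means learning the true value, clarity reduces worry 90%
# of the time. All numbers are made up purely to state the question.

prior = 0.10
true_p_doom = [0.01] * 9 + [0.30]   # nine "low" risks, one genuinely high risk

share_less_worried = sum(p < prior for p in true_p_doom) / len(true_p_doom)
print(share_less_worried)   # 0.9 -- the 90% figure, under this reading
```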
A solution that doesn’t actually work but might be slightly useful: slow the lemons by making EA-related funding things less appealing than the alternative.
One specific way to do this is to pay less than industry pays for similar positions: altruistic pay cut. Lightcone, the org Habryka runs, does this: “Our current salary policy is to pay rates competitive with industry salary minus 30%.” At a full-time employment level, this seems like one way to dissuade people who are interested in money, at least assuming they are qualified and hard working enough to get a job in industry with similar ease.
Additionally, it might help to frame university group organizing grants in the grand scheme of things. For instance, as I was talking to somebody about group organizing grants, I reminded them that the amount of money they would be making (which I probably estimated at a couple thousand dollars per month) is peanuts compared to what they’ll be earning in a year or two when they graduate from a top university with a median salary of ~$80k. It also seems relevant to emphasize that you actually have to put time and effort into organizing a group for a grant like this; it’s not free money – it’s money in exchange for time/labor. Technically it’s possible to do nothing and pretty much be a scam artist, but I didn’t want to say that.
This solution doesn’t work for a few reasons. One is that it only focuses on one issue – the people who are actually in it for themselves. I expect we will also have problems with well-intentioned people who just aren’t very good at stuff. Unfortunately, this seems really hard to evaluate, and many of us deal with imposter syndrome, so self-evaluation/selection seems bad.
This solution also doesn’t work because it’s hard to assess somebody’s fit for a grant, meaning it might remain easier to get EA-related money than other money. I claim that it is hard to evaluate somebody’s fit for a grant in large part because feedback loops are terrible. Say you give somebody some money to do some project. Many grants have some product or deliverable that you can judge for its output quality, like a research paper. Some EA-related grants have this, but many don’t (e.g., paying somebody to skill up might have deliverables like a test score but might not). Without some form of deliverable or something, how do you know if your grant was any good? Idk maybe somebody who does grantmaking has an idea on this. More importantly, a lot of the bets people in this community are taking are low chance of success, high EV. If you expect projects to fail a lot, then failure on past projects is not necessarily a good indicator of somebody’s fit for new grants (in fact it’s likely good to keep funding high EV, low P(success) projects, depending on your risk tolerance). So this makes it difficult to actually make EA-related money harder to get than other money.
I expect a project like this is not worth the cost. I imagine doing this well would require dozens of hours of interviews with people who are more senior in the EA movement, and I think many of those people’s time is often quite valuable.
Regarding the pros you mention:
I’m not convinced that building more EA ethos/identity based around shared history is a good thing. I expect this would make it even harder to pivot to new things or treat EA as a question; it also wouldn’t be unifying for many folks (e.g., those who have been thinking about AI safety for a decade or who don’t buy longtermism). According to me, the bulk of people who call themselves EAs, like most groups, are too slow to update on new arguments and information, and I would expect that having a written and agreed-upon history would not help with this. Then again, my point might be made better if I could reference common historical cases of what I mean lol
I don’t see how this helps build trust.
I don’t see how having a written history makes the movement less likely to die. I also don’t know what it looks like for the EA movement to die or how bad this actually is; the EA movement is largely instrumental toward other things I care about: reducing suffering, increasing the chances of good stuff in the universe, my and my friends’ happiness to a lesser extent.
This does seem like a value add to me, though the project I’m imagining only does a medium job at this, given its goal is not “chronology of mistakes and missteps”. Maybe worth checking out https://www.openphilanthropy.org/research/some-case-studies-in-early-field-growth/
With ideas like this I sometimes ask myself “why hasn’t somebody done this yet?”. Some reasons that come to mind: they’re too busy doing other things they think are important; it might come across as self-aggrandizing; who’s going to read it? – the ways I expect it to get read are weird and indoctrination-y (“welcome to the club, here’s a book about our history”, as opposed to “oh, you want to do lots of good, here are some ideas that might be useful”); and it doesn’t directly improve the world, while the indirect path to impact is shakier than other meta things.
I’m not saying this is necessarily a bad idea. But so far I don’t see strong reasons to do this over the many other things Open Phil/CEA/Kelsey Piper/interviewees could be doing.