Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.
Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work.
What we did well
One of the most important insights from the successes of FHI is to take a long-term perspective on one's research. While working on currently fashionable and fundable topics may bring success in academia, building up fields that are needed, writing papers on topics before they become cool, and staying in the game allows one to create a solid body of work that is likely to have actual meaning and real-world effect.
The challenge is obviously to create enough stability to allow such long-term research. This suggests that long-term funding and less topically restricted funding are more valuable than big funding.
Many academic organizations are oriented towards other academic organizations and recognized research topics. However, pre-paradigmatic topics are often valuable, and relevant research can occur in non-university organizations or even in emerging networks that only later become organized. Having the courage to defy academic fashion and “investing” wisely in such pre-paradigmatic or neglected domains (and networks) can reap good rewards.
Having a diverse team, both in terms of backgrounds and disciplines, proved valuable. But this was not always easy to achieve within the rigid administrative structure that we operated in. Especially senior hires with a home discipline in a faculty other than philosophy were nearly impossible to arrange. Conversely, making it impossible to hire anyone not from a conventional academic background (i.e., elite university postdocs) adversely affected minorities, and resulted in instances where FHI was practically blocked from hiring individuals from under-represented groups. Hence, try to avoid credentialist constraints.
In order to do interdisciplinary work, it is necessary to also be curious about what other disciplines are doing and why, as well as to be open to working on topics one never considered before. It also broadens one's surface of contact with the rest of the world. Unusually for a research group based in a philosophy department, FHI members found themselves giving tech support to the pharmacology department; participating in demography workshops, insurance conferences, VC investor events, and geopolitics gatherings; and hosting artists and civil servant delegations studying how to set up high-performing research institutions in their own home countries - often with interesting results.
It is not enough to have great operations people; they need to understand what the overall aim is even as the mission grows more complex. We were lucky to have had many amazing and mission-oriented people make the Institute function. Often there was an overlap between operations work and research: most of the really successful ops people participated in our discussions and paper-writing. Try to hire people who are curious.
Where we failed
Any organization embedded in a larger organization or community needs to invest to a certain degree in establishing the right kind of social relationships to maintain this embeddedness. Incentives must be aligned, and both parties must also recognize this alignment. We did not invest enough in university politics and sociality to form a long-term stable relationship with our faculty.
There also needs to be an understanding of how to communicate across organizational communities. When epistemic and communicative practices diverge too much, misunderstandings proliferate. Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received. Finding friendly local translators and bridgebuilders is important.
Another important lesson (which is well known in business and management everywhere outside academia) is that as an organization scales up it needs to organize itself differently. The early informal structure cannot be maintained beyond a certain size, and must be gradually replaced with an internal structure. Doing this gracefully, without causing administrative sclerosis or lack of delegation, is tricky and in my opinion we somewhat failed.
So, you want to start another FHI?
Did FHI become humanity’s best effort at understanding and evaluating its own long-term prospects? We leave that to the future to evaluate properly, but we certainly think we did unexpectedly well for a “three-year project”.
FHI is ending and we are sad to see it go. We think it could have achieved far more than it did, but circumstances made it impossible to continue. On the plus side, we know FHI did not live past its time. There is a real risk that organizations lose their mission and become self-perpetuating users of resources that could better be used for other things, preventing the flowering of the new.
In fact, as mentioned above, FHI has seeded a number of new organizations, fields and topics. In a biological or memetic sense, it would count as having had great fitness in propagating successors, although many are not FHI-like or have different goals.
What would it take to replicate FHI, and would it be a good idea? Here are some considerations for why it became what it was:
Concrete object-level intellectual activity in core areas and finding and enabling top people were always the focus. Structure, process, plans, and hierarchy were given minimal weight (which sometimes backfired—flexible structure is better than little structure, but as organization size increases more structure is needed).
Tolerance for eccentrics. Creating a protective bubble to shield them from larger University bureaucracy as much as possible (but do not ignore institutional politics!).
Short-term renewable contracts. Since firing people is basically impossible within the University, only by offering short-term contracts (two or three years) was it possible to get rid of people who turned out not to be a great fit. It was important to be able to take a chance on people who might not work out. Maybe about 30% of people given a job at FHI were offered contract extensions after their initial contract ran out. A side-effect was to filter for individuals who truly loved the intellectual work we were doing, as opposed to careerists.
Valued: insights, good ideas, intellectual honesty, focusing on what’s important, interest in other disciplines, having interesting perspectives and thoughts to contribute on a range of relevant topics.
Deemphasized: the normal academic game, credentials, mainstream acceptance, staying in one’s lane, organizational politics.
Very few organizational or planning meetings. Most meetings were only to discuss ideas or present research, often informally.
A comment from a member:
“I think there’s no cookie-cutter template for replicating FHI because it depends critically on having the right (rare) people and a particular intellectual culture. The secret sauce was not an organizational structure or some kind of management process. But with the right people and culture, then shielding from other constraints can become enabling.
“To the extent that it can be replicated, I think it is because (a) it was an existence proof of organization, template ideas and research results, and legitimization, and (b) the intellectual culture has spread (e.g. in the wider rationalist and EA networks). But it could probably not be replicated by having some random administrator or manager trying to reproduce the same organizational structure—that would be like the cargo cult.”
So, the conclusion may be that while the above considerations give a recipe to aim for, the key question for any replication should be: “What are the important topics this organization should aim at?” Pursuing those topics must always be at the center of what is being done (both in research and administration), even when new knowledge and developments change them and their priorities.
I think it is worth appreciating the number and depth of insights that FHI can claim significant credit for. In no particular order:
The concept of existential risk, and arguments for treating x-risk reduction as a global priority (see: The Precipice)
Arguments for x-risk from AI, and other philosophical considerations around superintelligent AI (see: Superintelligence)
Arguments for the scope and importance of humanity’s long-term future (since called longtermism)
Information hazards
Observer selection effects and ‘anthropic shadow’
Bounding natural extinction rates with statistical methods
The vulnerable world hypothesis
Moral trade
Crucial considerations
The unilateralist’s curse
Dissolving the Fermi paradox
The reversal test in applied ethics
‘Comprehensive AI services’ as an alternative to unipolar outcomes
The concept of existential hope
Note especially how much of the literal terminology was coined on (one imagines) a whiteboard in FHI. “Existential risk” isn’t a neologism, but I understand it was Nick who first suggested it be used in a principled way to point to the “loss of potential” thing. “Existential hope”, “vulnerable world”, “unilateralist’s curse”, “information hazard”, all (as far as I know) tracing back to an FHI publication.
It’s also worth remarking on the areas of study that FHI effectively incubated, and which are now full-blown fields of research:
The ‘Governance of AI Program’ was launched in 2017, to study questions around policy and advanced AI, beyond the narrowly technical questions. That project was spun out of FHI to become the Centre for the Governance of AI. As far as I understand, it was the first serious research effort on what’s now called “AI governance”.
From roughly 2019 onwards, the working group on biological risks seems to have been fairly instrumental in making the case for biological risk reduction as a global priority, specifically because of engineered pandemics.
If research on digital minds (and their implications) grows to become something resembling a ‘field’, then the small team and working groups on digital minds can make a claim to precedence, as well as early and more recent published work.
FHI was staggeringly influential; more than many realise.
Edit: I wrote some longer reflections on FHI here.
I’m awestruck, that is an incredible track record. Thanks for taking the time to write this out.
These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI, your work certainly had an influence on me.
What a champ. If institutions can be heroes, FHI is surely one.
I want to take this opportunity to thank the people who kept FHI alive for so many years against such hurricane-force headwinds. But I also want to express some concerns, warnings, and—honestly—mixed feelings about what that entailed.
Today, a huge amount of FHI’s work is being carried forward by dozens of excellent organizations and literally thousands of brilliant individuals. FHI’s mission has replicated and spread and diversified. It is safe now. However, there was a time when FHI was mostly alone and the ember might have died from the shockingly harsh winds of Oxford before it could light these thousands of other fires.
I have mixed feelings about encouraging the veneration of FHI ops people because they made sacrifices that later had terrible consequences for their physical and mental health, family lives, and sometimes careers—and I want to discourage others from making these trade-offs in the future. At the same time, their willingness to sacrifice so much, quietly and in the background, because of their sincere belief in FHI’s mission—and this sacrifice paying off with keeping FHI alive long enough for its work to spread—is something for which I am incredibly grateful.
A small selection from the report:
21- and 22-hour workdays sound like hyperbole, but I was there and it isn’t. No one should work this hard. And it was not free. Yet, if you ever meet Tanya Singh, please know you are meeting a (foolishly self-sacrificing?) hero.
And while Andrew Snyder-Beattie is widely and accurately known as a productivity robot, transforming into a robot—leaving aside the fairytales of the cult of productivity—requires inflicting an enormous amount of deprivation on your human needs.
But why did this even happen? An example from the report:
This again sounds like hyperbole. It again is not. This was me. After a small grant was awarded and accepted by the university, it took me 308 emails to get this “completed” grant into our account.
FHI died because Oxford killed it. But it was not a quick death. It was a years-long struggle with incredible heroism and terrible casualties. But what a legacy. Thank you sincerely to all of the ops people who made it possible.
Strong +1 re: ‘hero’ work culture, especially for ops staff. This was one of the things that bothered me while there and contributed to my moving on—an (admittedly very nice) attitude of praising people (especially admin/management) who were working stupidly hard/long, rather than actually investing in fixing a clearly dysfunctional situation. And while it might not have been possible to fix later on due to embedded animosity/frustration on both sides ⇒ hiring freeze etc, it certainly was early on when I was there.
The admin load issue was not just about the faculty. And the breakdown of the relationship with the faculty really was not one-sided, at least when I was there (and I think I succeeded in semi-rescuing some of the key relationships (Oxford Martin School, Faculty of Philosophy) while I was there, at least temporarily).
This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).
I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of “academic politics”?)
A minor note on the forward-looking advice: “short-term renewable contracts” can have their place, especially for trying out untested junior researchers. But you should be aware that they also filter out mid-career academics (especially those with family obligations) who could potentially bring a lot to a research institution, but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a “careerist” in the derogatory sense.
On your second point, FHI had at least ~£10m sitting in the bank in 2020 (see below, from the report). So the fundraising freeze, while unusual, wasn’t terminal. A rephrasing of your question is “What administrative and organisational problems at FHI could possibly have prompted the Faculty to take the unusual step of a hiring and fundraising freeze in 2020, and why could it not be resolved over the next two to three years?”
There is now a dedicated FHI website with lists of selected publications and resources about the institute. (Thanks to @Stefan_Schubert for bringing this to my attention.)
See also Anders’s more personal reflections:
I made a perma.cc copy of the Final Report here: https://perma.cc/3KP9-ZSFB
I’d be curious about a list of topics they would like others to investigate/continue investigating, or a list of the most important open questions.
There is a list by Sandberg here. (The other items in that post may also be of interest.)
That’s sad. For anyone interested in why they shut down (I’d thought they had an indefinitely sustainable endowment!), the archived version of their website gives some info:
I still don’t understand why the University of Oxford was not cooperative with the institute, and why it later decided to freeze it completely. What was that about?
I’m also confused by this. Did Oxford think it was a reputation risk? Were the other philosophers jealous of the attention and funding FHI got? Was a bureaucratic parasitic egregore putting up roadblocks to siphon off money to itself? Garden variety incompetence?
Having worked there and interfaced with the Faculty for 4 years: yes, I would expect garden variety incompetence on Bostrom’s part in managing the relationship was a big part; I would predict it was the single biggest contributor to the eventual outcome.
Why was relationship management even necessary? Wasn’t FHI bringing prestige and funding to the university? Aren’t the incentives pretty well aligned?
Why are people pressing the “disagree” button? Do they disagree with the idea that FHI brought prestige? Do they disagree with the framing? Is it because I have a silly username?
Clearly there’s some politics going on here, but I have no idea who the factions are or why.
Someone help me out?
I’m very confused why you think that FHI brought prestige to Oxford University rather than the other way around
The vast majority of academic philosophy at prestigious universities will be relegated to the dustbins of history, FHI’s work is quite plausibly an exception.
To be clear, this is not a knock on philosophy; I’d guess that total funding for academic philosophy in the world is on the order of 1B. Most things that are 0.001% of the world economy won’t be remembered much 100 years from now. I’d guess philosophy in general punches well above its weight here, but base rates are brutal.
You’re answering a somewhat different question to the one I’m bringing up
My thinking was: because they were doing influential research and bringing in funding. FHI’s work seems significantly better than most academic philosophy, even by prestigious university standards.
But on reflection, yes, obviously Oxford University will bring more prestige to anything it touches.
Erm, looking at the accomplishments of FHI, I’d be genuinely surprised if random philosophers from Oxford will be nearly as influential going forwards. “It’s the man that honors the medal.”
Influence =/= prestige
I might not be tracking all the exact nuances, but I’d have thought that prestige is ~just legible influence aged a bit, in the same way that old money is just new money aged a bit. I model institutions like Oxford as trying to play the “long game” here.
The point I’m trying to make is that there are many ways you can be influential (including towards people that matter) and only some of them increase prestige. People can talk about your ideas without ever mentioning or knowing your name, you can be a polarising figure who a lot of influential people like but who it’s taboo to mention, and so on.
I also do think you originally meant (or conveyed) a broader meaning of influential—as you mention economic output and the dustbins of history, which I would consider to be about broad influence.
Andrew Tate is very influential, but entirely lacking in prestige.
Interesting example! I don’t know much about Tate, but I understand him as a) only “influential” in a very ephemeral way, in the way that e.g. pro wrestlers are, and b) only influential among people who themselves aren’t influential.
It’s possible we aren’t using the word “influential” in the same way. E.g. implicit in my understanding of “influential” is something like “having influence on people who matter” whereas maybe you’re just defining it as “having influence on (many) people, period?”
This seems like quite an in-group perspective. From the perspective of a generic philosophy faculty, that looks like a very small list of papers for a department that was running for nearly two decades. Without knowing their impact factor (which I’d guess was higher than average, but not extreme) it’s hard to say whether this was reasonable from a prestige perspective.
I don’t think it’s just an in-group perspective! Bostrom literally gives and receives feedback from kings; other members of FHI have gone on to influential positions in multi-billion dollar companies.
Are you really saying that if you ask the general public (or members of the intellectual elite), typical philosophy faculty at prestigious universities will be recognized to be as or more impressive or influential in comparison?
When did he get feedback from Kings? Googling it, the only thing I can see is that he was invited to an event which the Swedish king was also at.
Also, most of Bostrom’s extra-academic prestige is based on a small handful of the papers listed. That might justify making him something like a public communicator of philosophy, but it doesn’t obviously merit sponsoring an entire academic department indefinitely.
To be clear, I have no strong view on whether the university acted reasonably a) in the abstract or b) according to incentives in the unique prestige ecosystem which universities inhabit. But I don’t think listing a handful of papers our subgroup approves of is a good rationale for claiming that it did neither.
I’m at work and don’t have the book with me, but you can look at the “Acknowledgements” section of Superintelligence.
I agree that it’s not clear whether the Department of Philosophy acted reasonably in the unique prestige ecosystem which universities inhabit, whether in the abstract or after adjusting for FHI quite possibly being unusually difficult/annoying to work with. I do think history will vindicate my position in the abstract and “normal people” with a smattering of facts about the situation (though perhaps not the degree of granularity where you understand the details of specific academic squabbles) will agree with me.
This sounds like it’s disagreeing with the parent comment but I’m not sure if it is?
I claim that on net FHI would’ve brought more prestige to Oxford than the other way around, especially in the counterfactual world where it thrived/was allowed to thrive (which might be impractical for other reasons).
I might think of FHI as having borrowed prestige from Oxford. I think it benefited significantly from that prestige. But in the longer run it gets paid back (with interest!).
That metaphor doesn’t really work, because it’s not that FHI loses prestige when it pays it back—but I think the basic dynamic of it being a trade of prestige at different points in time is roughly accurate.
I am very glad that Bostrom chose the path he chose with FHI, in contrast to the path you seem to have chosen with CSER and Leverhulme, so I am inclined not to trust your take here too much (though your closeness and direct experience of course is important and relevant here and makes up for some of my a-priori skepticism). Where FHI successfully continued to produce great intellectual work, I have seen other organizations in similar positions easily fall prey to the demands and pressures and become a hollow shell of political correctness and vapid ideas.
It seems clear to me that Bostrom’s choices ended up much better than the choices of other people in similar situations. It seems like he didn’t compromise on the integrity of the institution he was building, and that seems to have been crucial to FHI’s success (and I expect will be crucial to Bostrom’s continued intellectual legacy).
I am confident he probably nevertheless made many mistakes in his relationship to the university, as this kind of conflict tends to force one to commit many visible errors, but the fate of FHI seems much better than the path that e.g. CSER and Leverhulme have been going down. Indeed I can’t think of a single institution in a similar situation that maintained its independence and integrity for as long as FHI did, so I am inclined to measure this more as a story of success, instead of a story of failure (though it might be that the success here is downstream of other people than Bostrom like yourself and Carrick, which I can’t rule out, though my guess is Bostrom’s overall choices were likely substantially responsible for the positive outcome here).
Of course, there did probably exist some way to get the best of all worlds here, we are talking about a very high-dimensional problem space here and Bostrom was obviously far from perfect at all the skills required to run FHI, but given the enormous success of FHI in its independence and continued ongoing production of important ideas, I would be very surprised if the answer turns out to be “garden variety incompetence”.
At the very least I feel like you should recognize that maintaining your independence under this kind of pressure is extremely hard, and that you yourself seem to have made difficult tradeoffs in the opposite direction (which I think were a mistake, but like, we can have that conversation instead of just chalking things up to “incompetence”).
Thanks Habryka. My reason for commenting is that a one-sided story is being told here about the administrative/faculty relationship stuff, both by FHI and in the discussion here, and I feel it to be misleading in its incompleteness. It appears Carrick and I disagree and I respect his views, but I think many people who worked at FHI felt it to be severely administratively mismanaged for a long time. I felt presenting that perspective was important for trying to draw the right lessons.
I agree with the general point that maintaining independence under this kind of pressure is extremely hard, and that there are difficult tradeoffs to make. I believe Nick made many of the right decisions in maintaining integrity and independence, and sometimes incurred costly penalties to do so that likely contributed to the administrative/bureaucratic tensions with the faculty. However, I think part of what is happening here is that some quite different things, from a working-inside-FHI perspective, are being conflated under a broad ‘heading’ (intellectual integrity/independence). These sometimes overlapped, but often only minimally, and can be usefully disaggregated: intellectual vision and integrity; following administrative process for your hosting organisation; bureaucratic relationship management.
Pick your battles. If you’re going to be ‘weird’ along one dimension, it often makes sense to try to be ‘easy’ along others. The really important dimension was the intellectual independence. During my time FHI constantly incurred heavy costs for being uncooperative on many administrative and bureaucratic matters that I believe did not affect the intellectual element, or only minimally, often resulting in using up far more of FHI’s own team’s time than otherwise.
One anecdote. When I arrived at FHI in 2011, there was a head of admin at philosophy (basically running the faculty) called Tom (I think). His name was mud at FHI; the petty administrative tyrant who was thwarting everything FHI wanted to do. So I went and got to know him. Turns out the issue was fixed by my having a once-a-month meeting with him to talk through what we wanted to do, and figure out how to do it. Nearly everything we wanted to do could be done, but sometimes following a process that FHI hadn’t been following, or looping in someone who needed to be aware. Not doing this had been causing him huge administrative hassle and extra workload. After that, he was regularly working overtime to help us on deadline occasions. On one occasion, he was (I’m sure) the only admin in Oxford working on Easter Monday, using the Oxford ‘authority’ to help us sort out a visa problem for a researcher’s wife unexpectedly stuck at an airport and panicking. A lot of that kind of thing. (*Note: I expect that later in FHI’s time frictions were sufficiently entrenched to prevent these kinds of positive feedbacks.)
I don’t particularly wish to have a referendum on my integrity, or a debate over whether CSER and CFI have been good or not. On the former, people can read my comment, your criticism, and make their own mind up how much to ‘trust’ me, or ask others who worked at FHI; the latter is a separate conversation where I am somewhat constrained in what I can say.
But briefly, for the same reasons that I think it’s important not to take the wrong lessons: I don’t agree that CSER and CFI have been bad for the world. They are also quite different than what my own visions for them would have been (in some ways good no doubt, in some ways bad perhaps). If you are to draw the direct comparison, I think it’s worth noting that Nick and I were in very different positions that afforded different freedoms. I took up the role at CSER somewhat reluctantly at Nick’s encouragement. I was too junior to play the kind of role that Nick played at FHI from Cambridge’s perspective (nowhere near being a professor), and there was already a senior board in place of professors mostly uninvolved with this field, and with quite different perspectives to mine. The founder whose perspectives most aligned with my own took a hands off role, for what I think are sound reasons. The extent to which this might come to limit my own intellectual and strategic relevance became apparent to me in 2015, and I spoke to Nick about resigning and doing something else; he persuaded me that staying and providing what intellectual and strategic direction I could appeared the highest value thing I could do. In hindsight, had my goal been to realise my own intellectual and strategic vision I would have been better served to continue direct academic work longer, progress to professor, and start something smaller a little later. In practice, my role required executing a shared vision in which my influence was one of many; or at CFI developing one of several distinct programmes.
With that said, I’m entirely confident that you are right that there were intellectual and strategic decisions I made that were the wrong ones, and where I judged the tradeoffs incorrectly. I’m also confident that had I been in Nick’s position, there are correct decisions that he made that I would not have had the intellectual courage to make or stick with in the face of opposition. And as I noted in a previous comment, I think elements of Nick’s personality in terms of stubbornness and uncompromisingness about the way he wanted to do things contributed both to the intellectual independence and to the administrative/bureaucratic problems; I just wish they could have been more selectively applied. (I also don’t think Nick made all the right intellectual and strategic decisions, but that, again, is a different discussion.)
Re: incompetence in terms of the faculty relationship, I believe the comment is correct and I stand by it. But it is of course only one part of the story (one I wanted not to be lost). And how strongly I hold it may be coloured by my own feelings. FHI was something that was important to me too, and that I put years of hard work into supporting. Even as late as 2022 I was working with Oxford to try to find solutions. I feel that there were many unforced errors, and I am frustrated.
(With apologies, I’m leaving for research meetings in China tomorrow, so will likely not have time to reply for a few weeks).
And I guess I should just say directly: I do wish it were possible to raise (specific) critical points on matters like faculty relations where I have some direct insight, and discuss these, without immediate escalation to counterclaims that my career’s work has been bad for the world, that I am not to be trusted, and that my influence is somehow responsible for attacks on people’s intellectual integrity. It’s very stressful and upsetting.
I suffer from (mild) social anxiety. That is not uncommon. This kind of very forceful interaction is valuable for some people but is difficult and costly for others to engage with. I am going to engage less with EA forum/LW as a result of this and a few similar interactions, and I am especially going to be more hesitant to be critical of EA/LW sacred cows. I imagine, given what you have said about my takes, that this will be positive from your perspective. So be it. But you might also consider the effect it will have on others who might be psychologically similar, and whose takes you might consider more valuable.
This makes me sad as I enjoy reading your comments and find them insightful. That said, I understand and support your reasoning. I feel as though some amount of “mistake mindset” has disappeared a little in the two years I’ve been reading the forum.
Thanks Rían, I appreciate it. And to be fair, this is from my perspective as much a me thing as it is an Oli thing. Like, I don’t think the global optimal solution is an EA forum that’s a cuddly little safe space for me. But we all have to make the tradeoffs that make most sense for us individually, and this kind of thing is costly for me.
I agree with this, but also think the forum “not being cuddly for Sean” and “not driving contributors away” aren’t mutually exclusive. Maybe I am not seeing all the tradeoffs though.
I feel your pain. I hope the amount of upvotes and hearts you’re getting helps you feel better, but I know brains don’t always work that way (mine doesn’t).
Thanks Sean. I think this is a good comment and I think makes me understand your perspective better.
I do think we obviously have large worldview differences here, that seem maybe worth exploring at some point, but this comment (as well as some private conversations sparked by these comments with others at FHI) made me feel more sympathetic to the perspective of “there is some history-rewriting happening that seems scary, where the university gets portrayed as this kind of boogeyman, and while it does seem the university did some unreasonable-seeming things, I think a lack of empathy and practicality in relation to that university had a lot of bad effects on both the university and FHI, and we should be very wary of remembering the story of FHI as a purely one-sided one”.
Deleting this because on re-reading I think I’m just repeating myself, but in a more annoyed way. Thanks for checking with other people, I’ll leave it at that.
Thank you. I’m grateful you checked with other people. Yes, I do think there is some history rewriting and mythologising going on here compared to my own memory of how things were, and this bothers me because I think the truth does matter.

There is a very real sense in which Nick had a pretty sweet setup at Oxford, in terms of having the power and influence to do an unusual thing. And there were a bunch of people around him working insanely hard to help make that happen. I also do think there is a degree to which, yes, Nick blew it. I don’t really want to dwell on this because it feels a bit like bad-mouthing FHI at its funeral. And it’s not the whole story. But it is a source of some frustration to me, because I did not have that position of power and influence in trying to do somewhat similar things, and have spent years banging my head against various walls, and I would have liked to see the FHI story go well. That is not to say I would have made all the right decisions had I had that power and influence (I’m sure I would not), or that I did make all the right decisions in the situations I was in. But I still think I am within my rights to have a view as someone who was actually there for 4 years doing the thing.

As well as all the good stuff, Nick was unusually pedantic and stubborn about a huge range of things, many of them (to my lights) relatively unimportant, from expensive cups to fonts to refusing to follow processes that would not realistically have impeded FHI’s intellectual activities. And so many things would get framed as a battle to be won against the Faculty/University, where a bit of cooperation would have gone a long way. Play stupid games, win stupid prizes. It sucked up huge amounts of FHI time, huge amounts of Faculty time, and huge amounts of social capital, which made it harder to stand ground/get cooperation on the stuff that mattered. And it compounded over time. You don’t trust me, but do some digging and I think you’ll find it.
There were two sides to this thing.

All of this I am saying based on a lot of context and experience at FHI. Rather than question or challenge me on my original point, your immediate reaction was two multi-paragraph posts seemingly aimed at publicly discrediting me in every way—repeatedly saying that you don’t consider me trustworthy; that my career’s work has been bad for the world and therefore my takes shouldn’t be listened to; that I am some sort of malign intellectual influence who is somehow responsible for intellectual attacks on other people*. To me this doesn’t look like truth-seeking behaviour. It looks more like an effort to discredit a person who challenged the favoured narrative.

Even after being told by someone who actually was at FHI that I was a big part of making it work, your response seems to imply that if you did some digging into my time at FHI, you would find that actually my influence turned out to be negative and harmful. Well, do that digging, see if that’s what you find. I worked damn hard there, took personal risks, and did good work. You want to claim that’s false, you can show some evidence.

*And no, I don’t think these things are equivalently harsh. I criticised Nick for bureaucratic mistakes. Nobody respects Nick primarily for his administrative/bureaucratic relationship skills. They respect him for other things, which I have praised on other occasions. Your personal go at me targeted pretty much every aspect of why people might respect me or consider me worth listening to. That is fundamentally different.

Yudkowsky’s comments at his sister’s wedding seem surprisingly relevant here:
Yeah I made a similar point here.
Sorry Oli, but what is up with this (and your following) comment?
From what I’ve read,[1] you seem to value what you call “integrity” almost as a deontological good above all others. And this has gained you many admirers. But to my mind high integrity actors don’t make the claims you’ve made in both of these comments without bringing examples or evidence. Maybe you’re reacting to Sean’s use of ‘garden variety incompetence’ which you think is unfair to Bostrom’s attempts to toe the fine line between independence and managing university politics but still, I feel you could have done better here.
To make my case:
When you talk about “other organizations… become a hollow shell of political correctness and vapid ideas” you have to be referring to CSER & Leverhulme here right, like it’s the only context that makes sense.
If not, I feel like that’s very misleadingly phrased.
But if it is, then calling those organisations ‘hollow shells’ of ‘vapid ideas’ is like really rude, and if you’re going to go there at least have the proof to back it up?
Now that just might be you having very different politics from CSER & Leverhulme people. But then you say “he [Bostrom] didn’t compromise on the integrity of the institution he was building”, which again I read as you directly contrasting against CSER & Leverhulme—or even Sean in person.
Is this true? Surely organisations can have different politics or even have worse ideas without compromising on integrity?
If they did compromise on integrity, feels like you should share what those are.
If it is directed at Sean personally, that feels very nasty. Making assertions about someone’s integrity without solid proof isn’t just speculation, it’s harmful to the person and also poor ‘epistemic hygiene’ for the community at large.
You say “the track record here speaks quite badly to Sean’s allocation of responsibility by my lights”. But I don’t know what ‘track record’ you’re speaking about here. Is it at FHI? CSER & Leverhulme? Sean himself?
Finally, this trio of claims in your second comment really rubbed me[2] the wrong way. You say that you think:
“CSER and Leverhulme, which I think are institutions that have overall caused more harm than good and I wish didn’t exist”
This is a huge claim imo. More harm than good? So much so that you wish it didn’t exist? With literally no evidence apart from it being your opinion???
“Sean thought were obvious choices were things that would have ultimately had long-term bad consequences”
I assume that this is about relationship management with the university perhaps? But I don’t know what to make of it because you don’t say what these ‘obvious choices’ are, or why you think they’re so likely to have bad consequences
“I also wouldn’t be surprised if Sean’s takes were ultimately responsible for a good chunk of associated pressure and attacks on people’s intellectual integrity”
This might be the worst one. Why are Sean’s takes responsible? What were the attacks on people’s integrity? Was this something Sean did on purpose?
I don’t know what history you’re referring to here, and the language used is accusatory and hostile. It feels really bad form to write it without clarifying what you’re referring to for people (like me) who don’t know what context you’re talking about.
Maybe from your perspective you feel like you’re just floating questions here and sharing your personal perspective, but given the content of what you’ve said I think it would have been better if you had either brought more examples or been less hostile.
And I feel like I’ve read quite a bit, both here, on LW, and on your Twitter
And given the votes, a lot of readers including some who may have agreed with your first comment
I don’t understand. I do not consider myself to be under the obligation that all negative takes I share about an organization must be accompanied by a full case for why I think those are justified.
Similar to how it would IMO be crazy to request that all positive comments about an organization must be accompanied by full justifications for one’s judgement.
I have written about my feelings about CSER and Leverhulme some in the past (one of my old LTFF writeups for example includes a bunch of more detailed models I have of CSER). I have definitely not written up most of my thoughts, as they would span many dozens of pages.
I think holding criticism to a higher standard than praise is one of the most common low level violations of integrity that people engage in on an ongoing basis. I absolutely do not consider it part of my concept of integrity that one may only make negative claims about people while also making a comprehensive argument and providing extensive evidence of their veracity.
Indeed, the honor culture from which I guess that instinct comes is one of the things I am culturally most opposed to, so insofar as you have a concept of integrity here, it doesn’t seem to have that much overlap with mine (which is fine, words are hard, we can disambiguate in the future).
Fwiw I think part of the issue that I had[1] with your comment is that the comment came across much more aggressively and personally, rather than as a critique of an organization. I do think the bar for critiquing individuals ought to be moderately higher than the bar for critiquing organizations. Particularly when the critique comes from a different place/capacity[2] than strictly necessary for the conversation[3].
I expect some other people like JWS had a similar reaction to me, and stronger in magnitude. I did think your comment was on net useful for the conversation (not including the more global effects/externalities).
Before your comment blew up, I upvoted/agreevoted it because I do think there’s a true and important point to be made about FHI being much more successful at doing future-of-humanityish research than “peer” organizations that are more successful at looking like a normal/respectable organization. But I did wince (and didn’t strong upvote). I also lacked the information necessary to judge whether your perceived causal models were correct.
Put another way, I read your comment as quite far on the “contextualizing” end of contextualizing vs decoupling norms, and I expected more decoupling in online spaces we both frequent.
Eg, if this was a fundraising post for CSER, or a post similar to the Conjecture critique, public criticisms of Sean in his capacity as director might be necessary. Similarly, if Sean made logical errors locally in his comment, or displayed poor reading comprehension, or was overly aggressive, criticisms of him in his capacity as an internet commentator may be necessary.
Hmm, I agree that there was some aggression here, but I felt like Sean was the person who first brought up direct criticism of a specific person, and very harsh one at that (harsher than mine I think).
Like, Sean’s comment basically said “I think it was directly Bostrom’s fault that FHI died a slow painful death, and this could have been avoided with the injection of just a bit of competence in the relevant domain”. My comment is more specific, but I don’t really see it as harsher. I also have a prior to not go into critiques of individual people, but that’s what Sean did in this context (of course Bostrom’s judgement is relevant, but I think in that case so is Sean’s).
Sure, social aggression is a rather subjective call. I do think decoupling/locality norms are relevant here. “Garden variety incompetence” may not have been the best choice of words on Sean’s part,[1] but it did seem like a) a locally scoped comment specifically answering a question that people on the forum understandably had, b) much of it empirically checkable (other people formerly at FHI, particularly ops staff, could present their perspectives re: relationship management), and c) Bostrom’s capacity as director is very much relevant to the discussion of the organization’s operations or why the organization shut down.
Your comment first presents what I consider to be a core observation that is true and important, namely, FHI did a lot of good work, and this type of magic might not be easy to replicate if you do everything with apparent garden-variety competence. But afterwards, it also brought in a bunch of what I consider to be extraneous details on Sean’s competency, judgment, and integrity. The points you raise are also more murkily defined and harder to check. So overall I think of your comment as more escalatory.
or maybe it was under the circumstances. I don’t know the details here, maybe the phrase was carefully chosen.
It wasn’t carefully chosen. It was the term used by the commenter I was replying to. I was a little frustrated, because it was another example of a truth-seeking enquiry by Milena getting pushed down the track of only-considering-answers-in-which-all-the-agency/wrongness-is-on-the-university side (including some pretty unpleasant options relating to people I’d worked with: ‘parasitic egregore’/‘siphon money’).
>Did Oxford think it was a reputation risk? Were the other philosophers jealous of the attention and funding FHI got? Was a beaurocratic parasitic egregore putting up roadblocks to siphon off money to itself? Garden variety incompetence?
So I just did copy and paste on the most relevant phrase, but flipped it. Bit blunter and more smart-arse than I would normally be (as you’ve presumably seen from my writing, I normally caveat to a probably-tedious degree), but I was finding it hard to challenge the simplistic fhi-good-uni-bad narrative. It was one line, I didn’t think much about it.
I remain of the view that the claim is true/a reasonable interpretation, but de novo / in a different context I would have phrased differently.
One other observation that might explain some of the different perceptions on ‘blame’ here.
I don’t think Oxford’s bureaucracy/administration is good, and I think it did behave very badly at points*. But overall, I don’t think Oxford’s bureaucracy/behaviour was a long way outside what you would expect for the reference class of thousand-year-old-institutions with >10,000 employees. And Nick knew that was what it was, chose to be situated there, and did benefit (particularly in the early days) from the reputation boost. I think there is some reasonable expectation that having made that choice, he would put some effort into either figuring out how to operate effectively within its constraints, or take it somewhere else.
(*it did at points have the feeling of grinding inevitability of a failing marriage, where beyond a certain point everything one side did was perceived in the worst light and with maximal irritation by the other side, going in both directions, which contributed to bad behaviour I think).
For what it’s worth, I’m (at least partly) sympathetic to Oli’s position here. If nothing else, from my end I’m not confident that the combined time usage of:
[Oli producing book-length critique of CSER/Leverhulme, or me personally, depending] +
[me producing presumably book-length response] +
[further back and forth] +
[a whole lot of forum readers trying to unpick the disagreements]
is overall worth it, particularly given (a) it seems likely to me there are some worldview/cultural differences that would take time to unpick and (b) I will be limited in what I can say on certain matters by professional constraints/norms.
I think this might be one of the LTFF writeups Oli mentions (apologies if wrong), and seems like a good place to start:
https://forum.effectivealtruism.org/posts/an9GrNXrdMwBJpHeC/long-term-future-fund-august-2019-grant-recommendations-1#Addendum__Thoughts_on_a_Strategy_Article_by_the_Leadership_of_Leverhulme_CFI_and_CSER
And as to the claim “I also wouldn’t be surprised if Sean’s takes were ultimately responsible for a good chunk of associated pressure and attacks on people’s intellectual integrity” it seems like some of this is based on my online comments/writing. I don’t believe I’ve ever deleted anything on the EA forum, LW, or very much on twitter/linkedin (the online mediums I use), and my papers are all online, so again a decent place to start is to search for my username; people can come to their own conclusions.
Yep, that’s the one I was thinking about. I’ve changed my mind on some of the things in that section in the (many) years since I wrote it, but it still seems like a decent starting point.
In my experience people update less from positive comments and more from negative comments intuitively to correct for this asymmetry (that it’s more socially acceptable to give unsupported praise than unsupported criticism). Your preferred approach to correcting the asymmetry, while I agree is in the abstract better, doesn’t work in the context of these existing corrections.
Yeah, I agree this is a real dynamic. It doesn’t sound unreasonable for me to have a standard link that I link to if I criticize people on here that makes it salient that I am aspiring to be less asymmetric in the information I share (I do think the norms are already pretty different over on LW, where if anything I think criticism is a bit less scrutinized than praise, so it’s not like this is a totally alien set of norms).
Perhaps this old comment from Rohin Shah could serve as the standard link?
(Note that it’s on the particular case of recommending people do/don’t work at a given org, rather than the general case of praise/criticism, but I don’t think this changes the structure of the argument other than maybe making point 1 less salient.)
Excerpting the relevant part:
Yeah, that’s a decent link. I do think this comment is more about whether anti-recommendations for organizations should be held to a similar standard. My comment also included some criticisms of Sean personally, which I think do also make sense to treat separately, though I definitely intend to also try to debias my statements about individuals after my experiences with SBF in particular on this dimension.
I don’t understand your lack of understanding. My point is that you’re acting like a right arse.
When people make claims, we expect there to be some justification proportional to the claims made. You made hostile claims that weren’t following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn’t have anything to back it up.
I don’t understand how you wouldn’t think that Sean would be hurt by it.[2] So to me, you behaved like arse, knowing that you’d hurt someone, didn’t justify it, got called out, and are now complaining.
So I don’t really have much interest in continuing this discussion for now, or much opinion at the moment of your behaviour or your ‘integrity’.
Like nobody was discussing CSER/CFI or Sean directly until you came in with it
Even if you did think it was justified
This seems relatively straightforwardly false. In as much as Sean is making claims about the right strategy to follow for FHI, and claiming that the errors at FHI were straightforwardly Bostrom’s fault and attributable to ‘garden variety incompetence’, the degree of historical success of the strategies that Sean seems to be advocating for is of course relevant in assessing whether that’s accurate. And CSER and Leverhulme seem like the obvious case studies that are available here.
We can quibble over the exact degree of relevance of the points I brought up, but the logical connection here seems straightforward.
Separately, I see no way how you could know whether I have anything to back up my criticism. I have written about my thoughts on CSER in the past, and I did not intend to write up all the thoughts and evidence I have in this thread.
If you want we can have a call for an hour, or you can investigate this question yourself and come to your own conclusion, and then you can make a judgement of whether I have anything to back up my opinion, but as I have said upthread, I don’t consider myself to have an obligation to extensively document the evidence for all of my opinions and judgements before I feel comfortable expressing them.
To be clear, I also absolutely do not hold myself to this standard. I feel totally fine, and encourage others to do as well, to casually mention controversial and important beliefs of theirs whenever it seems relevant, without an obligation to fully back up that claim. Indeed, I am pretty confused what norm you are referring to here, since I also can’t think of this norm in almost any context I am in.
If someone mentions they believe in god, I don’t expect that this means they are ready or want to have a conversation about theology with me right then and there. When someone says they vote libertarian in the US general election I totally don’t expect to have a conversation with them about macroeconomic principles right there. People express large broad claims all the time without wanting to go into all the details.
In the examples you give, the arguments for and against are fairly cached so there’s less of a need to bring them up. That doesn’t apply here. I also think your argument is often false even in your examples—in my experience, the bigger the gap between the belief the person is expressing and that of the ~average of everyone else in the audience, the more likely there is to be pushback (though not always by putting someone on the spot to justify their beliefs, e.g. awkwardly changing the conversation or straight out ridiculing the person for the belief)
Pushback (in the form of arguments) is totally reasonable! It seems very normal that if someone is arguing for some collective path of action, using non-shared assumptions, that there is pushback.
The thing that feels weirder is to invoke social censure, or to insist on pushback when someone is talking about their own beliefs and not clearly advocating for some collective path of action. I really don’t think it’s common for people to push back when someone is expressing some personal belief of theirs that is only affecting their own actions.
In this case, I think it’s somewhat ambiguous whether I was arguing for a collective path of action, or just explaining my private beliefs. By making a public comment I at least asserted some claim of relevance to others, but I also didn’t explicitly say that I was trying to get anyone else to really change behavior.
And in either case, invoking social censure on the basis of someone expressing a belief of theirs without also giving a comprehensive argument for that belief seems rare (not unheard of, since there are many places in the world where uniform ideologies are enforced, though I don’t think EA has historically been such a place, nor wants to be such a place).
Sean is one of the under-sung heroes who helped build FHI and kept it alive. He did this by—among other things—careful and difficult relationship management with the faculty. I had to engage in this work too and it was less like being between a rock and a hard place and more like being between a belt grinder and another bigger belt grinder.
One can disagree about apportioning the blame for this relationship—and in my mind, I divide it differently than Sean—but after his four years of first-hand experience, my response to Sean is to take his view seriously, listen, and consider it. (And to give it weight even against my 3.5 years of first-hand experience!)
As a tangent, respectfully listening to people’s views and expressing gratitude—and avoiding unnecessary blame—was a core part of what allowed ops and admin staff to keep FHI alive for so long against hostile social dynamics. As per Anders’ comment posted by Pablo here, it might be useful for extending EA’s productive legacy as well.
Sean thank you so much for all you did for FHI.
Thanks, that’s useful context, and I definitely have been less close to things than you have (and I have much less reason to distrust your take here than Sean’s).
I do think that given the results of Sean’s strategy at CSER and Leverhulme, which I think are institutions that have overall caused more harm than good and I wish didn’t exist, my best guess would be that as I dig into this more, I would find that what Sean thought were obvious choices were things that would have ultimately had long-term bad consequences, and I also wouldn’t be surprised if Sean’s takes were ultimately responsible for a good chunk of associated pressure and attacks on people’s intellectual integrity (though I don’t know, and would be interested in takes from people who have been responsible for core FHI intellectual contributions about the tradeoffs that Sean was advocating for).
My guess is Sean did probably get some things right, but I do think the track record here speaks quite badly to Sean’s allocation of responsibility by my lights.
At least from where I am standing, I do think a big issue with FHI’s relationship to the university was one in which I repeatedly saw FHI get bullied by the university in a way that felt somewhat obviously crazy to me, and at least my current (low-confidence) read of the situation is that Sean instead of reacting to that appropriately seemed to push for making FHI the kind of institution that wouldn’t fight back against that in a reasonable way (by e.g. threatening to just leave before it got completely smothered by the university, or drawing lines in the sand which would have caused FHI to shut down sooner instead of losing its coherence over many years of pain).
But again, I don’t have a ton of context here, I am mostly reasoning from the online comments of Sean that I’ve seen and the de-facto fate that befell the FHI and would update a good amount on reports by people who were actually there, especially in the later years.
I’d love if you could comment on which concrete actions were harmful. (I donated to CSER a long time ago and then didn’t pay attention to what they were doing, so I’m curious.)
This thread doesn’t feel great for this, though CSER is an organization for which I do really wish more people shared their assessments. Also happy to have a call if your curiosity extends that far, and you would be welcome to write up the things that I say in that call publicly (though of course that’s a lot of work and I don’t think you have any obligation to do so).
(Thanks, dm sent.)
>”and would update a good amount on reports by people who were actually there, especially in the later years.”
For takes from people you might trust more than me, you might consider reaching out to Owen Cotton-Barratt, Niel Bowerman, or Page Hedley, all of whom played relevant roles later than me.
Fwiw I downvoted this post because it doesn’t say anything substantial about what you think CSER and Leverhulme are doing wrong, so it just comes across as abuse.
@Habryka I’m just gonna call you out here. Someone −9ed my above comment in a single vote, and there are only about two people on the forum who that could be, one of whom is the person I was criticising.
Given that I (I think clearly) meant this as a constructive remark, and that you’re one of the most influential people in the EA movement, and that EA is supposed to encourage transparency and criticism, this sends a fairly unambiguous signal that the latter isn’t really true.
In fact, I genuinely now imagine I’ve lost some small likelihood of being received positively by you if I ever approach Lightcone for support (and that I’m losing more by writing this). This seems like a bad sign for EA epistemic health.
Please say if this wasn’t you, and I’ll retract and apologise.
I think many people have a voting power of 9. I do, and I know many people with more karma than me.
There are, I believe, fifteen people with +9 strongvotes (the fifteen with 10K+ karma). You can search the People directory by karma to see the list.
As one of those people, I wish there was a smallish chance our strongvotes would count as +8 and a small chance the +8 people’s votes would count as +9 to make attempted identification of a voter more difficult.
Another option is to let people with a voting power of n cast a vote of any strength between 1 and n. This may be somewhat challenging from a UI perspective, though.
Yeah, I’ve considered this a bunch (especially after my upvote strength on LW went up to 10, which really limits the number of people in my reference class).
I think a whole multi-selection UI would be hard, but maybe having a user setting that you can change on your profile where you can set your upvote-strength to be any number between 1 and your current vote strength seems less convenient but much easier UI-wise. It would require somewhat involved changes in the way votes are stored (since we currently have an invariant that guarantees you can recalculate any user’s karma from nothing but the vote table, and this would introduce a new dependency into that with some reasonably big performance implications).
Alternatively, you could make the downvote button reduce votes by one if the vote count is positive, and vice versa. For example, after casting a +9 on a comment by strongly upvoting it, the user can reduce the vote strength to +7 by pressing the downvote button twice.
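The decrement idea above can be sketched in a few lines. This is purely a hypothetical illustration of the proposed interaction (the class and method names are invented, not the forum’s actual codebase): pressing the opposite button softens an existing vote one point at a time, while pressing it on a neutral comment casts a fresh full-strength vote.

```python
# Hypothetical sketch of the "press the opposite button to step down"
# voting UI proposed above. Not the actual forum implementation.

class Vote:
    def __init__(self, max_strength: int):
        self.max_strength = max_strength  # e.g. 9 for high-karma users
        self.strength = 0                 # current vote on one comment

    def press_up(self) -> int:
        if self.strength < 0:
            self.strength += 1            # soften an existing downvote by one
        else:
            self.strength = self.max_strength  # fresh strong upvote
        return self.strength

    def press_down(self) -> int:
        if self.strength > 0:
            self.strength -= 1            # soften an existing upvote by one
        else:
            self.strength = -self.max_strength  # fresh strong downvote
        return self.strength

vote = Vote(max_strength=9)
vote.press_up()     # casts a +9
vote.press_down()   # steps down to +8
vote.press_down()   # steps down to +7
```

One side effect of this design, which the comment doesn’t address, is that the down button becomes ambiguous: a reader who wants to flip a +9 into a downvote would have to press it nine times to pass through zero first.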
That’s an interesting idea, I hadn’t considered that!
Another idea: force each tier of votes to have at least say 10 members. So even when the highest karma person breaches a new threshold, they don’t get the extra firepower until there are at least nine other great powers to join them.
(I care quite a bit about votes being anonymous, so will generally glomarize in basically all situations where someone asks me about my voting behavior or the voting behavior of others, sorry about that)
Sad to hear this happened, but it seems the situation was irrecoverable, and the organisation was already dead for a bit before it officially shuttered.
Glad for this post and all the comments.
From Bostrom’s website, an updated “My Work” section reads:
A very interesting summary, thanks.
However, I’d like to echo Richard Chappell’s unease at the report’s praise of short-term contracts. These likely cause a lot of mental-health problems and will dissuade people who might have a lot to contribute but can’t cope with worrying about whether they will need to find a new job, or even a new career, in a couple of years’ time. Short-term contracts could be read as a way of avoiding university processes for firing people—but then the lesson for future organisations may be to set up outside a university structure and offer a sensible degree of job security.
FHI almost singlehandedly made salient so many obscure yet important research topics. To everyone who contributed over the years, thank you!
Nick Bostrom’s website now lists him as “Principal Researcher, Macrostrategy Research Initiative.”
Doesn’t seem like they have a website yet.
The Macrostrategy Research Initiative was registered as a company in August 2023. Its Director is Toby Newberry (“Occupation: Chief of Staff”), who’s been at FHI/GPI for a few years.
https://find-and-update.company-information.service.gov.uk/company/15054502
Not to be confused with The Macrostrategy Partnership: https://www.macrostrategy.co.uk/
For anyone wondering about the definition of macrostrategy, the EA forum defines it as follows:
Executive summary: The Future of Humanity Institute (FHI) achieved notable successes in its mission from 2005-2024 through long-term research perspectives, interdisciplinary work, and adaptable operations, though challenges included university politics, communication gaps, and scaling issues.
Key points:
Long-term research perspectives and pre-paradigmatic topics were key to FHI’s impact, enabled by stable funding.
An interdisciplinary and diverse team was valuable for tackling neglected research areas.
Operations staff needed to understand the mission as it grew in complexity.
Failures included insufficient investment in university politics, communication gaps, and challenges scaling up gracefully.
Replicating FHI would require the right people, intellectual culture, and shielding from constraints, not just copying its structure.
The most important factor is pursuing the key topics and mission, even as knowledge and priorities evolve.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.