Leverage Research: reviewing the basic facts
Resources spent
Leverage Research has now existed for over 7.5 years.[1]
Since 2011, it has consumed over 100 person-years of human capital.
From 2012-16, Leverage Research spent $2.02 million, and the associated Institute for Philosophical Research spent $310k.[2][3]
Outputs
Some of the larger outputs of Leverage Research include:
Work on Connection Theory: this does not include the initial creation of the theory itself, which was done by Geoff Anders prior to founding Leverage Research
Contributions to productivity of altruists via the application of psychological theories including Connection Theory
Intellectual contributions to the effective altruism community: including early work on cause prioritisation and risks to the movement.
Intellectual contributions to the rationality community: including CFAR’s class on goal factoring
The EA Summits in 2013-14: the EA Summit was a precursor to EA Global, and is being revived in 2018
Its website also has seven blog posts.[4]
Recruitment Transparency
Leverage Research previously organized the Pareto Fellowship in collaboration with another effective altruism organization. According to one attendee, Leverage staff secretly discussed the attendees, using an individual Slack channel for each.
Leverage Research has provided psychology consulting services using Connection Theory, leading it to obtain mind-maps of a substantial fraction of its prospective staff and donors, according to reports from those prospective staff and donors.
The leadership of Leverage Research have on multiple occasions overstated their rate of staff growth by more than double, in personal conversation.
Leverage Research sends staff to effective altruism organizations to recruit specific lists of people from the effective altruism community, as is apparent from discussions with and observation of Leverage Research staff at these events.
Leverage Research has spread negative information about organisations and leaders that would compete for EA talent.
General Transparency
The website of Leverage Research has been excluded from the Wayback Machine.[5]
Leverage Research has had a strategy of using multiple organizations to tailor conversations to the topics of interest to different donors.
Leverage Research had longstanding plans to replace Leverage Research with one or more new organizations if the reputational costs of the name Leverage Research ever become too severe. A substantial number of staff of Paradigm Academy were previously staff of Leverage Research.
General Remarks
Readers are encouraged to add additional facts known about Leverage Research in the comments section, especially where these can be supported by citation, or direct conversational evidence.
Citations
1. https://www.lesswrong.com/posts/969wcdD3weuCscvoJ/introducing-leverage-research
2. https://projects.propublica.org/nonprofits/organizations/453989386
3. https://projects.propublica.org/nonprofits/organizations/452740006
4. http://leverageresearch.org/blog
5. https://web.archive.org/web/*/http://leverageresearch.org/
[My views only]
Although few materials remain from the early days of Leverage (I am confident they acted to remove themselves from the Wayback Machine, as other sites link to Wayback versions of their old documents which now 404), there are some interesting remnants:
A (non-wayback) website snapshot from 2013
A version of Leverage’s plan
An early Connection Theory paper
I think this material (and the surprising absence of material since) speaks for itself—although I might write more later anyway.
Per other comments, I’m also excited by the plan of greater transparency from Leverage. I’m particularly eager to find out whether they still work on Connection Theory (and what the current theory is), whether they addressed any of the criticism (e.g. 1, 2) levelled at CT years ago, whether the further evidence and argument mentioned as forthcoming in early documents and comment threads will materialise, and generally what research (on CT or anything else) they have done in the last several years, and when this will be made public.
I was interviewed by Peter Buckley and Tyler Alterman when I applied for the Pareto fellowship. It was one of the strangest, most uncomfortable experiences I’ve had over several years of being involved in EA. I’m posting this from notes I took right after the call, so I am confident that I remember this accurately.
The first question asked about what I would do if Peter Singer presented me with a great argument for doing an effective thing that’s socially unacceptable. The argument was left as an unspecified black box.
Next, for about 25 minutes, they taught me the technique of “belief reporting”. (See some information here and here). They made me try it out live on the call, for example by making me do “sentence completion”. This made me feel extremely uncomfortable. It seemed like unscientific, crackpot psychology. It was the sort of thing you’d expect from a New Age group or Scientology.
In the second part of the interview (30 minutes?), I was asked to verbalise what my system one believes will happen in the future of humanity. They asked me to just speak freely without thinking, even if it sounded incoherent. Again it felt extremely cultish. I expected this to last max 5 minutes and to form the basis for a subsequent discussion. But they let me ramble on for what felt like an eternity, and there were zero follow-up questions. The interview ended immediately.
The experience left me feeling humiliated and manipulated.
I had an interview with them under the same circumstances and also had the belief reporting trial. (I forget if I had the Peter Singer question.) I can confirm that it was supremely disconcerting.
At the very least, it’s insensitive—they were asking for a huge amount of vulnerability and trust in a situation where we both knew I was trying to impress them in a professional context. I sort of understand why that exercise might have seemed like a good idea, but I really hope nobody does this in interviews anymore.
You could have always told them you felt uncomfortable, and stopped.
The problem is not belief reporting — the problem is your unwillingness to express your discomfort.
“It felt extremely cultish” is not useful. Belief reporting, on the other hand, is very, very useful if you can get over your sensitivities.
People generally have good reasons for following instructions during fellowship interviews. It’s not unusual to hide your discomfort in a situation with that kind of power imbalance.
(same person as avindroth)
The only reason you perceive it as a power imbalance is that you treat it as such. You can say you feel uncomfortable if you feel uncomfortable.
Will you do anything they ask of you during an interview? No.
And I answer this having considered what belief reporting and sentence completion are. Feeling hurt and humiliated by such an experience is mind-boggling to me. I can only understand this as someone feeling incapable of expressing their discomfort.
It’s fine to hide your discomfort, but don’t project that onto the techniques themselves (especially if they were trying to TEACH you something).
Power imbalances are not just all in my head. When you do not have money, and someone with money asks you to do humiliating things to get money, sometimes you do them, even though you don’t want to.
I have no dealings with Leverage and literally no stake in this conversation. But there is definitely a power imbalance between people offering money and people who need money to live.
An article in Splinter News was released a few days ago, showing leaked emails in which Jonah Bennett, a former Leverage employee who is now editor-in-chief for Palladium Magazine (LinkedIn), was involved with a white nationalist email list, where he, among other things, made anti-Semitic jokes about a Holocaust survivor, said he “always has illuminating conversations with Richard Spencer”, and complained about someone being “pro-West before being pro-white/super far-right”.
I have 35 mutual friends with this guy on Facebook, mostly EAs. This makes me think that while at Leverage he interacted a reasonable amount with the EA community. (Obviously, I expect my EA mutual friends to react with revulsion to this stuff.)
Bennett denies this connection; he says he was trying to make friends with these white nationalists in order to get information on them and white nationalism. I think it’s plausible that this is somewhat true. In particular, I’d not be that surprised if Bennett is not a fan of Hitler, and if he made racist jokes more to fit in. But I’d be pretty surprised if it turned out that he didn’t have endorsed, explicitly racist views—this seems to be the simplest explanation for a number of his racist comments, which seemed to come from a place of actual racism-based analysis.
I previously knew that Leverage had employed several neoreactionaries; I had perhaps naively assumed that they were mostly just contrarian intellectual types who say various edgy things and probably don’t do much good for the world, but who are basically not explicitly racist as part of their endorsed values. The fact that Jonah seems to be racist in this way updates me towards thinking that the people he worked with in the past at Leverage who now work at Palladium might also be that racist, which makes me think that it’s likely that Leverage at least for a while had a whole lot [EDIT: I no longer endorse this, see endnotes] of really racist employees. I think this is really sketchy and bad.
Suppose there’s an org without much evidence of positive impact; how strong does the evidence of its white nationalist connection have to be before I decide to write it off? Personally I think the answer is “not very high”; I think the evidence in favor of Leverage’s connections to white nationalism is above that low threshold.
Most hardcore EAs I know in person already think Leverage is pretty sketchy and think it’s bad that they try to associate themselves with EA so much; I would like it if the wider community also had this perspective.
----
Edited to add (Oct 08 2019): I wrote “which makes me think that it’s likely that Leverage at least for a while had a whole lot of really racist employees.” I think this was mistaken and I’m confused by why I wrote it. I endorse the claim “I think it’s plausible Leverage had like five really racist employees”. I feel pretty bad about this mistake and apologize to anyone harmed by it.
“Leverage” seems to have employed at least 60 people at some time or another in different capacities. I’ve known several (maybe met around 15 or so), and the ones I’ve interacted with often seemed like pretty typical EAs/rationalists. I got the sense that there may have been a few people there interested in the neoreactionary movement, but also got the impression the majority really weren’t.
I just want to flag that I really wouldn’t want EAs generally to think that “people who worked at Leverage are pretty likely to be racist,” because this seems quite untrue and quite damaging. I don’t have much information about the complex situation that represents Leverage, but I do think that the sum of the people ever employed by them still holds a lot of potential. I’d really not want them to get or feel isolated from the rest of the community.
Ok actually I just reread this comment and now I think that the thing you quoted me as saying is way too strong. I am confused by why I wrote that.
Yep, understood, and thanks for clarifying in the above comment. I wasn’t thinking you thought many of them were racist, but did think that at least a few readers may have gotten that impression from the piece.
There isn’t too much public discussion on this topic and some people have pretty strong feelings on Leverage, so sadly sometimes the wording and details matter more than they probably should.
Yeah, I don’t think that people who worked at Leverage are pretty likely to be racist.
Hi Buck, Ozzie and Greg,
I thought I’d just add some data from my own experience.
For context, I’ve been heavily involved in the EA community, most recently running CEA. After I left CEA, I spent the summer researching what to do next and recently decided to join the Leverage Research team. I’m speaking personally here, not on behalf of Leverage.
I wanted to second Ozzie’s comment. My personal experience at least is that I’ve found the Leverage and Paradigm teams really welcoming.
They do employ people with a wide range of political views with the idea that it helps research progress to have a diversity of viewpoints. Sometimes this means looking at difficult topics, and I’ve sometimes found it uncomfortable to try to challenge why I hold different viewpoints, but I’ve always found that the focus is on understanding ideas, and that the attitude to the individual people is one of deep respect. I’ve found this refreshing.
I wanted to thank Ozzie for posting this, in part because I noticed a reticence in myself about saying anything, because my experience with conversations about Leverage is that they can get weird and personal quite fast. I know people who’ve posted positive things about Leverage on the EA Forum and then been given grief for it on and offline.
For this reason, Greg, I can see why Leverage don’t engage much with the EA Forum. You and I know each other fairly well and I respect your views on a lot of topics (I was keen to seek you out for advice this summer). I notably avoided discussing Leverage, though, because I expected an unpleasant experience and because I believed I had more information on the topic from investigating them myself. This feels like a real shame. Perhaps I could chat with you (and potentially others) about what you’d like to see written up by Leverage. I’m happy to commit to specific Leverage-related posts if you can help ensure that turns into a genuinely useful discussion. What do you think? :-)
Hello Larissa,
I’d be eager to see anything that speaks to Leverage’s past or present research activity: what have they been trying to find out, what have they achieved, and what are they aiming for at the moment (cf).
As you know from our previous conversations re. Leverage, I’m fairly indifferent to ‘they’re shady!’ complaints (I think if people have evidence of significant wrongdoing, they should come forward rather than briefing adversely off the record), but much less so to the concern that Leverage has achieved extraordinarily little for an organisation with multiple full-time staff working for the better part of a decade. Showing something like, “Ah, but see! We’ve done all these things,” or, “Yeah, 2012-16 was a bit of a write-off, but here’s the progress we’ve made since”, would hopefully reassure, but in any case be informative for people who would like to have a view on Leverage independent of which rumour mill they happen to end up near.
Other things I’d be interested to hear about are what you are planning to work on at Leverage, and what information you investigated which—I assume—leads to a much more positive impression of Leverage than I take the public evidence to suggest.
Hi Greg,
Thanks for the message and for engaging at the level of what has Leverage achieved and what is it doing. The tone of your reply made me more comfortable in replying and more interested in sharing things about their work so thank you!
Leverage are currently working on a series of posts that are aimed at covering what has been happening at Leverage from its inception in 2011 up until a recent restructure this year. I expect this series to cover what Leverage and associated organisations were working on and what they achieved. This means that I expect Leverage to answer all of your questions in a lot more depth in the future. However, I understand that people have been waiting a long time for us to be more transparent so below I have written out some more informal answers to your questions from my understanding of Leverage to help in the meantime.
Another good way to get a quick overview of the kinds of things Leverage has been working on, beyond my notes below, is by checking out this survey that we recently sent to workshop participants. It’s designed for people who’ve engaged directly with our content, so it won’t necessarily be that relevant for others to fill in, but it gives an overview of the kinds of techniques Leverage developed and areas they researched.
What did Leverage 1.0 work on?
A very brief summary: for its first eight and a half years (let’s call this “Leverage 1.0” as a catch-all for those organisations before the restructure), Leverage was at first a prioritisation research project looking at what people should work on if they want to improve the world. Leverage 1.0 later came to focus more on understanding and improving people as their psychological frameworks and training tools developed, but they still conducted a wide range of research.
This means that in the very early days they were thinking a lot about how to prioritise, how to make good long-term plans, and just trying a bunch of things. I get the impression that at this stage almost nothing was ruled out in terms of what might be worth exploring if you wanted to improve the world. This meant people investigating all sorts of things like technological intelligence amplification, nootropics, and conducting polyphasic sleep experiments. People might be researching what caused the civilisational collapse that led to the Dark Ages, the beliefs of a particular Christian sect, or what led to the development of Newtonian physics. Leverage felt this was important for research progress. They wanted researchers to follow what motivated them. They thought that it was important to investigate a lot of areas before deciding where to focus their efforts, because deciding what to prioritise is so important to overall impact. This felt particularly important when investigating moon-shots, which had the potential to be extremely valuable even if they seemed unlikely at the outset.
Some of the outputs of these early days of research included training sessions on:
Planning—how to build and error-check plans for achieving your goals
Expert assessment—how to determine if someone is an expert in a given domain when you lack domain knowledge
Learning to learn—how to improve and expand the scope of your learning process
Theorizing—how to build models and improve your model building process over time
Prioritisation and goal setting—how to find your goals, back chain plans from them etc
This is far from everything but gives you a flavour.
Geoff had developed a basic model of psychology called Connection Theory (CT) so this was a thing that was investigated alongside everything else. This involved spending a lot of time testing the various assumptions in CT.
Through experimenting with using CT in this way, Leverage eventually found they were able to use ideas from CT to make some basic predictions about individual and group behaviour, help individuals identify and remove bottlenecks so that they could self improve and perhaps even identify and share specific mental moves people were using to make research progress on particular questions. This made the team more excited about psychology research in particular (amongst the array of things people were researching) as a way to improve the world.
From there they (alongside the newly founded Paradigm Academy) developed some of the research into things like:
One-on-one and group training
A catalogue of different mental procedures individuals use when conducting research, so that they could be taught to others for tackling research and other problems. One example intellectual procedure (IP), just to give a sense of this, is the Proposal IP, where you use the fact that you have a lot of implicit content in your mind, and your taste response to inelegant proposals, to speed up your thinking in an area.
Specific training in strategy, theorizing, and research
A collection of specific introspection and self-improvement techniques such as:
Self-alignment (a tool for increasing introspective access which handled a class of cases where our tools previously weren’t working)
Anti-avoidance techniques (making it so you can think clearly in areas you previously didn’t want to think about or had fuzzy thoughts in)
Charting (a belief change tool that has been modified and built out a lot since its initial release)
Mythos (tool for introspection with imagery, helpful for more visual people)
Integration and de-zoning (tools for helping people connect previously separate models)
What is Leverage doing now?
As for what Leverage is currently working on, once we have posted our retrospective we’ll then be updating Leverage’s website to reflect its current staff and focus so again a better update than I can provide is pending.
The teaser here is that from the various research threads being pursued in the early years of Leverage 1.0, Leverage today has narrowed their focus to be primarily on two areas that they found the most promising in their early years:
Scientific methodology research
Psychology
We also continue to be interested in sociology research and expect to bring on research fellows (either full time or part of future fellowship programmes) focusing on sociology in the future. However, since we’re relaunching our website and research programme we want to stay focused so we’re punting building out more of our sociology work to further down the line.
The scientific methodology research involves continuing to look at historical examples of scientific breakthroughs in order to develop better models of how progress is made. This continues some of our early threads of research in theorising, methodology, historical case studies and the history of science. We’re particularly interested in how progress was made in the earlier stages of the development of a theory or technology. Some examples include looking at what led to the transition in chemistry from phlogiston theory to Lavoisier’s oxygen theory, or the challenges scientists had in verifying findings from the first telescopes. We aim to share lessons from this research with researchers in a variety of fields. In particular, we want to support research that is in its earlier, more explorative stages. This is more of a moon-shot area, but that also means it gets less attention while being potentially high reward.
Our psychology research aims to continue to build on the progress and various research threads Leverage 1.0 was following. While this is quite a moon shot style bet, if we can improve our understanding of people then we potentially improve the ways in which they work together to solve important problems. At this point, we have developed tools for looking at the mind and mental structures that we think work fairly well on the demographics of people we’ve been working with. I got a ballpark estimate from someone at Leverage that Leverage and Paradigm have worked with around 400 people for shallower training, and about 60 for in-depth work but treat those figures as a guess until we write something up formally. We’ve focused in the last few years on improving these tools so they work in harder cases (e.g. people who have trouble introspecting initially) and using the tools to find common mental structures. Moving forward with this research we want to test the tools in a more rigorous way, in particular by communicating with people in academia to see whether or not they can validate our work.
One thing I personally like about the plans for psychology research is that it also acts as a check on our scientific methodology research. If the insights we gain from looking at the history of scientific progress aren’t useful to us in making progress in psychology then that’s one negative sign on their overall usefulness.
Who works for Leverage and Paradigm?
The team is much smaller and the organisation structure slightly more defined (although there is a way to go here still). There are four researchers (including Geoff who is also the Executive Director) and I’ll be joining as a Program Manager managing the researchers and helping communicate with the public about our work. So four in total at the moment, five once I start.
While Leverage Research in its newer form is getting going, it still receives a lot of help from its sister organisation Paradigm Academy. This means that while they are two separate organisations, currently Paradigm staff give a lot of time to helping Leverage, particularly in areas like operations and helping with PR and communications, such as the website relaunch. This helps the researchers focus on their research and means the burden of public communication won’t all fall on their newest employee (me). Once a lot of that is done, though, we expect to make the division between the two organisations clearer. Paradigm currently has nine employees, including Geoff.
I expect all of this will generate more questions than it answers at the moment and while my answer is to wait for Leverage’s formal content to be published I can see why this is frustrating. I hope my examples give a small amount of insight into our work while we take the time to write things up. You have every reason to be sceptical about Leverage posting content given various promises made in the past. I think given our track record on public communication that scepticism is valid. All I can perhaps offer in the meantime is that I personally am very keen to see both the retrospective and the new Leverage website published and the get sh*t done spirit that you and others on this forum know me for is part of the reason they’ve offered me a job to help with this in the first place.
Why I chose to work at Leverage
As for my personal reasons for choosing to accept an offer from Leverage, I expect this to be hard to transmit just because of inferential distance. My decision was the result of at least five months of discussions, personal research and resultant updates all of which is built on various assumptions that caused me to already be pursuing the plans I was at CEA.
I’ll attempt a short version here anyway in case it’s helpful. If there’s a lot of interest I’ll consider writing this up but I’m not sure it’ll be sufficiently useful or interesting to be worth the time cost.
Broadly speaking, I created a framework (building off a lot of 80K’s work but adapting it to suit my needs) to compare options on:
Impact - comparing potential career plans by looking at the scale and likelihood of success across:
the problem being tackled (e.g. preventing human extinction),
the approach to solving that problem (e.g. develop AGI in a way that’s safe)
the organisation (e.g. DeepMind)
and what I personally could contribute (e.g. say in a role as Project Manager)
Personal happiness (personal fit with the culture, how the job would fit into my life etc)
Future potential (what skills would I build and how useful are they, and what flexible resources such as useful knowledge or network would I gain)
I decided that I was willing to bet at least a few more years of my career on the more moon shot type plans to build a much better future (something like continuing personally to follow CEA’s vision of working towards an optimal world).
This narrowed my focus down to primarily considering paths related to avoiding existential risks and investing in institutions or advances that would improve humanity’s trajectory. In exploring some options around contributing to AI safety, I came away both not feeling convinced that I wouldn’t potentially cause harm (through speeding up the development of AGI) and less sure of the arguments that now is a particularly important hinge for this. It therefore seemed prudent to learn a lot more before considering this field.
This left me both wanting to invest more time in learning and not wanting to delay indefinitely working on something I thought was high impact. In terms of impact, the remaining areas were advances or institutions that might improve humanity’s ability to tackle global problems.
I’d had plenty of conversations with various people about Leverage (including many Leverage sceptics) in the past and interacted with Leverage and Paradigm directly to some degree, mostly around their introspection techniques which I personally have found extremely useful for self-improvement. I knew that they were interested in psychology initially as a potential way to improve humanity’s trajectory (but didn’t yet understand the scope of their other research) so I reached out to chat about this. I found that many of the people there had already thought a lot about the kinds of things I was considering as options for improving the long-term future and they had some useful models. Those interactions plus my positive view of their introspection techniques led me to think that Leverage had the most plausible plan given my current uncertainty for improving the long-term future and was likely to be by far the best option for me in terms of self-improvement and gaining the knowledge I wanted for making better future plans. Their recent restructure, desire to establish a more structured organisation and plans to publish a lot of content meant they had an opening for my particular skill set and the rest, as they say, is history.
Did this series end up being published?
Just wanted to say I super appreciated this writeup.
Thanks Raemon :-) I’m glad it was helpful.
This aged well… and it reads like what ChatGPT would blurt, if you asked it to “sound like a convincingly respectful and calm cult with no real output.” Your ‘Anti-Avoidance,’ in particular, is deliciously Orwellian. “You’re just avoiding the truth, you’re just confused...”
I was advocating algal and fish farming, including bubbling air into the water and sopping up the fish poop with crabs and bivalves—back in 2003. Spent a few years trying to tell any marine biologist I could. Fish farming took off years later, and recently they realized you should bubble air and catch the poop! I consider that a greater real-world accomplishment than your ‘training 60+ people on anti-avoidance of our pseudo-research.’ Could you be more specific about Connection Theory, and the experimental design of the research you conducted and pre-registered, to determine that it was correct? I’m sure you’d have to get into some causality-weeds, so those experimental designs are going to be top-notch, right? Or, is it just Geoff writing with the rigor of Freud on a Slack he deleted?
Thanks Larissa—the offer to write up posts from Leverage Research is a generous one. Might it not be a more efficient use of your time, though, to instead answer questions about Leverage in the public domain, many of which are fairly straightforward?
For example, you mention that Leverage is welcoming to new staff. This sounds positive—at the same time, the way Leverage treated incoming staff is one of the main kinds of fact discussed in the top-level post. Is it still true that: (i) staff still discuss recruitees on individual Slack channels, (ii) mind-mapping is still used during recruitment of staff, (iii) growth rates are overestimated, (iv) specific lists of attendees are recruited from EA events, and (v) negative rumours are still spread about other organizations that might compete for similar talent? To the extent that you are not sure about (i-v), it would be interesting to know whether you raised those concerns with Geoff in the hiring process, before joining the organization.
For other questions raised by the top-level post: (a) are Leverage’s outputs truly as they appear? (b) Is its consumption of financial resources and talent, as it appears? (c) Has it truly gone to such efforts to conceal its activities as described under the general transparency section? (d) How will Leverage measure any impact from its ninth year of operation?
From another post: (e) How many of the staff at Leverage Research are also affiliated with Paradigm Academy? (f) How much of the leadership of Leverage Research is also playing a leading role at Paradigm Academy?
Since your investigations give you much more information on this topic than is available to an interested outsider, it should be very easy for you to help us out with these questions.
Hi Anonymoose,
I’d like to do two things with my reply here.
First, to try and answer your questions as best I can.
But then second, to start to work out how to make future conversations with you about Leverage more productive.
1. ANSWERING YOUR QUESTIONS
I’d recommend first reading my recent reply to Greg because this will give you a lot of relevant context and answers some of your questions.
Questions a, b and d: outputs, resources and future impact
In terms of questions a, b and d, I will note, as I said in my reply to Greg, that we’re currently working both on a retrospective of the last eight and a half years of Leverage and on updating Leverage’s existing website. I think these posts and updates will then allow individuals to assess for themselves:
our past work and outputs
whether it was worth the resources invested
our plans for the future
For now, though, the sections “What did Leverage 1.0 work on?” and “What is Leverage doing now?” in my reply to Greg will give you some information about things Leverage did and its plans for the future.
It’s hard to comment on whether any of these things are “as they appear”, as (1) there isn’t enough public content to assess, and (2) different people seem to have had very different experiences with Leverage and have very different views on their work.
This means how Leverage’s work appears depends on an individual’s interactions with them. This has made it hard for any consensus to emerge and so debates continue. Once we’ve published more of our content, I hope it will be easier to sync up and assess them.
Questions e and f: staffing crossover between Leverage and Paradigm
On questions e and f (referencing one of your other posts about Leverage), see the section in my recent reply to Greg called “Who works for Leverage and Paradigm?”.
Geoff Anders is the founder and Executive Director of both Paradigm and Leverage and as such is currently the only member of staff working at both organisations. The two are sister organisations with different missions that collaborate. Leverage’s mission was essentially conducting research. Paradigm’s focus was much more on training. This means that there is some natural overlap and historically the two organisations have worked closely together. Paradigm uses Leverage’s research content in their training and in return they provide practical support (operations support, help to update the website) to Leverage.
We have no intention, as you and the original poster propose, of using the Paradigm or other brands to replace Leverage if there are too many problems with its brand. We created other organisations with different brands to distinguish between the different work they were doing. We’ve intentionally continued our current research under the same name so as to link to our origins with Leverage 1.0. When we update both websites (starting with Leverage) over the coming months, you will be able to see the teams at each.
Questions i) and iv): discussing potential recruits in Slack and recruitment at events
On question i), yes, Leverage discuss people they are considering hiring on Slack, both in recruitment Slack channels and DMs, as part of deciding whether to hire someone.
With regards to workshops in particular, we will mention people attending workshops on Slack, but this is usually in the form of staff coordinating to arrange training and sharing training-relevant information (e.g. “this person had trouble belief reporting, it would be good to assign a trainer during the 1-1 who could help them belief report”). If someone is a workshop attendee and we are considering hiring them, they might be discussed in both contexts.
Leverage and Paradigm have also (in answer to question iv) attended (EA and non-EA) events with lists of people they plan to speak to because they might be good potential hires. In addition, at events, they often do things like having tables so that people interested in working there (or interested in their work in general) can ask questions. Of course, they also attend events for non-recruitment reasons (e.g. learning about interesting topics by attending talks and having discussions with people). Leverage are particularly interested in meeting independent researchers. Paradigm are interested in meeting people interested in self-improvement.
Question ii) “mind-mapping is still used during recruitment of staff”
On question ii), here I assume you’re asking about using the charting procedure, which I think might have been called mind-mapping at some point. This is a way of taking system one beliefs someone has and trying to map out why a person has those beliefs and what actions they are taking, as a result of those beliefs, that they want to change.
Historically, Paradigm and Leverage did do some charting with potential hires, to see if they could use some of the basic techniques and found them helpful. Since Paradigm used charting in training and Leverage in research, it was important that new hires were aware of that, had a basic understanding of the tools, and had some interest in them. This is no longer needed in the Leverage hiring process, as not all researchers will be doing research relating to CT or charting, but it continues to be relevant to Paradigm for training, so they still do some charting with potential hires. Staff will of course use their models of psychology to aid hiring decisions, because they have useful models in this area, but this is more akin to the way that recruiters might use their recruitment experience and intuition to aid decisions than I expect people are imagining.
Leverage and Paradigm both have a strict confidentiality policy for all one-to-one sessions with individuals, which means this information is not discussed as part of recruitment discussions without permission.
While I don’t expect my experience to necessarily be indicative, I can speak a bit to my personal recruitment experience with Leverage. I worked part-time on a trial project for a couple of months, looking at things like the types of events different communities run and why people join communities, and I attended a formal interview. I did charting with someone on, I think, two occasions, both at my own personal request and nothing to do with hiring. My most relevant nearby experience is in running recruitment rounds at CEA. I’d say that the Leverage process felt less organised and structured than the CEA process but was otherwise what I’d expect.
Leverage and Paradigm have both recently been working to make their hiring processes more structured, in part based on feedback from potential hires. For Leverage, people send a resume and are invited to a research interview to discuss their research and our requirements. From there, successful candidates take part in a trial period. Paradigm candidates go through several stages: sending a resume, attending an initial screening interview, a follow-up interview that aims to test job-relevant skills, a one-week trial primarily checking team fit, and finally an extended trial testing fit for the role. The process I went through was more like a less structured version of the Paradigm process. This is because while I’ll be working at Leverage, I won’t be a researcher, so the Paradigm recruitment process made more sense in my particular case.
Question iii) “growth-rates are overestimated”
On question iii, am I right in thinking you’re referring to the claim in the original post that Leverage’s leadership have overstated their rate of staff growth in personal conversation?
If so, it’s hard to comment on what happened here without a lot more context on the conversation. I asked Geoff about this, and his guesses were that this could have been confusion about hiring and attrition (was the conversation about hiring rates or about the total number of staff?), numbers of volunteers, or internal estimates for growth rate which turned out to be harder to achieve than we thought.
Question v) “negative rumours are still spread about other organizations that might compete for similar talent?”
Intentionally spreading rumours is certainly not Leverage policy, nor is it endorsed or encouraged among staff. Of course, members of staff will have their own views on how best to improve the world, and therefore I would expect them to have views on a number of EA and other organisations, both positive and negative.
I have the same difficulty in giving a useful answer to this question as with question iii, in that the original poster has made a claim about Leverage with a strong connotation but without providing the specifics or reasons they believe this.
I think it’s important for individuals to be able to share and discuss their views. I think it would be more helpful, in cases like this forum post and when discussing other organisations, if people tried to qualify what they think, why, and where that information comes from, so that recipients of that information come away with a better understanding. This can be difficult but I think it’s important to try.
Question c) “Has it truly gone to such efforts to conceal its activities as described under the general transparency section?”
Finally, on question c: yes, our past website was removed from the Wayback Machine. With hindsight this was a mistake. At the time, people were digging up our old content (for example, our old long-term plan document) and using this to hype up various Leverage conspiracy theories. While this was entertaining, it was certainly distracting and meant people were even more confused about what we were doing. We thought that if we took the content off the Wayback Machine it would reduce the conspiracy-style hype, but removing it only fuelled the fire of conspiracies.
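(For anyone who wants to check this sort of thing themselves, here is a minimal sketch, assuming Python 3 and the Internet Archive’s public availability endpoint; an excluded or never-crawled site should come back with no snapshots.)

```python
# Minimal sketch: query the Internet Archive's availability API to see
# whether the Wayback Machine will serve any snapshot for a URL.
import json
from urllib.request import urlopen

api = "https://archive.org/wayback/available?url=leverageresearch.org"
with urlopen(api) as response:
    data = json.load(response)

# An excluded (or never-crawled) site comes back with an empty
# "archived_snapshots" object.
print(data.get("archived_snapshots") or "No snapshots available")
```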
As you can probably tell, Leverage did not invest very many skill points or chips in public communication in the early days, instead spending them pretty much exclusively on research and experimentation. This means we’ve still got a lot to learn in the public communication space and a lot of mistakes to make up for.
As discussed above, Leverage did not use different organisations as a way to tailor messages to donors, different organisations had different focuses and donors understood this. Similarly we have no plans to replace Leverage as a brand.
2. MAKING OUR INTERACTIONS WITH ONE ANOTHER MORE PRODUCTIVE
I’d like to ensure that any future interactions between you and Leverage are a good use of both of our time, informative to readers, and helpful to you on whatever the real issues are.
Looking at your posting history, it seems that you’ve created this anonymous account only to ask questions about Leverage that come across at least as excuses to make insinuations against them. I don’t currently get the impression that you are following the spirit of the EA Forum guidelines and posting with a scout mindset, trying to give people an accurate view or being clear about what you believe and why.
If you would be willing to spend time trying to help me understand and address your particular concerns about Leverage then I am very happy to spend time on that. If you only plan to continue in the same vein as your previous comments I don’t currently expect that to yield anything that feels useful for either of us. If you would like to chat more to figure out a better way forward that gets your concerns addressed let me know. You can reach me here or at larissa.e.rowe [at] gmail.com.
Wow, I didn’t know about this. Thank you for drawing attention to it.
[I wrote an addendum to this comment, but then someone pointed out that it was unclear, so I deleted it]
Hi Buck,
For anyone that isn’t aware, I’m the founder and Executive Director of Leverage Research and as such wanted to reply to your comment.
First, I want to challenge the frame of this message. Messages like these, however moderately they are intended or executed, pull up a larger political context of attacks. People start asking the question “who should we burn” and then everyone scrambles to disavow everyone else so that they themselves don’t get burned.
I’m against disavowal and burning, at least in this case. My reaction if I found out that Jonah was “officially racist” by whatever measures would be to try to talk to him personally and convince him that the ideas were wrong. If I thought he was going to do something horrible, I’d oppose him and try to stop him. I think that disavowal and burning is a really bad way to fight racism because it pushes it underground without addressing it, and I’m not interested in getting public applause or doing short-sighted PR mitigation by doing something that is superficially good and actually harmful.
In terms of Jonah’s views, Jonah is a public figure and as such should speak for himself. He wrote a reply to the Splinter piece here: https://medium.com/@jonahbennett/statement-on-emails-83c5ebbad731 . As for myself, I know Jonah personally. If he were a hijacked racist shithead, I wouldn’t want to talk to him or be his friend, and I certainly wouldn’t want to have employed him. In all of my conversations with him I have found him to be deeply committed to making the world better for everyone, not just a select subset of people. And he’s willing to explore, take on personal risk, and speak what he believes more than most. I’m happy to count him as a friend.
As to other questions relating to Leverage, EA, funding- and attention-worthiness, etc., I’ve addressed some concerns in previous comments and I intend to address a broader range of questions later. I don’t however endorse attack posts as a discussion format, and so intend to keep my responses here brief. The issues you raise are important to a lot of people and should be addressed, so please feel free to contact me or my staff via email if it would be helpful to discuss more.
Geoff
[Own views]
If an issue is important to a lot of people, private follow-ups seem a poor solution. Even if you wholly satisfy Buck, he may not be able to relay what reassured him to all concerned parties, and thus there will likely be duplication of effort on your part as each reaches out individually.
Of course, this makes more sense as an ill-advised attempt to dodge public scrutiny—better for PR if damning criticism remains in your inbox rather than on the internet-at-large. In this, alas, Leverage has a regrettable track record: you promised 13 months ago to write something within a month to explain Leverage better, only to make a much more recent edit (cf.) that you’ve “changed your plans” and encourage private follow-ups rather than giving a public explanation. The pattern of ‘promised forthcoming explanation that never arrives’ has been going on about as long as Leverage itself (1, 2, 3).
The reason this is ill-advised is that silence is a poor salve for suspicion. If credible concerns remain publicly unanswered, people adversely infer they are likely ‘on the money’, and that their target is staying quiet because they rationally calculate that preserving whatever uncertainty remains still looks better than trying to contest the point. The more facially credible the concerns (e.g. Leverage has had dozens of person-years and has seemed to produce extraordinarily little), and the more assiduous the attempts to avoid addressing them and obscure relevant evidence (e.g. not only taking down all old research, but doing your best to scrub any traces of it from the internet), the more persuasive the adverse inference becomes, and the more likely people are to start writing ‘attack posts’ [recte public criticism].
The public evidence looks damning to me. I hope it transpires that this is an unfortunate case of miscommunication and misunderstanding, and soon we shall see results that vindicate Leverage/Paradigm’s efforts so far. I also hope your faith in Bennett is well-placed, and that whatever mix of vices led him to write vile antisemitic ridicule on an email list called ‘morning hate’ in 2016 bears little relevance to the man he was when with Leverage in ~2018, or the man he is now.
But not all hopes are expectations.
Perhaps it’d be helpful for Bennett to publish a critique of alt-right ideas in Palladium Magazine?
In Bennett’s statement on Medium, he says now that he’s Catholic, he condemns the views he espoused. If that’s true, he should be glad to publish a piece which reduces their level of support.
Since he used to espouse those views, he has intimate understanding of the psychology of those who hold them. So a piece he edits could help deconvert/deradicalize people more effectively than a piece edited by an outsider. And whatever persuaded him to abandon those views might also work on others.
Bennett might complain that publishing such a piece would put him in an impossible bind, because any attempt to find common ground with alt-righters, and to explain what originally drew him to the movement in order to do effective deconversion, could be spun as “Jonah Bennett doubles down on alt-right ideology” for clicks. Bennett might also complain that publishing such a piece would make him a target for alt-right harassment. However, if Bennett is sincerely sorry for what he said, it seems to me that he should be willing to accept these risks. At least he could offer to publish a critique of the alt-right that’s written by someone else.
If he does publish such a piece, I personally would be inclined to tentatively accept him back into civil society—but if he’s unwilling to publish such a piece, I think it’s reasonable to wonder if he’s “hiding his true power level” and be suspicious/condemnatory.
I do feel we should have some sort of path to forgiveness for those who sincerely wish to leave extremist movements.
I imagine most people have made up their minds by now, but there is now a first-person account from one of the former employees of Leverage 1.0, with some attendant discussion on LessWrong.
Thank you for pointing this out and increasing awareness of the issues.
Three years later, a similar post with some more details about Leverage’s internal management processes, and an update from Leverage here.
Leverage Research spent a further $388k in 2017.
At least 11 of 13 Paradigm Academy staff listed on Linkedin are known to have worked for Leverage Research or allied organizations.
The coin made by Reserve (one of the successor companies to Leverage Research) has, as of the time of writing, returned −32.7% since its float. In the same time period, Bitcoin returned 24%.
Reserve has now lost 50.6% of its value since its float, while Bitcoin has returned ~1% over the same time period.
So, after I read this comment I left thinking that Reserve performed exceptionally poorly, but it seems that almost all cryptocurrencies have gone down about the same amount since June 19th (the time of Reserve’s launch, from what I can tell). Here are some random currencies that I clicked on on the CoinMarketCap website that you linked. This is a comprehensive list, so I report the price change since June 19th for every currency that I looked at:
Bitcoin Cash:
June 19th price: $416
Price now: $244
Change: −41.3%
XRP:
June 19th price: $0.448
Price now: $0.25
Change: −44.1%
Litecoin:
June 19th price: $135
Price now: $55
Change: −59.2%
Monero
June 19th price: $100
Price now: $58
Change: −42%
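For reference, these are simple percent changes, (now − then) / then; here is a minimal sketch in Python that recomputes them from the prices quoted in this comment (quoted prices rather than live data, so small rounding differences from the figures above are expected):

```python
# Recompute the percent change since June 19th from the quoted prices.
prices = {
    "Bitcoin Cash": (416, 244),
    "XRP": (0.448, 0.25),
    "Litecoin": (135, 55),
    "Monero": (100, 58),
    "Bitcoin": (9273, 8027),  # Bitcoin prices discussed just below
}

for name, (june_19, now) in prices.items():
    change = (now - june_19) / june_19 * 100
    print(f"{name}: {change:+.1f}%")
```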
You are also incorrect that Bitcoin has returned 1% over the same time period. On June 19th, the price of Bitcoin was $9273, and it now is $8027. So while you are correct that Bitcoin went down significantly less than Reserve, it performed drastically better than almost all other cryptocurrencies, and still went down by about 13%.
I don’t think Reserve is overall a super great idea, but I think the statistics you cited seem misleading to me, and it seems that Reserve overall is performing similarly to the rest of the non-Bitcoin crypto-market.
Your initial impression was correct. Reserve has entered a terrible market and managed to perform substantially worse than its terrible competitors. Since May 24, when Reserve Rights was priced:
the S&P gained 14%,
cryptocurrency at large lost 17%,
cryptocurrencies excluding Bitcoin lost 33%,
while Reserve Rights managed to lose 52.3%.
Reserve Rights was floated on May 24 according to CoinMarketCap, at which time Bitcoin was worth $7800-$7950, and it is now worth the same amount, so the error must be either with you or with CoinMarketCap.
I used June 19th, because that was the first date with a market cap available, which seemed like the most reasonable date to start. So that likely explains the discrepancy.
I don’t know much about it, but isn’t Reserve meant to be a Stablecoin? If so any change in value seems significantly worse than for other coins.
I also don’t know much about it, but I think Reserve includes a couple of coins. ‘Reserve Rights’ is not intended to be a stablecoin (I think it is meant to perform some function for the stablecoin system, but I’m ignorant of what it is), whilst ‘Reserve’, yet to be released, is meant to be stable.
Huh, do you know what ‘Reserve Rights’ does / why it exists?
Is there a short explainer of it somewhere?
The reason for posting these facts now is that, as of the time of writing, Leverage’s successor, Paradigm Academy, is seeking to host the EA Summit in one week. The hope is that these facts will firstly help to inform effective altruists on the matter of whether they would be well-advised to attend, and secondly, on what approach they may want to take if they do attend.
Leverage Research has recruited from the EA community using mind-maps and other psychological techniques, obtaining dozens of years of work, but doing little apparent good. As a result, the author views it as inadvisable for EAs to engage with Leverage Research and its successor, Paradigm Academy. Rather, they should seek the advice of mentors outside of the Leverage orbit before deciding to attend such an event. Based on past events such as the Pareto Fellowship, invitees who ultimately decide to attend would be well-advised to be cautious about recruitment, by keeping in touch with friends and mentors throughout.
I think this would be more useful as part of the main post than as a comment.
I’ve provided my explanations for the following in this comment:
No evidence has been provided that Paradigm Academy is Leverage’s successor. While the OP stated facts about Leverage, all the comments declaring more facts about Leverage Research are merely casting spurious associations between Leverage Research and the EA Summit. Along with the facts, you’ve smuggled in an assumption amounting to nothing more than a conspiracy theory about Leverage rebranding themselves as Paradigm Academy and organizing the 2018 EA Summit for some unclear and ominous reason. In addition to no logical reason or sound evidence being provided for how Leverage’s negative reputation in EA should be transferred to the upcoming Summit, my interlocutors have admitted themselves or revealed their evidence from personal experience to be weak. I’ve provided my direct personal experience knowing the parties involved in organizing the EA Summit, and also having paid close attention from afar to Leverage’s trajectory in and around EA, contrary to the unsubstantiated thesis that the 2018 EA Summit is some opaque machination by Leverage Research.
There is no logical connection between the facts about Leverage Research and the purpose of the upcoming EA Summit. Further, the claims presented as facts about the upcoming Summit aren’t actually facts.
At this point, I’ll just point out that the idea Paradigm is in any sense Leverage’s successor is based on no apparent evidence. So the author’s advice doesn’t logically follow from the claims made about Leverage Research. What’s more, as I demonstrated in my other comments, this event isn’t some unilateral attempt by Paradigm Academy to steer EA in some unknown direction.
As one of the primary organizers for the EA community in Vancouver, Canada; the primary organizer for the rationality community in Vancouver; a liaison between these communities and adjacent ones; and an organizer of several novel efforts to coordinate effective altruists, including the EA Newsletter, I don’t know if I’d describe myself as a “mentor.” But I know others who see me that way, and it would be fair to say that, both online and on the west coast, in Vancouver, and in Canada, I am someone who creates opportunities for many individuals to connect to EA.
Also, if it wasn’t clear, I’m well outside the Leverage orbit. If someone wants to accuse me of being a hack for Leverage, I can make some effort to prove I’m not part of their orbit (though I’d still see that as unnecessarily poor faith in this conversation). Anyway, as an outsider and veteran EA community organizer, I’m willing to provide earnest and individuated answers to questions about why I’m going to the 2018 EA Summit, and why and what kind of other effective altruists should also attend. I am not speaking for anyone but myself. I’m willing to do this in-thread as replies to this comment, or, if others would prefer, on social media or in another EA Forum post. Because I don’t have much time, and I’d like to answer such questions transparently, I will only answer questions publicly asked of me.
Contrary to what the author of this post and comment stated, it doesn’t follow that this event will be anything like the Pareto Fellowship, as there aren’t any facts linking Leverage Research’s past track record as an organization to the 2018 EA Summit.
For what it’s worth to anyone, I intend to attend the 2018 EA Summit, and I offer as a friend my support and contact regarding any concerns other attendees may have.
See Geoff’s reply to me below: Paradigm and Leverage will at some point be separate, but right now they’re closely related (both under Geoff etc). I think it’s reasonable for people to use Leverage’s history and track record in evaluating Paradigm.
Note: I was previously CEO of CEA, but stepped down from that role about 9 months ago.
I’ve long been confused about the reputation Leverage has in the EA community. After hearing lots of conflicting reports, both extremely positive and negative, I decided to investigate a little myself. As a result, I’ve had multiple conversations with Geoff, and attended a training weekend run by Paradigm. I can understand why many people get a poor impression, and question the validity of their early stage research. I think that in the past, Leverage has done a poor job communicating their mission, and relationship to the EA movement. I’d like to see Leverage continue to improve transparency, and am pleased with Geoff’s comments below.
Despite some initial hesitation, I found the Paradigm training I attended surprisingly useful, perhaps even more so than the CFAR workshop I attended. The workshop was competently run, and the content was delivered in a polished fashion. I didn’t go in expecting the content to be scientifically rigorous; most self-improvement content isn’t. It was fun, engaging, and useful enough to justify the time spent.
Paradigm is now running the EA Summit. I know Mindy and Peter, some of the key organisers, through their long-standing contributions to EA. They were both involved in running a successful student group, and Peter worked at CEA, helping us organise EAG 2015. I believe that Mindy and Peter are dedicated EAs, who decided to organise this event because they would really like to see more focus on movement building in the EA community.
I’ve been wanting to see new and more movement building focused activities in EA. CEA can’t do it all alone, and I generally support people in the EA community attempting ambitious movement building projects. Given this, and my positive experience attending an event put on by Paradigm, I decided to provide some funding for the EA Summit personally.
I don’t think that Leverage, Paradigm or related projects are good use of EA time or money, but I do think the level of hostility towards them I’ve seen in this community is unwarranted, and I’d like to see us do better.
Found this surprising given the positive valence of the rest of the comment. Could you expand a little on why you don’t think Leverage et al. are a good use of time/money?
I think their approach is highly speculative, even if you were to agree with their overall plan. I think Leverage has contributed to EA in the past, and I expect them to continue doing so, but this alone isn’t enough to make them a better donation target than orgs like CEA or 80K.
I’m glad they exist, and hope they continue to exist, I just don’t think Leverage or Paradigm are the most effective things I could be doing with my money or time. I feel similarly about CFAR. Supporting movement building and long-termism is already meta enough for me.
Interesting. I don’t usually conflate “good use” with “most effective use.”
Seems like “not a good use” means something like “this project shouldn’t be associated with EA.”
Whereas “not the most effective use” means something like “this project isn’t my best-guess about how to do good, but it’s okay to be associated with EA.”
Perhaps this is just semantics, but I’m genuinely not sure which sense you intend.
By this, I expect Tara means that Leverage Research has historically solicited all its funding from major private donors, such as Peter Thiel as of a few years ago and, I assume, other philanthropists in the intervening years. Leverage both associates with EA and appreciates what EA as a movement has done for Leverage, just as what Leverage has done to help build up EA is appreciated, as others have expressed in the other comments on the original post.
As Geoff Anders pointed out in his own comment response, Leverage works in the same spirit as EA, but at higher variance than the EA movement at large. As an organization, Leverage works on projects other EA organizations don’t, while signaling that this stark difference from the rest of EA is non-threatening by not soliciting donations from the EA community at large. When I met Geoff Anders in person in 2014, he explained that this is Leverage’s profile within EA, and that it is part of the rationale Leverage uses to privately court funding for its operations. As of 2014, the donor in question was Peter Thiel, who I presume provided enough funding at the time that Leverage didn’t need to seek other donors. Since then, I haven’t been in direct communication with Geoff or Leverage, so I don’t know whether Peter Thiel or someone else is funding Leverage Research. But between my own impressions and the anecdata provided in this thread, I presume Leverage continues to privately secure all the funding it needs, while occasionally partnering with EA(-adjacent) organizations on projects related to startups and the long-term future, as it has in the past.
Before Paradigm was officially a distinct organization from Leverage, while Leverage was incubating Paradigm, they received their funding from the same source. I’m aware that, for clients who aren’t effective altruists, Paradigm charges for some of its workshops and does consultancy for for-profit startups and their founders in the Bay Area. This is a source of income I understand Paradigm uses for its other projects, including providing free or discounted workshops to effective altruists. Between these things, I assume Paradigm doesn’t intend to publicly solicit funding from the EA community at large for the indefinite future, either.
I assume this is what Tara meant by Leverage, Paradigm, and related projects not being a good use of EA money. This reaffirms the impression that Leverage doesn’t seek donations from individual effective altruists, not in an attempt to deceive the community in any way, but to signal respect for the epistemic differences between Leverage and the EA movement at large, while collaboration between Leverage and EA organizations continues.
I don’t know what Tara means by Leverage, Paradigm, or related projects not being a good use of EA time. I assume she is reaffirming the public impression that Leverage’s executive director, Geoff Anders, provided in his own comment response to the original post: individual effective altruists who staff Leverage or Paradigm work on EA projects in their free time (similar to how Google famously provides its software engineers with 20% free time to develop projects as they see fit, resulting in products like Gmail), and effective altruists who don’t independently consider Leverage to be in the same range of effectiveness as the charities EAs typically donate to should not presume Leverage promises or solicits to use EA time and money on EA lines. This is consistent with much the same Geoff mentioned in his own comment.
As an outsider to Leverage who has not done paid work for any EA organizations in the past, my experience is similar to Tara’s, and I can corroborate her impression. I haven’t been in the Bay Area, or had a volunteer or personal association with any EA organizations located there, since 2014. Thus, my own investigation was from afar: following the scattered information on Leverage available online, including past posts regarding Leverage on LW and the EA Forum, and online conversations with former staff, interns, and visitors to Leverage Research. The impression I got from what is probably a very different data-set than Tara’s is virtually identical. Thus, I endorse her comment as a robust yet fair characterization of Leverage Research.
I’ve also heard from several CFAR workshop alumni that they found the Paradigm training they received more useful than the CFAR workshop they attended. A couple of them also noted their surprise at this, given their trepidation on learning Paradigm sprouted from Leverage, with its past reputation. A confounding factor in these anecdotes is that the CFAR workshops my friends and acquaintances attended were from a few years ago, and in that time both returning attendees and more recent CFAR alumni have remarked how different and superior CFAR’s recent workshops are to its earlier ones. Nonetheless, the impression I’ve received from attendees within the EA movement is nearly unanimously positive: Paradigm workshops are competitive in quality with CFAR’s, even though CFAR has years more troubleshooting and experience than Paradigm.
I want to clarify that the CEA has not been alone in movement-building activities: it has ongoing associations with the Local Effective Altruism Network (LEAN) and with the Effective Altruism Foundation, out of the German-speaking EA world, on movement building. Paradigm Academy’s staff, in seeking to kickstart grassroots movement-building efforts in EA, are aware of this, as LEAN is a participating organization as well. Additionally, while Charity Science (CS) has typically streamlined its focus on direct global poverty interventions, its initial incubation and association with Rethink Charity and LEAN, as well as its recent foray into cause-neutral effective charity incubation, could arguably qualify it as focused on EA movement building as well.
This is my conjecture based on where it seems CS is headed. I haven’t asked them, and I recommend anyone curious ask CS themselves if they identify movement-building as part of their current activities in EA. I bring this up as relevant because CS is also officially participating in the EA Summit.
Also, Tara, thanks for providing funding for this event :)
Thanks for making this post, it was long overdue.
Further facts
Connection Theory has been criticized as follows: “It is incomplete and inadequate, has flawed methodology, and conflicts well established science.” The key paper has been removed from their websites and the web archive but is still available at the bottom of this post.
More of Geoff Anders’s early work can be seen at https://systematicphilosophy.com/ and https://philosophicalresearch.wordpress.com/. (I hope they don’t take down these websites as well.)
Former Leverage staff have launched a stablecoin cryptocurrency called Reserve (formerly “Flamingo”), which was backed by Peter Thiel and Coinbase.
In 2012-2014, they ran THINK.
The main person at LEAN is closely involved with Paradigm Academy and helps them recruit people.
Recruitment transparency
I have spoken with four former interns/staff who pointed out that Leverage Research (and its affiliated organizations) resembles a cult according to the criteria listed here.
The EA Summit 2018 website lists LEAN, Charity Science, and Paradigm Academy as “participating organizations,” implying they’re equally involved. However, Charity Science is merely giving a talk there. In private conversation, at least one potential attendee was told that Charity Science was more heavily involved. (Edit: This issue seems to be fixed now.)
(low confidence) I’ve heard through the grapevine that the EA Summit 2018 wasn’t coordinated with other EA organizations except for LEAN and Charity Science.
Overall, I am under the impression that a majority of EAs think that Leverage is quite culty and ineffective. Leverage staff usually respond by claiming that their unpublished research is valuable, but the insiders mentioned above seemed to disagree.
If someone has strong counterevidence to this skeptical view of Leverage, I would be very interested and open to changing my mind.
Just to add a bit of info: I helped with THINK when I was a college student. It wasn’t the most effective strategy (largely, it was founded before we knew people would coalesce so strongly into the EA identity, and we didn’t predict that), but Leverage’s involvement with it was professional and thoughtful. I didn’t get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.
Do you mind clarifying what you mean by “recruits people”? I.e., do you mean they recruit people to attend the workshops, or to join the organizational staff?
In this comment I laid out the threat to EA as a cohesive community when those within it, like EA’s worst detractors, level blanket accusations that an organization is a cult. Also, that comment was only able to mention a handful of people describing Leverage as like a cult, while admitting they could not recall any specific details. I already explained that such a report doesn’t qualify as a fact, nor even an anecdote, but as hearsay, especially since further details aren’t being provided.
I’m disinclined to take seriously more hearsay of a mysterious impression of Leverage as cultish, given the poor faith in which my other interlocutor was acting. Since none of the former interns or staff behind this hearsay are coming forward to corroborate which features of a cult from the linked Lifehacker article Leverage shares, I’m unconvinced that your report and the others aren’t being taken out of context from the individuals you originally heard them from, nor that this post and its comments aren’t a deliberate attempt to do nothing but tarnish Leverage.
Paradigm Academy was incubated by Leverage Research, as many organizations in and around EA are by others (e.g., MIRI incubated CFAR; CEA incubated ACE, etc.). As far as I can tell now, like with those other organizations, Paradigm and Leverage should be viewed as two distinct organizations. So that itself is not a fact about Leverage, which I also went over in this comment.
As I stated in that comment as well, there is a double standard at play here. EA Global each year is organized by the CEA. They aren’t even the only organization in EA with the letters “EA” in their name, nor are they exclusively considered among EA organizations able to wield the EA brand. And yet despite all this, nobody objects on priors to the CEA, as a single organization, branding these events each year. Nor should we. Of course, none of this is necessary to invalidate the point you’re trying to make: Julia Wise, as the Community Liaison for the CEA, has already clarified that the CEA itself supports the Summit.
So the EA Summit has already been legitimized by multiple EA organizations as a genuine EA event, including the one which is seen as the default legitimate representation for the whole movement.
As above, that the EA Summit wasn’t coordinated by more than one organization means nothing. There are already EA retreat- and conference-like events organized by local university groups and national foundations all over the world, which have gone well, such as the Czech EA Retreat in 2017. So the idea that EA should be so centralized that only registered non-profits of a given caliber of prestige in the movement, or those they approve, can organize events viewed as legitimate by the community is unfounded. Not even the CEA wants that much centralization. Nobody does. So whatever point you’re trying to prove about the EA Summit using facts about Leverage Research remains invalid.
For what it’s worth, while no other organizations are officially participating, here are some effective altruists who will be speaking at the EA Summit, and the organizations they’re associated with. At EA Global, this much is sufficient to identify those organizations as in spirit welcome and included, so the same standard should apply to the EA Summit.
Ben Pace, Ray Arnold and Oliver Habryka: LessWrong isn’t an organization, but it’s played a formative role in EA, and with LW’s new codebase being the kernel for the next version of the EA Forum, Ben and Oliver, as admins and architects of the new LW, are as important representatives of this online community as any in EA’s history.
Rob Mather is the ED of AMF. AMF isn’t typically regarded as an “EA organization,” because it isn’t a metacharity dependent directly on the EA movement. But for GiveWell’s top-recommended charity since EA began, which continues to receive more donations from effective altruists than any other, not to be given consideration would be senseless.
Sarah Spikes runs the Berkeley REACH.
Holly Morgan is a staffer for the EA London organization.
In reviewing these speakers, and seeing so many from LEAN and Rethink Charity, with Kerry Vaughan being a director for individual outreach at CEA, I see what the EA Summit is trying to do: use its speakers to rally local EA group organizers from around the world toward more coordinated action and spirited projects. Which is exactly what the organizers of the EA Summit have been saying the whole time. This is also why I was invited to attend: as an organizer for rationality and EA projects in Vancouver, Canada, I am trying to develop a system for organizing local groups to do direct work that could scale both here and in cities everywhere, and I am a very involved volunteer online community organizer in EA. It’s also why one of the event organizers consulted with me, before the EA Summit was announced, on how they thought it should be presented to the EA community.
This isn’t counterevidence justifying skepticism of Leverage. It is evidence against the thesis, advanced in these facts about Leverage Research, that the EA Summit is nothing but a launchpad for Leverage’s rebranding within the EA community as “Paradigm Academy.” No evidence has been presented that the tenuous links between Leverage and the organization of the 2018 EA Summit entail that the negative reputation Leverage has acquired over the years should be transferred onto the upcoming Summit.
See Geoff’s reply to me above: Paradigm and Leverage will at some point be separate, but right now they’re closely related (both under Geoff etc). I don’t think viewing them as separate organizations, where learning something about Leverage should not much affect your view of Paradigm, makes sense, at least not yet.
I don’t think this is accurate. (Please excuse the lack of engagement with anything else here; I’m just skimming some of it for now but I did notice this.)
[Edit: Unless you meant EA Funds (rather than Effective Altruism Foundation, as I read it)?]
I meant the EA Foundation, who I was under the impression received incubation from CEA. Since apparently my ambiguous perception of those events might be wrong, I’ve switched the example of one CEA’s incubees to ACE.
That one is accurate.
Also “incubees” is my new favourite word.
I could list a number of specific details, but not without violating the preferences of the people who shared their experiences with me, and not without causing even more unnecessary drama.
These details wouldn’t make for a watertight case that they’re a “cult”. I deliberately didn’t claim that Leverage is a cult. (See also this.) But the details are quite alarming for anyone who strives to have well-calibrated beliefs and an open-minded and welcoming EA community. I do think their cultishness led to unnecessary harm to well-meaning, young people who wanted to do good in the world.
There’s a big difference between feeling cultlike, as in “weird”, “disorienting”, “bizarre” etc, and exhibiting the epistemic flaws of a cult, as in having people be afraid to disagree with the thought leader, a disproportionate reverence for a single idea or corpus, the excommunication of dissenters, the application of one idea or corpus to explain everything in the world, instinctively explaining away all possible counterarguments, refusal to look seriously at outside ideas, and so on.
If you could provide any sanitized, abstracted details to indicate that the latter is going on rather than merely the former, then it would go a long way towards indicating that LR is contrary to the goal of well-calibrated beliefs and open-mindedness.
(While LessWrong.com was historically run by MIRI, the new LessWrong is indeed for most intents and purposes an independent organization (while legally under the umbrella of CFAR) and we are currently filing documents to get our own 501c3 registered, and are planning to stick around as an organization for at least another 5 years or so. Since we don’t yet have a name that is different from “LessWrong”, it’s easy to get confused about whether we are an actual independent organization, and I figured I would comment to clarify that.)
Good day all,
Can anyone please provide an example of a tangible output from this ‘research organization’ of the sort EA generally recognize and encourage?
Any rationale or consideration as to how association with such opaque groups does anything other than seriously undermine EA’s mission statement would also be appreciated.
Kind Regards
Alistair Simmonds
Alistair, I regret to inform you that after four years of Leverage’s Anti-Avoidance Training, the cancer has spread: the EA Community at large is now repeatedly aghast that outsiders are noticing their subtle rug-sweeping of sexual harassment and dismissal of outside critique. In barely a decade, the self-described rats are swum ‘round a stinking sh!p. I’m still amazed that, for the last year, as I kept bringing-forth concerns and issues, the EA members each insisted ‘no problems here, no, never, we’re always so perfect....’ Yep. It shows.
CEA appears as a “participating organisation” of the EA Summit. What does this mean? Does CEA endorse paradigm academy?
CEA is not involved in the organizing of the conference, but we support efforts to build the EA community. One of our staff will be speaking at the event.
As an attendee of the 2018 EA Summit, I’ve been informed by staff at Paradigm Academy that this idea was initiated neither by the organization as a whole nor by Leverage Research. Neither Geoff Anders nor the executive leadership of Leverage Research are the authors of this Summit. I don’t know the hierarchy of Paradigm Academy, or where Mindy McTeigue or Peter Buckley, the primary organizers of the Summit, fall in it. As far as I can tell, the EA Summit was independently initiated by these staff at Paradigm and other individual effective altruists they connected with. In the run-up to the Summit, the organizations these individual community members work for became sponsors of the EA Summit.
Thus, the Local Effective Altruism Network, Charity Science, Paradigm Academy, and the CEA are all participants at this event, endorsing the goal of the Summit within EA without those organizations needing to endorse each other. That’s an odd question to ask: must each EA organization endorse every other involved at EA Global, or any other EA event, before the community can regard it as “genuinely EA”?
As far as I can tell, while Paradigm is obviously physically hosting the event, what it means for the CEA and the other organizations to be participating organizations is just that: officially supporting these efforts at the EA Summit itself. It means no more and no less, for any organization, than what Julia stated in her comment.
Also, I oppose using or pressuring the CEA in a form of triangulation, casting it by default as the most legitimate representation of the whole EA movement. Nothing I know about the CEA leads me to believe they condone someone trying to speak on their behalf in any sense without prior consent. Also, past my own expectations, the EA community recently made clear they don’t as a whole give the CEA license to represent EA however it wants. Nonetheless, to the point of vocally disagreeing with what I saw as a needless pile-on of Nick Beckstead and the CEA, in that thread I’ve made an effort to maintain an ongoing and mutually respectful conversation.
Evan, thank you for these comments here. I just wanted to register, in case it’s at all useful, that I find it a bit difficult to understand your posts sometimes. It struck me that shorter and simpler sentences would probably make this easier for me. But I may be totally ideosyncratic here (English isn’t my first language), so do ignore this if it doesn’t strike you as useful.
Thanks. This is useful feedback :)
Yeah, to be fair, I was writing these comments in rapid succession, based on information unique to me, to quickly prevent the mischaracterization of the EA Summit next week. I am both attending the EA Summit and significantly personally invested in it, as representing efforts in EA I’d like to see greatly advanced. I also have EA projects I’ve been working on that I intend to talk about at the Summit. (In spite of acknowledging my own motive here, I still made all my previous comments with as much fidelity as I could muster.)
All this made me write these comments hastily enough that I wrote in long sentences. Mentally, when writing quickly, it’s how I condense as much information into as few clauses as possible while making arguments. You’re not the first person to tell me that writing shorter and simpler sentences would be easier to read. In general, when I’m making public comments without a time crunch, these days I make more of a conscious effort to be comprehensible :)
This is useful feedback, but English not being your first language is a factor too, because that isn’t how “idiosyncratic” is spelled. :P
I also would not expect effective altruists not fluent in English to be able to follow a lot of what I write (or a lot of posts written in EA, for that matter). Because EA discourse is continually complicated and conducted exclusively in English, I often forget to write for a readership which largely doesn’t speak English as a first language. I’ll keep this more in mind in how I write my posts in the future.
I’m unconvinced that ole_koksvik’s fluency in English has anything to do with it. Fluent English speakers misspell words like “idiosyncratic” regularly, and I and other fluent English speakers also find your posts difficult to follow. I generally end up skimming them, because the ratio of content to unnecessary verbosity is really low. If your goal is to get your evidence and views onto the table as quickly as possible, consider that your current strategy isn’t getting them out there at all for some portion of your audience, and that a short delay for editing could significantly expand your reach.
Yeah, that has become abundantly clear to me with how many upvotes these comments were receiving. I’ve received feedback on this before, but never with such a strong signal before. Sometimes I have different goals with my public writing at different times. So it’s not always my intention for how I write to be maximally accessible to everyone. I usually know who reads my posts, and why they appreciate them, as I receive a lot of positive feedback as well. It’s evident I’ve generalized that in this thread to the point it’s hurting the general impact of spreading my message. So I completely agree. Thanks for the feedback :)
My hunch is even when there’s a time crunch, fewer words will be bigger bang for buck :-)
Seconded. As a time-saving measure, I skip any comments longer than three paragraphs unless the first couple of sentences makes their importance very clear. Unfortunately, that means I miss most of Evan’s posts. :(
Would it help if I included a summary of my posts at the top of them?
Often I write for a specific audience, which is more limited and exclusive. I don’t think there is anything necessarily wrong with taking this approach to discourse in EA. Top-level posts on the EA Forum are made specific to a single cause, written in an academic style for a niche audience. I’ve mentally generalized this to how I write about anything on the internet.
It turns out not writing in a more inclusive way is harming the impact of my messages more than I thought. I’ll make more effort to change this. Thanks for the feedback.
FYI, I a) struggle to read most of your posts (and I seem to be in the target audience)
b) the technique I myself use is “write the post the way I’d naturally write it (i.e. long and meandering), and then write a tldr of the post summarizing it with a few bullet points… and then realize that the tldr was all I actually needed to say in the first place.
Yes, an early summary would help. It doesn’t have to be very formal; just a clear statement of your argument in the first paragraph.
If you’re going to argue multiple things, you could use different comments.
Of course. What I was trying to explain is when there is a time crunch, I’ve habituated myself to use more words. Obviously it’s a habit worth changing. Thanks for the feedback :)
Yes, the old adage: “I don’t have time to write short texts.”
About two years have now passed since the post. Main updates:
Leverage Research appears to be just four people. They have announced new plans, and released a short introduction to their interests in early stage science, but not any other work. Their history of Leverage Research appears to have stalled at the fourth chapter.
Reserve seems to be ten people, about seven of whom were involved with Leverage Research. Reserve Rights is up by about 160% since being floated two years ago.
Paradigm Academy is now branding itself as a self-help organisation.
I honestly don’t get all this stuff about not publishing your work. Time to brag, boy will I get shit on for this comment, but it’s really relevant to the issue here: I never even had a minor in the subject, but when I had a good philosophical argument I got it published in a journal, and it wasn’t that hard. Peer reviewed, not predatory, went through three rounds of revisions. Not a prestigious journal by any stretch of the imagination, but it proves that I knew what I was doing, which is good enough. You think that peer review is bullshit, fine: that means it’s not that hard. With your supposedly superior understanding of academic incentives and meta-science and all that stuff, I’m sure you can dress up something so that it tickles the reviewers in the right way. Not wanting to mess with it most of the time is understandable, but you can still do us the courtesy of at least getting one or two things through the gauntlet so that we aren’t left scratching our heads in confusion about whether we’re looking at Kripke or Timecube or something in between. MIRI did it so you can too. Plus, it sounds like lots of this research is being kept hidden from public view entirely, which I just can’t fathom.
The movement building sounds like good stuff however, I’m happy to see that.
Just a note: I think this might be a bit misleading. Geoff and other members of Leverage Research taught a version of goal factoring at some early CFAR workshops, and Leverage did develop a version of goal factoring inspired by CT. But my understanding is that CFAR staff independently developed goal factoring (starting from an attempt to teach applied consequentialism), and this is an instance of parallel development.
[I work for CFAR, though I had not yet joined the EA or rationality community in those early days. I am reporting what other longstanding CFAR staff told me.]
Some participants of the Pareto fellowship have told me that Leverage resembles a cult. I can’t remember many specifics. One thing is that the main guy (Geoff Anders?) thinks, 100% in earnest, that he’s the greatest philosopher who’s ever lived.
The CEA, the very organization you juxtaposed with Leverage and Paradigm in this comment, has in the past been compared to a Ponzi scheme. Effective altruists who otherwise appreciated that criticism thought much of its value was lost in the comparison, and that without it, the criticism might have been better received. Additionally, LessWrong and the rationality community; CFAR and MIRI; and all of AI safety have for years been smeared as a cult by their detractors. The rationality community isn’t perfect, and there is no guarantee that interactions with a self-identified (aspiring) rationality community will “rationally” go however the individuals or small groups interacting with it, online or in person, hope or expect. But the vast majority of effective altruists, even those who are cynical about these organizations or sub-communities within EA, disagree with how these organizations have been treated, for it poisons the well of goodwill in EA for everyone.

In this comment, you stated your past experience with the Pareto Fellowship and Leverage left you feeling humiliated and manipulated. I’ve also been a vocal critic in person throughout the EA community of both Leverage Research and how Geoff Anders has led the organization. But elevating a personal opposition into a public exposure of opposition research, in an attempt to tarnish an event they’re supporting alongside many other parties in EA, is not something I ever did, or will do. My contacts in EA and I have followed Leverage. I’ve desisted from making posts like this myself, because in digging for context I found Leverage has changed from any impression I’d gotten of them. That’s why at first I was skeptical of attending the EA Summit; but upon reflection, I realized the evidence didn’t support the conclusion that Leverage is so incapable of change that anything they’re associated with should be distrusted.

What you’re trying to do with Leverage Research is no different from what EA’s worst critics do, not in an effort to change EA or its members, but to tarnish them. From within or outside of EA, to criticize any EA organization in such a fashion is below any acceptable epistemic standard in this movement.
If the post and comments here are stating facts about Leverage Research, and you’re reporting impressions of Leverage as like a cult with no ability to remember specific details, those are barely facts. The only fact is that some people perceived Leverage to be like a cult in the past, which is only anecdote; and without details, it’s only hearsay. Combined with the severity of the consequences if this hearsay were borne out, the inability to produce actual facts invalidates the point you’re trying to make.
I’m confused: the comments on Less Wrong you’d see by “person” and “personN” that were the same person happened when importing from Overcoming Bias. That wouldn’t be happening here.
They might still be the same person, but I don’t think this forum being descended from LessWrong’s code tells us things one way or the other.
Thanks. I wasn’t aware of that. I’ll redact that part of my comment.
I don’t feel comfortable sharing the reasons for remaining anonymous in public, but I would be happy to disclose my identity to a trustworthy person to prove that this is my only fake account.
Upvoted. I’m sorry for the ambiguity of my comment. I meant that the posts here under the usernames “throwaway,” “throwaway2,” and “anonymous” are each consistently being made by the same three people, respectively. I was just clarifying up front, as I was addressing you, that for others reading, it’s almost certainly the same anonymous individual making the comments under the same account. I wouldn’t expect you to forgo your anonymity.
Your comments seem to be way longer than they need to be because you don’t trust other users here. Like, if someone comes and says they felt like it was a cult, I’m just going to think “OK, someone felt like it was a cult.” I’m not going to assume that they are doing secret blood rituals, I’m not going to assume that it’s a proven fact. I don’t need all these qualifications about the difference between cultishness and a stereotypical cult, I don’t need all these qualifications about the inherent uncertainty of the issue, that stuff is old hat. This is the EA Forum, an internal space where issues are supposed to be worked out calmly; surely here, if anywhere, is a place where frank criticism is okay, and where we can extend the benefit of the doubt. I think you’re wasting a lot of time, and implicitly signaling that the issue is more of a drama mine than it should be.
I admit I’m coming from a place of not entirely trusting all other users here. That may be a factor in why my comments are longer in this thread than they need to be; I tend to write more than is necessary in general. For what it’s worth, I treat the EA Forum not as an internal space but as how I’d ideally like to see it used: a primary platform for EA discourse, with a level of activity more akin to the ‘Effective Altruism’ Facebook group, or LessWrong.
I admit I’ve been wasting time. I’ve stopped responding directly to the OP because, if I’m coming across as implicitly signaling this issue is a drama mine, I should come out and say what I actually believe. I may make a top-level post about it. I haven’t decided yet.
“Compared to a Ponzi scheme” seems like a pretty unfortunate compression of what I actually wrote. Better would be to say that I claimed that a large share of ventures, including a large subset of EA, and the US government, have substantial structural similarities to Ponzi schemes.
Maybe my criticism would have been better received if I’d left out the part that seems to be hard for people to understand; but then it would have been different and less important criticism.
[epistemic status: meta]
Summary: Reading comments in this thread, which are similar to reactions I’ve seen you and other rationality bloggers receive from effective altruists on critical posts regarding EA, I think there is a pattern to how rationalists tend to write on important topics that doesn’t gel with the typical EA mindset. Consequently, it seems the pragmatic thing for us to do would be to figure out how to alter how we write to get our message across to a broader audience.
Upvoted.
I don’t know if you’ve read some of the other comments in this thread, but some of the most upvoted ones are about how I need to change up my writing style. So unfortunate compressions of what I actually write aren’t new to me, either. I’m sorry I compressed what you actually wrote. But even an accurate compression might make my comments too long for what most users prefer on the EA Forum, and if I just linked to your original post, it would be too long for us to read.
I spend more of my time on EA projects. If there were more promising projects coming out of the rationality community, I’d spend more time on them relative to EA projects; I go where the action is. Socially, I’m as involved with the rationality community as with EA, if not more so.
From my inside view, here is how I’d describe the common problem with my writing on the EA Forum: I came here from LessWrong, and relative to LW, I haven’t found what I write on the EA Forum too long. But that’s because I’m anchoring on an expectation that EA discourse looks like SSC 100% of the time. Since the majority of EAs don’t self-identify as rationalists, and the movement is so intellectually diverse, the EA Forum can’t be expected to follow a discourse style common to the rationalist diaspora.
I’ve touched upon this issue with Ray Arnold before, and Zvi has touched on it too in some of his blog posts about EA. A crude rationalist impression might be that the problem with discourse on the EA Forum is that it isn’t LW. In terms of genres of creative non-fiction writing, the EA Forum is less tolerant of diversity than LW. That’s fine. Thinking about this consequentially, I think rationalists who want their message heard by EA don’t need to learn to write better, but to write differently.
Hi everyone,
I’d like to start off by apologizing. I realize that it has been hard to understand what Leverage has been doing, and I think that that’s my fault. Last month Kerry Vaughan convinced me that I needed a new approach to PR and public engagement, and so I’ve been thinking about what to write and say about this. My plan, apart from the post here, was to post something over the next month. So I’ll give a brief response to the points here and then make a longer main post early next week [UPDATE: see 2nd edit below].
(1) I’m sorry for the lack of transparency and public engagement. We did a lot more of this in 2011-2012, but did not really succeed in getting people to understand us. After that, I decided we should just focus on our research. I think people expect more public engagement, even very early in the research process, and that I did not understand this.
(2) We do not consider ourselves an EA organization. We do not solicit funds from individual EAs. Instead, we are EA-friendly, in that (a) we employ many EAs, (b) we let people run EA projects, and (c) we contribute to EA causes, especially EA movement building. As noted in the post, we ran the EA Summit 2013 and EA Summit 2014. These were the precursors to the EA Global conferences. For a sense of what these were like, see the EA Summit 2013 video. We also ran the EA Retreat 2014 and helped out operationally with EA Global 2015. We also founded THINK, the first EA movement group network.
(3) We are probably not the correct donation choice for most EAs. We care about reason, evidence, and impact, but we are much higher variance than most EAs would like. We believe there is evidence that we are more effective than regular charities due to our contributions to movement building. These can be thought of as “impact offsets”. (See (6) for more on the EV calculation.)
(4) We are also probably not the correct employment choice for most EAs. We are looking for people with particular skills and characteristics (e.g., ambition, dedication to reason and evidence, self-improvement). These make CFAR our largest EA competitor for talent, though in actual fact we have not ended up competing substantially with them. In general if people are excited about CEA or 80k or Charity Science or GiveWell or OPP, then we typically also expect that they are better served by working there.
(5) Despite (3) and (4), we are of course very interested in meeting EAs who would be good potential donors or recruits. We definitely recruit at EA events, though again we think that most EAs would be better served by working elsewhere.
(6) To do a full EV calculation on Leverage, it is important to take into account the counterfactual cost of employees who would work on other EA projects. We think that taking this into account, counting our research as 0 value, and using the movement building impact estimates from LEAN, we come out well on EV compared to an average charity. This is because of our contribution to EA movement building and because EA movement building is so valuable. (Rather than give a specific Fermi estimate, I will let readers make their own calculations.) Of course, under these assumptions donating to movement building alone is higher EV than donating to Leverage. Donors should only consider us if they assign greater than 0 value to our research.
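To make this style of calculation concrete, here is a minimal sketch in Python. All parameter values are placeholder assumptions chosen for illustration, not Leverage’s actual figures, and readers should substitute their own estimates:

```python
# Sketch of the EV framing above: count research at a chosen value (default 0)
# and credit only the movement-building share of staff time.
# All numbers are illustrative assumptions, not actual figures for Leverage.

def leverage_ev(person_years, mb_share, value_per_mb_year, research_value=0.0):
    """Expected value in arbitrary 'impact units'."""
    return person_years * mb_share * value_per_mb_year + research_value

# Example: 100 person-years, 5% of time on movement building,
# 10 units per movement-building year, research counted as 0.
print(leverage_ev(100, 0.05, 10))  # -> 50.0 impact units
```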
I hope that that clarifies to some degree Leverage’s relation to the EA movement. I’ll respond to the specific points above later today.
As for the EA Summit 2018, we agree that everyone should talk with people they know before attending. This is true of any multi-day event. Time is valuable, and it’s a good idea to get evidence of the value prior to attending.
(Leverage will not be officially presenting any content at the EA Summit 2018, so people who would like to learn more should contact us here. My own talk will be about how to plan ambitious projects.)
EDIT: I said in my earlier comment that I would write again this evening. I’ll just add a few things to my original post.
— Many of the things listed in the original post are simply good practice. Workshops should track participants to ensure the quality of their experience and that they are receiving value. CFAR also does this. Organizations engaged in recruitment should seek to proactively identify qualified candidates. I’ve spoken to the leaders of multiple organizations who do this.
— Part of what we do is help people to understand themselves better via introspection and psychological frameworks. Many people find this both interesting and useful. All of the mind mapping we did was with the full knowledge and consent of the person, at their request, typically with them watching and error-checking as we went. (I say “did” because we stopped making full mind maps in 2015.) This is just a normal part of showing people what we do. It makes sense for prospective recruits and donors to seek an in-depth look at our tools prior to becoming more involved. We also have strict privacy rules and do not share personal information from charting sessions without explicit permission from the person. This is true for everyone we work with, including prospective recruits and donors.
EDIT: I’ve changed my plans here. People who are interested in learning more, etc., can contact me or my staff via email. (cf my above comment)
Hi Geoff. I gave this a little thought and I am not sure it works. In fact, it looks quite plausible that someone’s EV (expected value) calculation on Leverage might actually come out as negative (i.e., Leverage would be causing harm to the world).
This is because:
Most EA orgs calculate their counterfactual expected value by taking into account what the people in that organisation would be doing otherwise if they were not in that organisation, and then deducting this from their impact. (I believe at least 80K, Charity Science and EA London do this.)
Given Leverage’s tendency to hire ambitious altruistic people and to look for people at EA events it is plausible that a significant proportion of Leverage staff might well have ended up at other EA organisations.
There is a talent gap at other EA organisations (see 80K on this)
Leverage does spend some time on movement building, but I estimate that this is a tiny proportion of the time, <5%, best guess 3% (based on having talked to people at Leverage, and on comparing your achievements to date with the apparent 100 person-years figure).
Therefore, if the proportion of staff who would otherwise have found jobs at other EA organisations is thought to be above 3% (which seems reasonable), then Leverage is actually displacing EAs from productive action, and the total EV of Leverage is negative (a minimal sketch of this arithmetic appears below).
Of course, this all assumes the value of your research is 0, which is the assumption you set out in your post. In practice I don’t think the value of your research is 0, and as such I think it is possible that the total EV of Leverage is positive*. I think more transparency would help here. Given that almost no research is available, I do think it would be reasonable for someone outside Leverage to give your research an EV close to 0 and therefore conclude that Leverage is causing harm.
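To make the displacement arithmetic concrete, here is a minimal sketch extending the framing from Geoff’s comment with the counterfactual term; every parameter value is an illustrative assumption, not a real figure:

```python
# Net EV with the counterfactual displacement term: staff who would otherwise
# have worked at other EA organisations count as a cost. Illustrative only.

def leverage_net_ev(person_years, mb_share, displaced_share,
                    value_per_ea_year, research_value=0.0):
    gross = person_years * mb_share * value_per_ea_year + research_value
    counterfactual_cost = person_years * displaced_share * value_per_ea_year
    return gross - counterfactual_cost

# With research valued at 0, the sign depends only on whether the
# movement-building share exceeds the displaced share:
print(leverage_net_ev(100, mb_share=0.03, displaced_share=0.05,
                      value_per_ea_year=10))
# -> -20.0: net harm under these assumed numbers
```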
I hope this helps, and maybe explains why Leverage gets a bad rep. I am excited to see more transparency and a new approach to public engagement. Keep on fighting for a better world!
*sentence edited to better match views
Could you comment specifically on the Wayback Machine exclusion? Thanks!
Hi Geoff,
In reading this I’m confused about the relationship between Paradigm and Leverage. People in this thread (well, mostly Evan) seem to be talking about them as if Leverage incubated Paradigm but the two are now fully separate. My understanding, however, was that the two organizations function more like two branches of a single entity? I don’t have a full picture or anything, but I thought you ran both organizations, staff of both mostly live at Leverage, people move freely between the two as needed by projects, and what happens under each organization is more a matter of strategy than separate direction?
By analogy, I had thought the relationship of Leverage to Paradigm was much more like CEA vs GWWC (two brands of the same organization) or even CEA UK vs CEA USA (two organizations acting together as one brand) than CEA vs ACE (one organization that spun off another one, which now operates entirely independently with no overlap of staff etc).
Jeff
Hi Jeff,
Sure, happy to try to clarify. I run both Leverage and Paradigm. Leverage is a non-profit and focuses on research. Paradigm is a for-profit and focuses on training and project incubation. The people in both organizations closely coordinate. My current expectation is that I will eventually hand Leverage off while working to keep the people on both projects working together.
I think this means we’re similar to MIRI/CFAR. They started with a single organization which led to the creation of a new organization. Over time, their organizations came to be under distinct leadership, while still closely coordinating.
To understand Leverage and Paradigm, it’s also important to note that we are much more decentralized than most organizations. We grant members of our teams substantial autonomy in both determining their day-to-day work and with regard to starting new projects.
On residence, new hires typically live at our main building for a few months to give them a place to land and then move out. Currently less than 1⁄3 of the total staff live on-site.
Thanks for clarifying!
Two takeaways for me:
Use of both the “Paradigm” and “Leverage” names isn’t a reputational dodge, contra throwaway in the original post. The two groups focus on different work and are in the process of fully dividing.
People using what they know about Leverage to inform their views of Paradigm is reasonable given their level of overlap in staff and culture, contra Evan here and here.
Did you end up posting anything on this subject?
What have you done to promote movement building? I didn’t see anything on the post or your website, other than the summit next week.
Leverage:
(1) founded THINK, the first EA student group network
(2) ran the EA Summit 2013, the first large EA conference (video)
(3) ran the EA Summit 2014
(4) ran the EA Retreat 2014, the first weeklong retreat for EA leaders
(5) handed off the EA Summit series to CEA; CEA renamed it EA Global
(6) helped out operationally with EA Global 2015.
Could you please specify which methods of introspection and psychological frameworks you employ to this end, and what evidence you use to ensure these frameworks are based on adequate scientific evidence, obtained by reliable methods?
“we let people run EA projects”
Your own words betray you...
Given that, by their own admission in a comment response to the original post, the author is providing these facts so effective altruists can make an informed decision about attending the 2018 EA Summit, with the expectation that these facts can or will discourage EAs from attending, it’s unclear how these facts are relevant information.
In particular, no calculation or citation is provided for the estimate that Leverage has consumed over 100 person-years of human capital. Numbers from nowhere aren’t facts, so this isn’t even a fact.
Regardless, no context or reference is given for why these numbers matter, e.g., a contrast between Leverage and what popular EA organizations have accomplished over similar timeframes or person-years of human capital consumed.
As comments from myself; Tara MacAulay, former CEO of the CEA; and Geoff Anders, executive director of Leverage, have made clear, Leverage:
has never solicited, and does not intend to solicit, donations from individual members of the EA community at large.
has in the past identified as part of the EA movement, and was formative to the movement in its earlier years, but now identifies as distinct from EA, while still respecting EA and collaborating with EA organizations where their goals overlap with Leverage’s.
does not present itself as effective or impactful using the evaluation criteria most typical of EA, and shouldn’t be evaluated on those grounds, as has been corroborated by EA organizations which have collaborated with Leverage in the past.
Based on this, the ~$2 million Leverage spent from 2012-16 shouldn’t be regarded, as a lump sum, as having been spent under an EA framework or on EA grounds, nor evaluated as a means to discourage individual effective altruists from forming independent associations with Leverage distinct from EA as a community. Both EA and Leverage confirm that Leverage identified as an EA organization in the past, but for the present and the last few years should not be thought of as one. Thus, arguing that Leverage is deceiving the EA movement, on the grounds that it stakes a claim on EA without being effective, is invalid, because Leverage does no such thing.
While this, like the facts in the above section, is indeed a fact, I fail to see how it’s notable regarding Leverage’s recruitment transparency. I’ve also in the past criticized double standards regarding transparency in the EA movement, i.e., that organizations in EA should not form secret fora to the exclusion of others, because necessary privacy among and between EA organizations can be ensured using things like private email, Slack channels, etc. What’s more, every EA organization I or others I’ve talked to have volunteered for has something like a Slack channel. When digital communications internal to an organization are necessary to its operation, it is standard practice to use something like an internal mailing list or Slack channel exclusive to staff. That the Pareto Fellowship or Leverage Research would have Slack channels for evaluating potential fellows on an individual basis may be unusual among EA organizations, but it’s not unheard of in how competent organizations operate. It also has no bearing on whether Leverage appeals to transparency while being opaque in a way other organizations associated with EA aren’t.
Also, since you’re seeking as much transparency about Leverage as possible, I expect your presentation will be transparent in kind. Thus, would you mind identifying the EA organization that collaborated with Leverage on the Pareto Fellowship you’re referring to?
As with the last statement, this may be unusual among EA organizations, but it dates from Leverage’s past identification as an EA organization, which it no longer maintains. There is nothing about this which is inherently a counter-effective organizational or community practice inside or outside of the EA movement, nor does it have direct relevance to transparency or to the author’s goal with this post.
Who?
As with other statements, I don’t understand how transparently exposing this practice is meant, as a fact, to back the author’s goal with this post, nor how it should move readers’ impression of Leverage in either direction.
Given that a number of claims in this post and in the comments from the same author presented as facts aren’t in fact facts, and that so many of the facts stated are presented without context or relevance to the author’s goal, I’d like to see this claim substantiated by any evidence whatsoever. Otherwise, I won’t find it credible enough to be believable.
In short, regarding the assorted facts, the author of this post (by their own admission in a comment response) is trying to prove something, and I can’t perceive how these facts and other claims advance that goal. So my question to the author is: what is your point?
Meta:
It might be worthwhile to have some sort of flag or content warning for potentially controversial posts like this.
On the other hand, this could be misused by people who dislike the EA movement, who could use it as a search parameter to find and “signal-boost” content that looks bad when taken out of context.
What are the benefits of this suggestion?
This is a romp through meadows of daisies and sunflowers compared to what real Internet drama looks like. It’s perfectly healthy for a bunch of people to report on their negative experiences and debate the effectiveness of an organization. It will only look controversial if you frame it as controversial; people will only think it is a big deal if you act like it is a big deal.
I agree with kbog: while this is unusual discourse for the EA Forum, it is still far above the bar where I think it’s practical to worry about controversy. If someone thinks the content of a post on the EA Forum might trigger some readers, I don’t see anything wrong with including content warnings on posts. I’m unsure what you mean by “flagging” potentially controversial content.