Hey JP, don’t worry I won’t hold you to anything :-) I know you guys have a lot on. I think the feature of setting your own moderation guidelines at a certain karma level is a good one. It encourages top posters and also encourages the whole community to take more responsibility for good-quality conversations. If you did get the chance to figure out how this currently works on the EA Forum (e.g. at what karma level do individuals get which moderation features) and perhaps enact something similar to the LessWrong version, that would be cool. Let me know what you end up deciding to do.
LarissaHeskethRowe
Hi Buck, Ozzie and Greg,
I thought I’d just add some data from my own experience.
For context, I’ve been heavily involved in the EA community, most recently running CEA. After I left CEA, I spent the summer researching what to do next and recently decided to join the Leverage Research team. I’m speaking personally here, not on behalf of Leverage.
I wanted to second Ozzie’s comment. My personal experience at least is that I’ve found the Leverage and Paradigm teams really welcoming.
They do employ people with a wide range of political views, with the idea that a diversity of viewpoints helps research progress. Sometimes this means looking at difficult topics, and I’ve sometimes found it uncomfortable to challenge why I hold the views I do, but I’ve always found that the focus is on understanding ideas and that the attitude towards individual people is one of deep respect. I’ve found this refreshing.
I wanted to thank Ozzie for posting this, in part because I noticed a reticence in myself about saying anything: my experience with conversations about Leverage is that they can get weird and personal quite fast. I know people who’ve posted positive things about Leverage on the EA Forum and then been given grief for it online and offline.
For this reason, Greg, I can see why Leverage don’t engage much with the EA Forum. You and I know each other fairly well and I respect your views on a lot of topics (I was keen to seek you out for advice this summer). I notably avoided discussing Leverage, though, because I expected an unpleasant experience and believed I had more information on the topic from investigating them myself. This feels like a real shame. Perhaps I could chat with you (and potentially others) about what you’d like to see written up by Leverage. I’m happy to commit to specific Leverage-related posts if you can help ensure that turns into a genuinely useful discussion. What do you think? :-)
I notice that some writers (e.g. Hauke in his recent post about the cost-effectiveness of climate change interventions) have a note just above the comments section about their personal commenting guidelines. This seems like a potentially really useful feature. Not every user seems to have this though.
EDIT: I assume this relates to the feature on LessWrong described here where users can add moderation guidelines to their own posts so that they can treat LessWrong like a personal blog to some degree. This seems like a good way to encourage more people to post. Is there a similar feature on the EA Forum? If so, how does it work here?
Thanks so much for this list. It’s really helpful.
I’m having a quick look for ones that are on Audible, Parker_Whitfill. I’m not checking all of them, but using the UK Audible market I found the following ones:
The Technology Trap: Capital, Labor, and Power in the Age of Automation by Carl Benedikt Frey
The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter by Joseph Henrich
Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals by Tyler Cowen
The Pursuit of Power: Technology, Armed Force and Society Since A.D. 1000 by William H. McNeill
The Fate of Rome: Climate, Disease, and the End of an Empire by Kyle Harper
The Age of Em: Work, Love and Life when Robots Rule the Earth by Robin Hanson
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Expert Political Judgment: How Good Is It? How Can We Know? by Philip E. Tetlock
What was the reason for Matt Fallshaw stepping down?
Hi Khorton,
Thanks for commenting. These are definitely important areas. Improving project management, time management and prioritisation, as well as external communication, are a priority of mine for CEA at the moment. We’re working on planning projects further in advance with realistic timelines and communicating clearly about their status. We’ve been working on internal systems and training to try and improve this. I think we made significant improvements to our hiring process this year but there’s still a lot to do. We’re currently beginning a review of the process to improve it for next year, including analysing the timeline versus our project plan in order to improve future time estimates.
Please do keep sharing feedback so that we know how we’re doing and can continue to improve.
CEA is Fundraising for 2019
Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans
I think you’re probably right on this when it comes to donations, as less money doesn’t necessarily mean less sleep or less time with friends. However, the article seems to be talking more about working, whether that means a high-paid job with long hours, volunteering in all of your spare time, or working long hours in an EA role you love. You’re still probably right that many people can push themselves more than they currently are. Any suggestions on how to identify where the line is for an individual would be really interesting to discuss.
I think these would be great to give to a slightly EA-aligned friend, but it might feel awkward to buy for someone with little knowledge of or interest in EA for a birthday etc., because it wouldn’t necessarily be something you could claim they wanted.
Does anyone have any ideas about how to perhaps quantify whether you’ve made a “significant” career change? Not that that necessarily means you couldn’t donate 10%. Hours spent volunteering would be interesting.
Hi Greg,
Thanks for the message and for engaging at the level of what has Leverage achieved and what is it doing. The tone of your reply made me more comfortable in replying and more interested in sharing things about their work so thank you!
Leverage are currently working on a series of posts that are aimed at covering what has been happening at Leverage from its inception in 2011 up until a recent restructure this year. I expect this series to cover what Leverage and associated organisations were working on and what they achieved. This means that I expect Leverage to answer all of your questions in a lot more depth in the future. However, I understand that people have been waiting a long time for us to be more transparent so below I have written out some more informal answers to your questions from my understanding of Leverage to help in the meantime.
Another good way to get a quick overview of the kinds of things Leverage has been working on, beyond my notes below, is to check out this survey that we recently sent to workshop participants. It’s designed for people who’ve engaged directly with our content, so it won’t necessarily be relevant for others to fill in, but it gives an overview of the kinds of techniques Leverage developed and the areas they researched.
What did Leverage 1.0 work on?
A very brief summary is that for its first eight and a half years, Leverage (let’s call this “Leverage 1.0” as a catch-all for those organisations before the restructure) was at first a prioritisation research project looking at what people should work on if they want to improve the world. Leverage 1.0 later came to focus more on understanding and improving people as their psychological frameworks and training tools developed, but they still conducted a wide range of research.
This means that in the very early days they were thinking a lot about how to prioritise, how to make good long-term plans, and just trying a bunch of things. I get the impression that at this stage almost nothing was ruled out in terms of what might be worth exploring if you wanted to improve the world. This meant people investigating all sorts of things like technological intelligence amplification, nootropics, and conducting polyphasic sleep experiments. People might be researching what caused the civilisational collapse that led to the dark ages, the beliefs of a particular Christian sect, or what led to the development of Newtonian physics. Leverage felt this was important for research progress. They wanted researchers to follow what motivated them. They thought it was important to investigate a lot of areas before deciding where to focus their efforts, because deciding what to prioritise is so important to overall impact. This felt particularly important when investigating moon-shots, which had the potential to be extremely valuable even if they seemed unlikely at the outset.
Some of the outputs of these early days of research included training sessions on:
Planning—How to build and error-check plans for achieving your goals
Expert assessment—how to determine if someone is an expert in a given domain when you lack domain knowledge
Learning to learn—how to improve and expand the scope of your learning process
Theorizing—how to build models and improve your model building process over time
Prioritisation and goal setting—how to find your goals, back chain plans from them etc
This is far from everything but gives you a flavour.
Geoff had developed a basic model of psychology called Connection Theory (CT) so this was a thing that was investigated alongside everything else. This involved spending a lot of time testing the various assumptions in CT.
Through experimenting with CT in this way, Leverage eventually found they were able to use ideas from it to make some basic predictions about individual and group behaviour, help individuals identify and remove bottlenecks so that they could self-improve, and perhaps even identify and share specific mental moves people were using to make research progress on particular questions. This made the team more excited about psychology research in particular (amongst the array of things people were researching) as a way to improve the world.
From there they (alongside the newly founded Paradigm Academy) developed some of the research into things like
One-on-one and group training
A catalogue of different mental procedures individuals use when conducting research, so that these procedures could be taught to others for tackling research and other problems. One example intellectual procedure (IP), just to give a sense of this, is the Proposal IP, where you use the fact that you have a lot of implicit content in your mind, and your taste response to inelegant proposals, to speed up your thinking in an area.
Specific training in strategy, theorizing, and research
A collection of specific introspection and self-improvement techniques such as:
Self-alignment (a tool for increasing introspective access which handled a class of cases where our tools previously weren’t working)
Anti-avoidance techniques (making it so you can think clearly in areas you previously didn’t want to think about or had fuzzy thoughts in)
Charting (a belief change tool that has been modified and built out a lot since its initial release)
Mythos (a tool for introspection with imagery, helpful for more visual people)
Integration and de-zoning (tools for helping people connect previously separate models)
What is Leverage doing now?
As for what Leverage is currently working on, once we have posted our retrospective we’ll then be updating Leverage’s website to reflect its current staff and focus so again a better update than I can provide is pending.
The teaser here is that, from the various research threads pursued in the early years of Leverage 1.0, Leverage today has narrowed its focus to the two areas it found most promising:
Scientific methodology research
Psychology
We also continue to be interested in sociology research and expect to bring on research fellows (either full time or part of future fellowship programmes) focusing on sociology in the future. However, since we’re relaunching our website and research programme we want to stay focused so we’re punting building out more of our sociology work to further down the line.
The scientific methodology research involves continuing to look at historical examples of scientific breakthroughs in order to develop better models of how progress is made. This continues some of our early threads of research in theorising, methodology, historical case studies and the history of science. We’re particularly interested in how progress was made in the earlier stages of the development of a theory or technology. Some examples include looking at what led to the transition in chemistry from phlogiston to Lavoisier’s oxygen theory, or the challenges scientists had in verifying findings from the first telescopes. We aim to share lessons from this research with researchers in a variety of fields. In particular, we want to support research that is in its earlier, more explorative stages. This is more of a moon-shot area, which means it tends to get less attention while being potentially high reward.
Our psychology research aims to continue to build on the progress and various research threads Leverage 1.0 was following. While this is quite a moon-shot-style bet, if we can improve our understanding of people then we potentially improve the ways in which they work together to solve important problems. At this point, we have developed tools for looking at the mind and mental structures that we think work fairly well on the demographics of people we’ve been working with. I got a ballpark estimate from someone at Leverage that Leverage and Paradigm have worked with around 400 people for shallower training, and about 60 for in-depth work, but treat those figures as a guess until we write something up formally. We’ve focused in the last few years on improving these tools so they work in harder cases (e.g. people who have trouble introspecting initially) and on using the tools to find common mental structures. Moving forward with this research, we want to test the tools in a more rigorous way, in particular by communicating with people in academia to see whether or not they can validate our work.
One thing I personally like about the plans for psychology research is that it also acts as a check on our scientific methodology research. If the insights we gain from looking at the history of scientific progress aren’t useful to us in making progress in psychology then that’s one negative sign on their overall usefulness.
Who works for Leverage and Paradigm?
The team is much smaller and the organisation structure slightly more defined (although there is a way to go here still). There are four researchers (including Geoff who is also the Executive Director) and I’ll be joining as a Program Manager managing the researchers and helping communicate with the public about our work. So four in total at the moment, five once I start.
While Leverage Research in its newer form is getting going, it still receives a lot of help from its sister organisation, Paradigm Academy. This means that while they are two separate organisations, Paradigm staff currently give a lot of time to helping Leverage, particularly in areas like operations, PR and communications, such as the website relaunch. This allows the researchers to focus on their research and means the burden of public communication won’t all fall on their newest employee (me). Once a lot of that is done, though, we expect to make the division between the two organisations clearer. Paradigm currently has nine employees, including Geoff.
I expect all of this will generate more questions than it answers at the moment, and while my answer is to wait for Leverage’s formal content to be published, I can see why this is frustrating. I hope my examples give a small amount of insight into our work while we take the time to write things up. You have every reason to be sceptical about Leverage posting content, given various promises made in the past; given our track record on public communication, that scepticism is valid. All I can perhaps offer in the meantime is that I personally am very keen to see both the retrospective and the new Leverage website published, and the get-sh*t-done spirit that you and others on this forum know me for is part of the reason they offered me a job to help with this in the first place.
Why I chose to work at Leverage
As for my personal reasons for choosing to accept an offer from Leverage, I expect this to be hard to transmit just because of inferential distance. My decision was the result of at least five months of discussions, personal research and resultant updates, all of which built on various assumptions that had caused me to pursue the plans I was already pursuing at CEA.
I’ll attempt a short version here anyway in case it’s helpful. If there’s a lot of interest I’ll consider writing this up but I’m not sure it’ll be sufficiently useful or interesting to be worth the time cost.
Broadly speaking, I created a framework (building off a lot of 80K’s work but adapting it to suit my needs) to compare options on:
Impact - comparing potential career plans by looking at the scale and likelihood of success across:
the problem being tackled (e.g. preventing human extinction),
the approach to solving that problem (e.g. develop AGI in a way that’s safe)
the organisation (e.g. DeepMind)
and what I personally could contribute (e.g. in a role such as Project Manager)
Personal happiness (personal fit with the culture, how the job would fit into my life etc)
Future potential (what skills would I build and how useful are they, and what flexible resources such as useful knowledge or network would I gain)
I decided that I was willing to bet at least a few more years of my career on the more moon shot type plans to build a much better future (something like continuing personally to follow CEA’s vision of working towards an optimal world).
This narrowed my focus down to primarily considering paths related to avoiding existential risks and investing in institutions or advances that would improve humanity’s trajectory. In exploring some options around contributing to AI safety in some way, I came away both unconvinced that I wouldn’t potentially cause harm (through speeding up the development of AGI) and less sure of the arguments that now is a particularly important hinge for this. It therefore seemed prudent to learn a lot more before considering this field.
This left me then both wanting to invest more time in learning more while also not wanting to delay working on something I thought was high impact indefinitely. In terms of impact, the remaining areas were advances or institutions that might improve humanity’s ability to tackle global problems.
I’d had plenty of conversations with various people about Leverage (including many Leverage sceptics) in the past and interacted with Leverage and Paradigm directly to some degree, mostly around their introspection techniques which I personally have found extremely useful for self-improvement. I knew that they were interested in psychology initially as a potential way to improve humanity’s trajectory (but didn’t yet understand the scope of their other research) so I reached out to chat about this. I found that many of the people there had already thought a lot about the kinds of things I was considering as options for improving the long-term future and they had some useful models. Those interactions plus my positive view of their introspection techniques led me to think that Leverage had the most plausible plan given my current uncertainty for improving the long-term future and was likely to be by far the best option for me in terms of self-improvement and gaining the knowledge I wanted for making better future plans. Their recent restructure, desire to establish a more structured organisation and plans to publish a lot of content meant they had an opening for my particular skill set and the rest, as they say, is history.