Updates from Leverage Research: history, mistakes and new focus
I’m posting to share information about Leverage Research, a non-profit research organisation founded in 2011 that was historically involved in EA. In particular, I wanted to share a summary of their history and their new focus following a major restructure.
My main goals for this post are to:
Establish me as a contact person for Leverage
Give readers a better understanding of Leverage in the past and an update on their new focus
Clarify and improve the relationship between Leverage and the EA community
I hope to achieve this by writing three main sections:
About me
Briefly covering my role at Leverage, why I’m the person posting this, and my relationship to the EA community
What is Leverage?
Aiming to give readers a basic understanding of what Leverage was doing in the past and their plans for the future, including a little bit about our relationship with Paradigm Academy.
Leverage and EA: Our mistakes and what to expect from Leverage moving forward
Clarifying Leverage’s relationship to the EA community, addressing some concerns and setting out what the EA community can expect from us moving forward. I wrote this section for a narrower audience of people who’ve interacted with Leverage in the past and have more context on our past relationship with EA.
Many thanks to everyone who has provided feedback or otherwise helped me in bringing together this post.
1. About me
My name is Larissa Hesketh-Rowe. I recently accepted a job at Leverage Research. I’m currently supporting Leverage with their external communications, although longer-term I expect to take on responsibility for their Research Fellows Program.
I’ve also been an active member of the EA community for many years. I started as a volunteer, group leader, and Giving What We Can member. More recently, I worked at the Centre for Effective Altruism, first in communications and community roles and later as the CEO, running the organisation in 2018. While I no longer work at an EA organisation, I still consider myself a member of the EA community, and I plan to continue to be actively engaged in supporting EA through things like volunteering as a mentor in the Women and Non-Binary Altruism Mentorship (Wanbam) programme and fulfilling my Giving What We Can pledge.
My involvement in EA means I’m in a good position to explain to the EA community what Leverage is doing and clarify Leverage’s relationship to EA. While I don’t expect Leverage to have much direct involvement in EA, both groups are working to improve the world, so I am personally motivated to improve communication between them.
If you ever want to contact Leverage for any reason or have any questions, you can reach out to me at larissa@leverageresearch.org.
2. What is Leverage?
Leverage has been a complicated project with a number of different components. In the past, Leverage was essentially a small community experimenting with how to conduct useful early stage research, studying cause prioritisation and social sciences and running ad-hoc world improvement projects.
One way we’re hoping to make Leverage more intelligible externally is that Geoff Anders (the founder and ED of Leverage) has just started writing an essay series about Leverage’s history. The first post is now up, which includes a list of the essays you can expect to see go up over the coming months. It’s likely Geoff will edit the posts as he gets feedback. The series will explain some of the research avenues Leverage covered and the projects they worked on. Since we wrote most of our content for the internal Leverage community, it’s a long project to work out what to prioritise sharing and how to do that. In this post, I’m merely trying to help readers understand what Leverage was doing rather than share past content.
2.1. “Leverage 1.0” vs “Leverage 2.0”
In the past, the name “Leverage” has been used broadly, not just referring to Leverage the organisation but also nearby groups with which they coordinated. To make this easier to follow, I’ll make the distinction between “Leverage 1.0” and “Leverage 2.0”. I’m using Leverage 1.0 to refer to Leverage from 2011 to 2019, including other organisations that developed out of Leverage such as Paradigm Academy. I’ll use Leverage 2.0 to refer to just the organisation Leverage Research and its staff from the summer of this year onwards. Leverage 2.0 is what we will mean by Leverage moving forward.
2.2. Some ways of understanding Leverage 2011 − 2019 (“Leverage 1.0”):
Below I have attempted to distil nearly nine years of history into its core components. My summary will end up being an oversimplification but hopefully a useful one.
Since deciding to join the Leverage (2.0) team, I have been trying to develop my own understanding of Leverage 1.0. In doing so, I’ve come to the view that the best way to understand Leverage 1.0 (2011 − 2019) is as a combination of:
an experiment in building an effective early stage research community
a cause prioritisation research collaboration
an organisation focused on understanding ideas, individuals and society
a general world improvement project
Much like early members of the EA community, the original Leverage team members were troubled by the many problems in the world: global poverty, totalitarianism, the threat of nuclear war. For humanity to be able to make progress on such issues, Leverage believed that it first needed to understand a lot more about the world. In particular, they thought it was essential to understand better how to conduct high-quality research, what problems in the world are most important to focus on, and how people, institutions, and societies shape the world. Leverage Research was founded to contribute to that understanding.
2.2.1. Leverage as an experiment in building an effective early stage research community
The first challenge was to create a research collaboration that was capable of making progress across a wide scope of potential research avenues.
To try and create a productive research team, Leverage hired and collaborated with people with a wide range of different backgrounds, viewpoints, and credentials and gave them the freedom to investigate whatever seemed appropriate and interesting to them. They explored different ways of conducting research, studied various research traditions, and tried out different debate and discussion formats to help people learn from each other.
Leverage took an experimental approach to setting up the team structure, such that in many ways it can make more sense to think of Leverage 1.0 as a research community than a research organisation. While hiring someone and providing them with a salary was one way to coordinate on research, they also had many more informal collaborations with visiting and external researchers.
Leverage was also run in a reasonably decentralised way. In the early days, researchers were given a lot of autonomy over their research. Later, new researchers joined existing teams, but these teams still developed organically around the research avenues on which people wanted to collaborate. All of this means that Leverage didn’t have what you might think of as a traditional hierarchy, with management centrally directing the research. Instead, the team shared broadly overlapping viewpoints and plans, and the support and advice of the team leaders provided guidance.
2.2.2. Leverage as a cause prioritisation research collaboration
As well as understanding how best to conduct research, Leverage also had to determine what was most important to study in order to improve the world. One can, therefore, also understand Leverage as a cause prioritisation research project.
To tackle the broad scope of potential research avenues, Leverage conducted extremely open-ended research, following numerous research paths to their natural endpoints. A research avenue might reach its natural endpoint if you decided it was too hard to be worth continuing or too dangerous to continue, or if you generated proof of possibility.
If a research path was too hard, it could be swapped for a more manageable problem or deprioritised altogether. If you generated proof of possibility, you could either continue devoting more time to that avenue or not, depending on which open research avenues seemed the most promising.
From the outside, Leverage’s research was understandably confusing because they were prioritising moving through a wide range of research areas as efficiently as possible rather than communicating the results to others. This approach was designed to allow them to cover more ground with their research and narrow in quickly on areas that seemed the most promising.
If you want to read more about the various research avenues Leverage 1.0 explored, keep an eye on Geoff’s website for essays covering these.
2.2.3. Leverage as an organisation focused on understanding ideas, individuals and society
Geoff had a particular interest in psychology, sociology and philosophy as a means to understanding people, societies, and how our understanding of the world has progressed. As Leverage made progress down various research avenues, their work in these areas looked particularly promising.
In psychology, they continued to develop a model of the human mind and the structure of individual beliefs, called Connection Theory (CT), and developed various introspection techniques and processes for overcoming different types of mental blocks. In sociology, they studied group dynamics, past civilisations and institutions. Leverage also learned a lot about abstract methodological research, studied the history of science and built up knowledge about how to conduct early stage research. This latter area, in particular, contributed a lot to the work we are now doing on early stage science (the study of how scientific progress happens in fields without well-developed scientific research programs). More on this in section 2.5 on Leverage Research today.
2.2.4. Leverage as a general world improvement project
Finally, much like the EA community, Leverage did not just want to conduct research from their armchairs, they wanted to put it into practice. Leverage’s sister company Paradigm Academy, for example, developed out of a desire to put some of their findings in psychology into practice by training individuals. Paradigm provides training to individuals and incubates startups.
They also wanted to meet like-minded people and were excited about growing the number of people contributing to pressing problems in the world today. As the Leverage community developed around the same time as EA, Leverage did some work to support the EA community in the early days, before there were more centralised movement-building efforts. For example, Leverage set up THINK, an early EA local group-building project, and ran some of the first EA conferences (e.g. the 2013 and 2014 EA Summits) before handing these over to the Centre for Effective Altruism (CEA).
2.3. Conducting research, not sharing research
Notably, Leverage’s focus was never particularly on sharing research externally. Sometimes this was because the work was a quick exploration of a particular avenue or seemed dangerous to share. Often, though, it was a time trade-off: it takes time to communicate your research well, and this is especially challenging when your research uses unusual methodology or starting assumptions. Geoff will talk more about this in his essay series, and I discuss it a bit further in section 3.2.2: Communication about our work.
2.4. Leverage organisational restructure
By the summer of 2019, Leverage’s primary research areas were mostly functioning as distinct teams. As I have mentioned, Leverage was decentralised in terms of management structure, and the various teams acted autonomously, mostly independent of any overarching management structure. As Leverage grew, they came up against more and more challenges in coordinating across those teams.
For a variety of reasons, it had begun to seem as though much greater central coordination was necessary at Leverage, which would mean more centralised management guiding the teams’ activities. However, the different groups at Leverage had already developed their own team cultures, identities, and plans. Many of the existing staff had initially joined to conduct research under a very open-ended research mandate, so the move to becoming an organisation with more central direction was not appealing to everyone.
For these reasons, after much reflection, the research collaboration that had been Leverage Research (“Leverage 1.0”) was formally disbanded earlier this year and reformed into a research institute focused on early stage science research, including early stage psychology.
Many of the different teams then formally split out to become organisations that are independently funded and run.
2.5. Leverage Research today (“Leverage 2.0”): an early stage science research institute
Following the restructure, Leverage Research (“Leverage 2.0”) stopped focusing on cause prioritisation research and, while creating a productive research environment is still important to us as a research institute, this is no longer a focus of study. Leverage’s more general world improvement projects are either handled by separate organisations (e.g. training at Paradigm Academy) or no longer relevant to our current work (e.g. the past work on EA movement building; more on this in the section below about Leverage’s relationship to the EA community).
Moving forward, Leverage Research will focus on early stage science research. Our new mission is to support scientific progress by educating people about early stage science, funding promising research projects, and conducting our own early stage research, particularly in the social sciences. If you’d like to know more about our work on early stage science, check out this page of our website.
This new direction will entail our researchers engaging with academia, having our work reviewed externally, and connecting with other individuals and institutions involved in early stage science. For this reason, Leverage is now working on publishing more of our work and increasing our public engagement, and we have just released a beta version of our website.
Leverage 2.0 in this form and with this focus is still new, so I expect things to shift a bit as we continue to develop our new strategy and research agenda. We’re still working out which parts of our past content are relevant to our current work and worth writing up, and how to share the rest.
I’m conscious of making this post too long for readers, so if you have any questions about anything I’ve not touched on here, please comment on this post and check out our website for future updates.
2.6. Leverage Research (Leverage 2.0) and Paradigm Academy
Of the various organisations that had been part of Leverage 1.0, Geoff Anders now runs only Paradigm Academy and Leverage Research. The others are all run independently, so I’m afraid I can’t speak for them. Since I sometimes work with Paradigm staff and we share a management team, I may be able to answer questions about Paradigm here, but the best person to speak to about Paradigm is Mindy McTeigue (mindy@paradigmacademy.com).
Over the years, Leverage has, at many points, considered having Paradigm Academy run much more independently. However, there continues to be overlap, especially since the restructure: Paradigm is well-positioned to provide operations support as part of its incubation programme, Paradigm’s training benefits from Leverage’s psychology research, and Leverage staff benefit from Paradigm training. I therefore expect this overlap to continue. Currently, Leverage contracts Paradigm to run their operations while we look for our own operations manager. We also contract some of their trainers to provide training to Leverage staff. Of the team, Geoff Anders and Mindy McTeigue work at both organisations. Mindy, who previously worked at Paradigm, has joined Geoff at Leverage since being promoted to Chief Operating Officer and supports him in management. You can see the current Leverage team on our team page.
Paradigm currently continues to provide individual training and startup incubation. Once more of the groundwork has been laid for Leverage to focus on its new mission, Geoff and Mindy will likely focus more on updating Paradigm’s website and communicating about their work.
3. Our mistakes and what to expect moving forward
Hopefully, the previous section has helped people understand the basics of what Leverage was and will be. In this next section, I’d like to talk about some questions and concerns that people in the EA community have brought up about Leverage, starting with some brief context.
The mistakes discussed here are from Leverage 1.0. While I work at Leverage and have written this on their behalf, I’ve been authorised to speak on behalf of Paradigm here too, making the same commitments moving forward for both Leverage and Paradigm. In this section, I’ll often refer to the team as “we” instead of “they”, even though so far I’ve used “they” when describing the Leverage of the past, which I wasn’t involved in. I chose to write this way because I’m conveying apologies from the entire team, and because I’ll be helping to ensure we keep these commitments moving forward, so this section feels to me like a team effort. Apologies if this becomes confusing.
I don’t expect this section to put an end to all disagreements or settle all concerns. If you’re working on something that’s both important and highly uncertain, there are bound to be disagreements, and some amount of this seems healthy for broadening your perspective and challenging one another to do better. However, we think the EA community is doing important work, and so we don’t want to jeopardise that.
3.1 Leverage and the EA community
Leverage 1.0 started up around the same time as the EA community did and shared similar motivations: a deep commitment to improving the world and a belief that, through careful reasoning, we can do more good. In the past, we supported the growth of the EA community and were involved in EA movement-building projects.
And yet, Leverage has, as a friend recently described it, often seemed like “a bit of a square peg in a round hole in the EA community”. Leverage 1.0 started from different ideas and assumptions. While Leverage is made up of individuals with different views, in general Leverage has much more baseline scepticism of mainstream institutions than I’ve generally found in the EA community; we don’t prioritise Bayesian reasoning when trying to improve our thinking, nor do we tend to use quantitative models as often; and we place much more weight on the importance of understanding individual psychology, group dynamics, and global incentives and power structures. And although improving scientific progress is the kind of high-risk, high-reward bet some EAs do prioritise, Leverage’s plan for it looks very different.
In an ideal world, neighbouring groups like this might have spurred each other on and productively challenged each other’s ideas. But in our not-so-ideal reality, real-life interactions can be messy, and even well-intentioned communication sometimes fails.
Since centralising in Leverage 2.0, we’ve been thinking about our plans and realising we need to do a lot more to communicate our work and engage with external groups effectively. In reflecting on this, we ended up thinking a lot more about the mistakes we made in the past.
Leverage, therefore, wants to apologise for instances where we’ve caused damage, lay out some of the mistakes we have made and set out what neighbouring projects can expect from us moving forward.
We also want to make clear our relationship to the EA community today. While historically we were involved in EA movement-building among our other world improvement projects, and we continue to support any community trying to make the world better, the EA community is not part of our current focus. Our work may be of interest to some people here, we may work with people in the EA community, and we are broadly supportive of the work the EA community is doing; however, neither Leverage nor Paradigm is directly involved in trying to build or promote EA. I expect staff who are also part of the EA community (such as myself) will continue to be interested in supporting EA projects in our spare time, but this won’t be a focus for Leverage or Paradigm as organisations.
3.2 Concerns about Leverage and what to expect from us moving forward
The main mistakes Leverage has made with regard to our relationship with the EA community, which I will try to address, are:
our approach to coordination with other organisations,
how we communicate about our work,
our attitude toward PR and reputation, and
some of our interpersonal interactions with individuals.
There’s also the question of how to assess Leverage’s impact, which I will discuss in the final part of this section.
3.2.1. Coordination with other organisations
In the early days, Leverage 1.0 had very different views on PR and movement-building from others in the EA community. Leverage staff were excited to get more people involved in figuring out how to understand the world and donating to pressing problems in global development. While the opinions of individual Leverage members differed in many ways, it would be fair to say that as a group, we tended to think concerns about branding and risks arising from growing the movement too quickly were overblown. These differences meant Leverage had early strategic disagreements with organisations in the EA community and our coordination attempts were often clumsy and naive.
We think we then later took too adversarial an approach in disagreements with neighbouring organisations. For instance, Leverage leadership concluded that other organisations were not going to prioritise EA movement-building adequately. Instead of engaging in dialogue about the differences, Leverage took unilateral action to try to build the EA movement, running conferences and allying with pro-movement growth EAs.
Sometimes unilateral action is necessary to tackle entrenched powers and incentives that are unresponsive to concerns, but this should not be taken lightly. It’s important to seriously consider all the potential consequences and take such action only as a last resort when you have exhausted other options. While Leverage tried to weigh the consequences of our planned activities and assess the realistic chances of coordination with other organisations, we’ve concluded that we should have continued to reach out and try to engage in collaboration and dialogue, even though earlier attempts at this failed.
Moving forward, EA organisations can expect both Leverage and Paradigm to:
Not run EA movement-building projects. Working directly on growing EA is not our comparative advantage, nor is it any longer part of our focus
Reach out earlier if we end up planning initiatives that might impact the EA community and engage more in dialogue over strategic disagreements
Where the EA community would find this helpful, do more to support their work. For example, connecting individuals we meet to the EA community and providing resources and training if requested by members of the EA community. We wouldn’t expect to do this under an EA brand; we’d just be supporting projects we thought were good for the world.
Be more responsive to concerns from EA community members or organisational leaders. We’ve received a range of feedback over the years. Some of this feedback was not constructive or was hard to engage in dialogue around (e.g. from anonymous posters). However, we realise that we could have worked harder to bridge misunderstandings and work out which feedback was constructive so that we could incorporate it.
If you have further suggestions here, please feel free to add them in the comments or to email me (larissa@leverageresearch.org).
3.2.2. Communication about our work
We know that it hasn’t been easy to understand Leverage’s work in the past. Communication about Leverage has often been sporadic and hard for external audiences to understand, and access to our materials has been restricted, leaving Leverage shrouded in mystery. This confusion has made conversations less productive and our impact less clear.
As I mentioned in the history of Leverage, there was a trade-off between time spent conducting research and time spent communicating it. Because we didn’t invest time early on in communicating about our work effectively, doing so only became harder over time as we built up our models and ontologies. While often this was the right trade-off for Leverage 1.0, where the focus was advancing our ideas, sometimes it wasn’t, and in either case it makes the job of communicating our work moving forward challenging[1].
We also made more general communications errors. In particular, we often made promises to provide further updates on various topics within particular time frames, but then rarely posted the promised updates at all, let alone by the stated deadline. In my experience, this is a common communications mistake but an easily avoidable one[2].
The main communication improvements people can expect from Leverage Research moving forward are:
Clear information about who we are and what we do on our website, including information about our work and a team page
New research related to psychology and early stage science to be shared on the Leverage website moving forward
Some of our content relating to training techniques and self-improvement to be available on the Paradigm Academy website (although likely not until next year)
Content that doesn’t fit under either organisation to be shared as part of Geoff’s essay series on the history of Leverage or published by individuals.
More small events hosted by our staff in the Bay that give people who are interested in our work the opportunity to meet us and ask questions
A responsive point person to whom you can direct inquiries about Leverage’s work (me!)
We will seek to communicate more accurately and clearly about our future work and updates, including our uncertainty. If people are interested in particular updates they can reach out to me[2].
We expect that much of our increased external communication won’t be in EA channels (like the EA Forum), as much of it may not be directly relevant to EA, but we will share research through our own channels for those who are interested.
Paradigm Academy is not currently focused on external communication as much as Leverage is, so I don’t have updates on their communication plans. If you have questions about Paradigm, please contact Mindy.
3.2.3. PR and reputation
Leverage 1.0 has historically undervalued reputation and PR and instead focused more single-mindedly on the achievement of its goals. This contributed to our lack of focus on communicating about our work which, in turn, damaged our reputation in the EA community.
The EA community has had a great deal of success with things like bringing in large funders and working with governments and other institutions. This success has been critical to spreading EA ideas, improving policy, tackling diseases and saving animals from factory farms. Much of this success is attributable to the ways the EA community has carefully managed its reputation. We think that the EA community is doing incredibly important work, and we don’t want to jeopardise that.
We now better understand the considerations other EA organisations had when we first disagreed about movement-building strategy. However, our primary focus is research, and our research focus (early stage science) often involves working with ideas and theories that are untested, unusual and misunderstood in the mainstream.
There’s a delicate line to walk here. The world has significant problems, and it may require revolutionary new ideas to solve them. Entertaining unusual perspectives and exploring neglected areas is vital to generating new ideas and often requires a unique culture. But this can also lead to missing crucial conventional wisdom and putting off potential allies, and can be an easy excuse for poor communication.
Given this tension and because we support the work that EA is doing, we will do more to ensure that the way we’re perceived doesn’t negatively impact the EA community. In particular, Leverage and Paradigm will:
Be more open to feedback on ways we might be adversely affecting the EA community
Actively seek more advice on our PR and communicate with EA organisations earlier where we think our work might impact other groups
Collaborate with EA organisations if we want to do things like present more controversial ideas at EA events, and take more seriously that anyone participating in an EA space might be taken by others to represent EA in some way, regardless of whether or not they are an EA organisation.
3.2.4. Interpersonal interactions
I know some individuals have had interactions with some members of staff at Leverage in the past where they’ve felt dismissed, put down or uncomfortable. Where this has been the case, it has understandably coloured some people’s impression of Leverage as a whole, and we want to apologise for negative experiences people have had.
The kinds of concerns I’ve heard most frequently include:
People feeling like we were judging them on whether or not they were worth collaborating with, or feeling like we were assessing them on whether or not they would be useful, instead of caring about them as people
Leverage staff asking weird and probing questions which might feel particularly unsafe in contexts like interviews or when discussing psychology research
Leverage staff being overconfident when presenting their ideas
Leverage staff being dismissive of other people’s plans, projects or ideas
Leverage staff generally being weird.
Firstly, we want to apologise for any interactions people have had with Leverage 1.0 staff that made them feel uncomfortable, judged or looked down on, or for times people thought we were treating them as instrumental. We do not see people this way, nor do we want to make them feel that way. A big part of our work is about understanding people, caring about their problems and supporting their growth as individuals. We want to help people develop their ideas and their projects, not just because the world needs more people working together to do good, but also because we genuinely care about people.
Moving forward, we want to do a much better job of explaining our ideas and giving a much more accurate impression of how uncertain we are. I can’t promise you won’t end up discussing weird ideas when interacting with us, but we want this to be engaging, not off-putting. Feel free to give us feedback directly when talking to us in the future or, if you prefer, you can email me (larissa@leverageresearch.org) or fill in this form with feedback or concerns.
To wrap up this section, I will share a couple of thoughts that relate to some of the concerns I’ve heard. I don’t want to make excuses for people being unfriendly or making others feel bad in interactions, but this might help people understand Leverage better.
One of the additional adverse effects of our poor public communication is that when Leverage staff interacted with people, those people often didn’t understand our work and had a lot of questions and concerns about it. While this was understandable, I think it sometimes led staff to feel attacked, which I suspect, in some cases, they handled poorly, becoming defensive and perhaps even withdrawing from engaging with people in neighbouring communities. If you don’t build up relationships and discuss updates to your thinking, inferential distance builds up, and it becomes easy to see a distant, amorphous organisation rather than a collection of people.
I think Leverage also struggles with the same challenge the EA community faces when it comes to managing both truth-seeking and individual wellbeing. On the whole, I believe leaders in the EA community do a great job of challenging seemingly mistaken ideas with curiosity and kindness. I’m sure we at Leverage can do better on this dimension.
Speaking purely from my personal experience, I’ve found the Leverage and Paradigm staff to be very welcoming and empathetic. My experience of the Leverage culture is one where it feels exceptionally safe to express ideas and be wrong. This sense of safety has benefited my ability to develop my models and independent thinking. It’s also a place with a strong focus on understanding people and caring about helping them improve. I want to ensure that more people have this experience when interacting with Leverage.
Moving forward, I hope the greater focus on external engagement at Leverage 2.0 will give rise to more opportunities for people to have discussions and generally hang out with the staff at Leverage. I’ll be honest, you should still expect us to be pretty weird sometimes—but we’re also very friendly.
3.2.5. Leverage’s impact
The final concern I’ll discuss is around what impact Leverage has had and whether it has been a good use of resources. I don’t think there are mistakes Leverage has made here beyond the ones discussed above (e.g. communication). Instead, I think this is just a difficult question.
If the concern is about whether there has been rigorous cost-effectiveness analysis of Leverage’s work, the answer is no and, to be honest, I don’t think that framework makes sense for assessing this kind of research.
If the question is instead, “was Leverage 1.0 generally a good use of resources?”, my honest answer is that I don’t know.
Assessing the value of research, especially unpublished research outputs, is a complex problem. Several people in the EA community have faced this challenge when evaluating organisations like MIRI, FHI, and GPI or evaluating career opportunities in AI strategy and governance (which often involves unpublished research). My intuition is that most people without access to insider information about the research organisation or without the technical ability to assess the research should conclude that they don’t know whether a particular research organisation is a good use of resources and, where they need to make some calls, heavily defer to those who do have that information and ability. Therefore, I suspect most people should similarly conclude they don’t know if Leverage was a good use of resources, given the lack of published research or external signs of credibility.
If you did want to make some headway on this question in the Leverage case, my suggestion would be to think about:
Whether or not you believe social science research into understanding people and societies seems especially crucial for world improvement
The degree to which it looks as though Leverage 1.0 was successful in better understanding people and societies through its research.
From my perspective, this kind of social science research does seem both important to many plans for significantly improving the world and generally useful. However, I expect a lot of disagreement about the tractability of the area and its importance relative to other things.
When it comes to how successful Leverage 1.0 was in its research, readers can gain some information by reading Geoff’s essay series, evaluating work we publish in the future, and assessing how useful the training techniques we have developed are.
From the inside, I’m optimistic that Leverage 1.0 was a good use of resources, but with lots of uncertainty, especially around how to assess research in general. I’ve found Leverage’s introspection tools and people models to be directly useful, which makes me think their psychology work is promising. It also appears to me that they have a vast amount of high-quality internal material on understanding the mind and on methods for making progress on challenging research questions. Many of the staff I’ve interacted with at Leverage seem to have detailed models of Leverage 1.0 research areas and seem to have developed impressive skills over their time at Leverage 1.0.
For most people, I don’t think my insider view should substantially change their opinion. Instead, if assessing Leverage’s impact is of particular interest to you, I’d suggest looking for more publicly accessible signs of Leverage’s success or failure in the future and using those to inform your view of whether Leverage’s past work was useful.
3.3 Feedback on how we’re doing
Since I will be taking the lead on Leverage’s engagement with other communities, I want to end this post by strongly encouraging readers to reach out to me if you notice ways that we can do better on any of these dimensions moving forward.
You can email me at larissa@leverageresearch.org or fill in this form.
The form asks for, but does not require, a name. In general, we do prefer non-anonymous feedback, but we are open to receiving anonymous input if it is constructive. Your feedback will provide me with information on whether or not Leverage does improve on the dimensions we’ve laid out here and will help me to work out how we can do better.
Similarly, if you have questions about anything I’ve not been able to cover here or feedback on this post, please feel free to add it in the comments.
Footnotes:
[1] Edited in response to feedback in the comments here. Previously this sentence read:
While often this was the right trade-off for Leverage 1.0 where the focus was advancing our ideas, this makes the job of communicating our work moving forward challenging.
[2] These two additions were made after the post had been published in response to email feedback I received pointing out that I’d forgotten to mention our past promises to provide updates that we didn’t fulfil.
I don’t have a fully-formed gestalt take yet, other than: thanks for writing this.
I do want to focus on 3.2.2 Communication about our work (it’s a very Larissa thing to do to have 3 levels of nested headers 🙂). You explain why you didn’t prioritize public communication, but not why you restricted access to existing work. Scrubbing yourself from archive.org seems to be an action taken not from a desire to save time communicating, but from a desire to avoid others learning about your work. That seems like a pretty big factor in what’s going on here and would be worth mentioning.
[Speaking for myself, not my employer.]
This is especially jarring alongside the subsequent recommendation (3.2.5) that one should withhold judgement on whether ‘Leverage 1.0’ was a good use of resources given (inter alia) the difficulties of assessing unpublished research.
Happily, given the laudable intention of Leverage to present their work going forward (including, one presumes, developments of efforts under ‘Leverage 1.0’), folks who weren’t around at the start of the decade will be able to do this—and those of us who were should find the ~2012 evidence base superseded.
For reference, a version & commentary of some Leverage 1.0 research:
https://rationalconspiracy.com/2014/04/22/the-problem-with-connection-theory/ (a)
When I wrote this comment, I also wrote the following.
I now think you maybe did mean it as i) or ii)? Specifically,
“While often this was the right trade-off for Leverage 1.0 where the focus was advancing our ideas, this makes the job of communicating our work moving forward challenging.”
implies that sometimes it was the right call and sometimes it wasn’t. This is pretty nit-picky, but if you agree it’s not type iii), maybe you could change it to
“While often this was the right trade-off for Leverage 1.0 where the focus was advancing our ideas, sometimes it wasn’t and in either case, this makes the job of communicating our work moving forward challenging.”
Yeah this makes sense, thanks for asking for clarification. The communication section is meant to be a mixture of i) and ii). I think in many cases it was the right decision for Leverage not to prioritise publishing a lot of their research where doing so wouldn’t have been particularly useful. However we think it was a mistake to do some public communication and then remove it, and not to figure out how to communicate about more of our work.
I’m not sure what the best post etiquette is here, should I just edit the post to put in your suggestion and note that the post was edited based on comments?
Thanks for the clarification and tolerating the nitpick. I don’t know that anyone has an etiquette book for this, but I’d put a footnote with the update.[1]
[1] In the fullness of time we’ll have built-in footnotes in our rich-text editor, but for now you can do hacky footnotes like this.
Perfect, thank you. I’ve edited it and added a footnote.
Hi JP,
(Haha, I did wonder about having so many headings, but it just felt so organised that way, you know 😉)
With regards to removing content we published online, I think we hit the obvious failure mode I expect a lot of new researchers and writers run into, which was that we underestimated how time-consuming, but also stressful, posting publicly and then replying to all the questions can be. To be honest, I kind of suspect early and unexpected negative experiences with public engagement led Leverage to be overly sceptical of it being useful and nudged them away from prioritising communicating their ideas.
From what I understand, some of the key things we ended up removing were:
1) content on Connection Theory (CT)
2) a long-term plan document
3) a version of our website that was very focused on “world-saving.”
With the CT content, I don’t think we made sufficiently clear that we thought of CT as a Kuhnian paradigm worth investigating rather than a fully-fledged, literally true-about-the-world claim.
Speaking to Geoff, it sounds like he assumed people would naturally be thinking in terms of paradigms for this kind of research, often discussed CT under that assumption and then was surprised when people mistook claims about CT to be literal truth claims. To clarify, members of Leverage 1.0 typically didn’t think about CT as being literally true as stated, and the same is true of today’s Leverage 2.0 staff. I can understand why people got this impression from some of their earlier writing though.
This confusion meant people critiqued CT as having insufficient evidence to believe it upfront (which we agree with). While the critiques were understandable, the lack of upfront evidence wasn’t a reason to believe that the research path wasn’t worth following, and we struggled to get people to engage with CT as a potential paradigm. I think the cause of the disagreement wasn’t as clear to us at the time, which made our approach challenging to convey and discuss.
With the long-term planning documents, people misinterpreted the materials in ways that we didn’t expect and hadn’t intended (e.g. as being a claim about what we’d already achieved or as a sign that we were intending something sinister). It seems as though people read the plan as a series of predictions about the future and fixed steps that we were confident we would achieve alone. Instead, we were thinking of it as a way to orient on the scale of the problems we were trying to tackle. We think it’s worth trying to think through your very long-term goals to see the assumptions that are baked into your current thinking and world model. We expect solving any problems on a large scale to take a great deal of coordinated effort and plans to change a lot as you learn more.
We also found that a) these kinds of things got a lot more focus than any of our other work, which distorted people’s perceptions of what we were doing, and b) people would frequently find old versions online and then react badly to them (e.g. becoming upset, confused or concerned) in ways we found difficult to manage.
In the end, I think Leverage concluded the easiest way to solve this was just to remove everything. I think this was a mistake (especially as it only intensified everyone’s curiosity) and it would have been better to post something explaining this problem at the time, but I can see why it might have seemed like just removing the content would solve the problem.
(totally unrelated to the actual post but how did you include an emoticon JP?)
⌘-^-Space, gets you emoji and unicode on any text field on a Mac. I assume other operating systems have their own versions.
Thanks JP and Edoarad! 😄
Winky-. on Windows (that’s the Windows key + dot) 😊
Why is Leverage working on psychology? What is it hoping to accomplish?
Hi casebash,
We are conducting psychology research based on the following assumptions:
1) psychology is an important area to understand if you want to improve the world
2) it is possible to make progress in understanding the human mind
3) the current field of psychology lags behind its potential
4) part of the reason psychology is lagging behind its potential is that it has not completed the relevant early stage science steps
5) during Leverage 1.0, we developed some useful tools that could be used by academics in the field to make progress in psychology.
Assumptions 2) and 5) are based on our experience in conducting psychology research as part of Leverage 1.0. The next step will be to test these assumptions by seeing if we can train academics to use a couple of the introspection tools we developed and have them use those tools to conduct academic research.
Assumptions 3) and 4) are something we have come to believe from our investigations so far into existing psychology research and early stage science. We are currently very uncertain about this and so further study on our part is warranted.
What we are trying to accomplish is to further the field of psychology, initially by providing tools that others in the field can use to develop and test new theories. The hope is that we might make contributions to the field that would help it advance. Contributing to significant progress in psychology is, of course, a very speculative bet but, given our views on the importance of understanding psychology, one that still seems worth making.
I hope that helps. Let me know if you have further questions.
Greater knowledge of psychology would be powerful, but why should we expect the sign to be positive, instead of say making the world worse by improving propaganda and marketing?
Hi Casebash,
Thank you for the question; this is an important topic.
We believe that advances in psychology could make improvements to many people’s lives by helping with depression, increasing happiness, improving relationships, and helping people think more clearly and rationally. As a result, we’re optimistic that the sign can be positive. Our past work was primarily focused on these kinds of upsides, especially self-improvement; developing skills, improving rationality, and helping people solve problems in their lives.
That said, there are potential downsides to advancing knowledge in a lot of areas, which are important to think through in advance. I know the EA community has thought about some of the relevant areas such as flow-through effects and how to think about them (e.g. the impact of AMF on population and the meat-eater problem) and cases where extra effort might be harmful (e.g. possible risks to AI safety from increasing hardware capacities and whether or not working on AI safety might contribute to capabilities).
Leverage 1.0 thought a lot about the impact of psychology research and came to the view that sharing the research would be positive. Evaluating this is an area where it’s hard to build detailed models, though, so I’d be keen to learn more about EA research on these kinds of questions.
Thank you for writing this. I was very curious about Leverage and I’m excited to see more clearly what you are going for.
Some off-the-bat skepticism: it seems a priori that the research on early stage science is motivated by early stage research directions and tools in psychology. I’m wary of motivated reasoning when coming to conclusions regarding the resulting models in early stage science, especially as it seems to me that this kind of research (like historical research) is very malleable and can be inadvertently argued toward almost any conclusion one is initially inclined to.
What’s your take on it?
Also, I’m not quite sure where you put the line on what counts as early stage research. To take some familiar examples: are Einstein’s theory of relativity, Turing’s cryptanalysis of the Enigma (with new computing tools), Wiles’s proof of Fermat’s Last Theorem, EA’s work on longtermism, and current research on string theory early stage scientific research?
Hi edoarad,
Thanks for the question. This seems like the right kind of thing to be skeptical about. Here are a few thoughts.
First, I want to emphasize that we hypothesize that there may be a pattern here. Part of our initial reasoning for thinking that the hypothesis is plausible comes from both the historical case studies and our results from attempting early stage psychology research, but it could very well turn out that science doesn’t follow phases in the way we’ve hypothesized or that we aren’t able to find a single justified, describable pattern in the development of functional knowledge acquisition programs. If this happens we’d abandon or change the research program depending on what we find.
I expect that claims we make about early stage science will ultimately involve three justification types. The first is whether we can make abstractly plausible claims that fit the fact pattern from historical cases. The second is that our claims will need to follow a coherent logic of discovery that makes sense given the obstacles that scientists face in understanding new phenomena. Finally, if our research program goes well, I expect us to be able to make claims about how scientists should conduct early stage science today and then see whether those claims help scientists achieve more scientific progress. The use of multiple justification types makes it more difficult to simply argue for whatever conclusion one is already inclined towards.
Finally, I should note that the epistemic status of claims made on the basis of historical cases is something of an open question. There’s an active debate in academia about the use of history for reaching methodological conclusions, but at least one camp holds that historical cases can be used in an epistemically sound way. Working through the details of this debate is one of the topics I’m researching at the moment.
I don’t yet have a precise answer to the question of which instances of scientific progress count as early stage science although I expect to work out a more detailed account in the future. Figuring out whether a case of intellectual progress counts as early stage science involves both figuring out whether it is science and then figuring out whether it is early stage science. I probably wouldn’t consider Wiles’s proof of Fermat’s last theorem and the development of cryptography as early stage science because I wouldn’t consider mathematical research of this type as science. Similarly, I probably wouldn’t consider EA work on longtermism as early stage science because I would consider it philosophy instead of science.
In terms of whether a particular work of science is early stage science, in our paper we gesture at the characteristics one might look for by identifying the following cluster of attributes:
I don’t know enough about the details of how Einstein arrived at his general theory of relativity to say whether it fits this attribute cluster, but it appears to be missing the experimentation and improvement of measurement tools, and the disagreements among researchers. Similarly, while there is significant disagreement among researchers working on theories in modern physics, I think there is substantial agreement on which phenomena need to be explained, how the relevant instruments work, and so on.
Great, this helps me understand my confusion regarding what counts as early stage science. I come from a math background, and I feel that the cluster of attributes above represents a lot of how I see some of the progress there. There are clear examples where the language, intuitions and background facts are understood to be very far from grasping an observed phenomenon.
Instruments and measurement tools in math can be anything from the intuitions of experts, to familiar simplifications, to technical tools that help (graduate students) tackle subcases (which would themselves be considered “observations”).
Different researchers may be in complete disagreement on what are the relevant tools (in the above sense) and directions to solve the problem. There is a constant feeling of progress even though it may be completely unrelated to the goal. Some tools require deep expertise in a specific subbranch of mathematics that makes it harder to collaborate and reach consensus.
So I’m curious if intellectual progress which is dependent on physical tools is really that much different. I’d naively expect your results to translate to math as well.
This is an interesting point, and it’s useful to know that your experience indicates there might be a similar phenomenon in math.
My initial reaction is that I wouldn’t expect models of early stage science to straightforwardly apply to mathematics because observations are central to scientific inquiry and don’t appear to have a straightforward analogue in the mathematical case (observations are obviously involved in math, but the role and type seems possibly different).
I’ll keep the question of whether the models apply to mathematics in mind as we start specifying the early stage science hypotheses in more detail.
Given Leverage 2.0’s focus on scientific methods, is it planning to engage with folks working on metascience and/or progress studies?
Hey Milan,
I’m Kerry and I’m the program manager for our early stage science research.
We’ve already been engaging with some of the progress studies folks (we’ve attended some of their meetups and members of our team know some of the people involved). I haven’t talked to any of the folks working on metascience since taking on this position, but I used to work at the Arnold Foundation (now Arnold Ventures) who are funders in the space, so I know a bit about the area. Plus, some of our initial research has involved gaining some familiarity with the academic research in both metascience and the history and philosophy of science and I expect to stay up to date with the research in these areas in the future. There was also a good meetup for people interested in improving science at EAG: London this year and I was able to meet a few EAs who are becoming interested in this general topic.
I expect to engage with all of these groups more in the future, but will personally be prioritizing research and laying out the intellectual foundations for early stage science first before prioritizing engaging with nearby communities.
Thanks for writing this! I’m really glad Leverage has decided to start sharing more.
Thanks Jeff :-) I hope it’s helpful.
Who are Leverage 2.0’s main donors? Are they different from Leverage 1.0’s main donors?
Hi Milan, we’re still deciding what, if any, information it seems appropriate to share about our donors publicly. We do expect some Leverage 1.0 donors to continue to support Leverage 2.0. We will also soon start fundraising for Leverage 2.0 and will probably engage with communities that are interested in areas like early stage science and meta science.
Looking over the website I noticed Studying Early Stage Science under “Recent Research”. I haven’t read it yet, but will!
Thoughts, now that I’ve read it:
This sort of thing, where you try things until you figure out what’s going on, starting from a place of pretty minimal knowledge, feels very familiar to me. I think a lot of my hobby projects have worked this way, partly because I often find it more fun to try things than to find out what people already know about them. This comment thread, trying to understand what frequencies forked brass instruments make, is an example that came to mind several times while reading the post.
Not exactly the same, but this also feels a lot like my experience with making new musical instruments. With an established instrument in an established field the path to being really good generally looks like “figure out what the top people do, and practice a ton,” while with something experimental you have much more of a tradeoff between “put effort into playing your current thing better” and “put effort into improving your current thing”. If you have early batteries or telescopes or something you probably spend a lot of time with that tradeoff. Whereas in mature fields it makes much more sense for individuals to specialize in either “develop the next generation of equipment” or “use the current generation of equipment to understand the world”.
How controversial is the idea that early stage science works pretty differently from more established explorations, and that you need pretty different approaches and skills? I don’t know that much history/philosophy of science but I’m having trouble telling from the paper which of the hypotheses in section 4 are ones that you expect people to already agree with, vs ones that you think you’re going to need to demonstrate?
One question that comes to mind is whether there is still early stage science today. Maybe the patterns that you’re seeing are all about what happens if you’re very early in the development of science in general, but now you only get those patterns when people are playing around (like I am above)? So I’d be interested in the most recent cases you can find that you’d consider to be early-stage.
And a typo: “make the same observers with different telescopes” should be “make the same observations with different telescopes”.
Hi Jeff,
Thanks for your comment :) I totally agree with Larissa’s response, and also really liked your example about instrument building.
I’ve been working with Kerry at Leverage to set up the early stage science research program by doing background reading on the academic literature in history and philosophy of science, so I’m going to follow up on Larissa’s comment to respond to the question raised in your 3rd bullet point (and the 4th bullet in a later comment, likely tomorrow).
Large hedge: this entire comment reflects my current views—I expect my thinking to change as we keep doing research and become more familiar with the literature.
“How controversial is the idea that early stage science works pretty differently from more established explorations, and that you need pretty different approaches and skills? I don’t know that much history/philosophy of science but I’m having trouble telling from the paper which of the hypotheses in section 4 are ones that you expect people to already agree with, vs ones that you think you’re going to need to demonstrate?”
This is a great question. The history and philosophy of science literature is fairly large and complicated, and we’ve just begun looking at it, but here is my take so far.
I think it’s somewhat controversial that early stage science works pretty differently from more established explorations and that you need pretty different approaches and skills. It’s also a bit hard to measure, because our claims slightly cross-cut the academic debate.
Summary of our position
To make the discussion a little simpler, I’ve distilled down our hypothesis and starting assumptions to four claims to compare to positions in the surrounding literature[1]. Here are the claims:
(1) You can learn[2] about scientific development and methodology by looking at history.
(2) Scientific development in a domain tends to go through phases, with distinct methodologies and patterns of researcher behavior.
(3) There is an early phase of science whose methods are less well established and very different from those of the later phases of science (following on claim (2)).
(4) We can study the early phase of science to figure out how these methods work and use them to make methodological recommendations (following on (1) and (3)).
We take (1) as an assumption, whereas (2)-(4) are elements of our starting hypothesis. As a result, we’ll aim to defend (2)-(4) in the course of our research.
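To make the dependency structure among the claims explicit, here is a small schematic of my own (it isn’t in the paper; “rests on” is just shorthand for the “following on” relations above):

\[
(3)\ \text{rests on}\ (2), \qquad (4)\ \text{rests on}\ (1)\ \text{and}\ (3),
\]
\[
\text{so}\quad \neg(2)\;\Rightarrow\;\neg(3)\;\Rightarrow\;\neg(4) \quad\text{and}\quad \neg(1)\;\Rightarrow\;\neg(4).
\]

Note that these are only necessary-condition arrows: accepting (1), (2), and (3) doesn’t force you to accept (4), which is how Kuhn (on my reading below) can grant the first three claims while rejecting the fourth.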
I’ve included a super brief overview of my understanding of the literature below and our position with respect to it, but I worry/expect that I didn’t summarize the perspectives enough to be particularly useful for people who haven’t read Kuhn, Popper, or responses to them. As a result, I’ve tried to answer the original question without going into detail on the literature and then gone into more detail below.
Summary of controversiality of our position in the literature
I’d say that claims (1), (2), and (3) are typically relatively uncontroversial to people who agree with Thomas Kuhn’s ideas in The Structure of Scientific Revolutions. Kuhn’s work is accepted by many people in academia and the mainstream, but there are also a lot of people in academia who dispute it. Of the people who disagree with Kuhn, most disagree with (1), and then (2) (and therefore (3) and (4)). I find their objections to (2) more confusing than their objections to (1), and will need to look into it more. Kuhn himself disagrees with claim (4).
We want our research to (at least initially) avoid getting into the debate directly around (1) (the use of historical case studies to learn about scientific development and make methodological conclusions), largely because it’s a bit outside of scope for us. I expect our work to lose some people, including some academics, due to not believing that we can learn about scientific methodology from history. That said, we’re hopeful that as we try to look at history to learn about methodology, we’ll be able to tell if the entire endeavor doesn’t make sense. We’ll try to be clean about distinguishing what our investigation is indicating we can learn from history and whether that should be generalized. If it turns out we need to investigate and discuss (1) more directly, we’ll do that later. So, for now, we will not justify (1) directly.
Due to all the disagreement around (2), (3), and (4), we’re going to take those as hypotheses that we’ll need to demonstrate and justify.
Rather than justifying (2) directly off the bat (e.g., taking a random sampling of cases throughout history and comparing them), we’ll start with (4), building a methodological model of how discovery works, and then see whether that model predicts that different methods would be used later, which would imply phases of scientific development. If so, we’ll then expand our investigation into cases likely to fall into later phases of science, to see if the model’s predictions hold. We’re hoping that this method will supplement Kuhn’s way of investigating phases of scientific development, let us build a narrower (and hopefully more checkable) initial model, and let us focus most on (4), which has had the least academic attention.
Hope that helps!
Details of our initial take on the literature
This section describes my take on the debate that I’ve situated us with respect to above, but (mostly for length) doesn’t go into detail on the researcher positions (e.g., what paradigms are or what Popper argues about falsification). For an initial overview of positions in the space, I recommend checking out: https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions, https://en.wikipedia.org/wiki/Falsifiability, https://www.jstor.org/stable/686690, and http://philsci-archive.pitt.edu/15876/1/A%20New%20Functional%20Approach%20to%20Scientific%20Progress%20-%20preprint.pdf. (The latter aren’t designed as overviews, but I’ve found them helpful to get a sense of the discussion in philosophy of science.)
The specific academic research area most relevant to our work that I’ve found so far is a body of research at the intersection of the history and philosophy of science, primarily produced between 1960 and 1990. A lot of the work in the field since then is still centered on and generated by this debate. Important figures in the debate include Kuhn, Lakatos, Feyerabend, Smart, and Popper. I expect to find discussion in other areas relevant to what we’re doing, but I’ve mostly investigated this debate so far, as it seemed most central.
Of this work, our hypothesis is most similar to Kuhn’s thinking in The Structure of Scientific Revolutions. I read Kuhn as saying and implying[3]: you can learn about scientific development and methodology by investigating history, there are phases of scientific development, and some phases work differently than other phases with respect to the methods used and researcher behavior.
Kuhn’s ontology differs from ours a bit. In Kuhn’s work, the phases he thinks are different from established explorations are “pre-science” and the period after “anomaly” and “crisis.” I expect our early stage science hypothesis to apply at least in what Kuhn calls “pre-science,” but also possibly at other points in the development of fields (e.g., in some cases after “anomaly” and “crisis”). I take this to mean that Kuhn agrees with a narrower claim than (3). It is also possible that examining the different ontologies will turn up further disagreements.
The clearest disagreement is with respect to (4). Kuhn is quoted (both in his book and verbally) as saying that he considered the methodology used in pre-science to be mysterious, possibly not uniform, and (implied) not fruitful to study. I take him to be saying that there is a distinction between the methodology used in phases of science, but we can’t understand how the earliest phases of discovery work mechanistically. This means he disagrees with (4).[4]
So, on this reading, Kuhn agrees with (1), (2), and parts of (3) of our claims above, but (at least importantly) disagrees with (4). A significant portion of academics (and mainstream audiences) agree with and respect Kuhn, so in that sense, a bunch of our hypothesis is less controversial.
However, lots of people disagree with Kuhn. The best way I’ve found to explain this so far centers around Karl Popper and falsifiability.
Specifically, because of work done by Popper around falsifiability, there’s a lot of debate about claim (1) listed above—whether history can be used to make claims about how science should, does, or has worked. Since Kuhn wrote his book, his methodology has become even more controversial due to questions about the falsifiability of his conclusions. There’s a sizeable contingent of people in the history and philosophy of science who thus don’t use historical cases to reach methodological conclusions and who argue against methodologies that do.
Also, Popper’s views on falsification have created object-level skepticism about Kuhn’s view that science works on the basis of paradigms. You could imagine Popper saying about Kuhn: “Normatively speaking, good science must avoid the pitfalls of unfalsifiable claims, and paradigms don’t work that way, therefore Kuhn is wrong in his description of history or historical science wasn’t making scientific progress.” I want to hedge here that I find interpreting Popper and his effect on surrounding science very difficult, so this interpretation is somewhat likely to be wrong[5]. This debate has spawned many other different interpretations of how to measure, describe, and chunk scientific progress (see Laudan’s functionalism and Lakatos’ Research Programmes, as examples).
As a result of the object-level debate on paradigms and the methodological debate about history, some researchers don’t affirm phases of science (either because they think we can’t draw conclusions like that from history or because they disagree on the object level), and others agree that there are phases but dispute the validity of using Kuhnian paradigms to demarcate them. This means claim (2) isn’t accepted enough that we can take it and run with it. Because claims (3) and (4) rest on claim (2), we’ll need to go into (2), (3), and (4).
—
Notes:
[1] Our current hypothesis actually has six claims and doesn’t include starting assumption (1). I’ve found the version included in this comment simpler to talk about, though less precise and detailed. Check out the paper for the full hypothesis.
[2] This is overly general, sorry about that. The debate on what can be learned from history (descriptive and normative) is complicated, and I want to largely skip it for now. There is a bit more information in the section below on details of the surrounding literature, but not much. We expect to later write a paper situating us in the surrounding literature, which should clarify this.
[3] It’s a bit tricky because he published work in the 1990s that some philosophers think rescinded his previous views and others think did not. This might mean he back-tracked on (2), though I currently don’t interpret his comments this way.
[4] Kuhn also believes in paradigm incommensurability, which we don’t affirm and don’t want to get into off the bat.
[5] A lot of work has gone into trying to square Popper and Kuhn’s perspectives by arguing that people have misinterpreted either Kuhn or Popper. People differ widely in how sophisticated a claim they view Popper as making about falsification, and how ambiguous they take Kuhn’s description of paradigms to be. Lakatosian Research Programmes are a potential example of a way to square paradigms and falsifiability.
Hi Jeff,
Thanks for taking the time to check out the paper and for sending us your thoughts.
I really like the examples of building new instruments and figuring out how that works versus creating something that’s a refinement of an existing instrument. These seem very illustrative of early stage science.
My guess is that the process you were using to work out how your forked brass works feels similar to how conducting early stage science might feel. One thing that stood out to me was that someone else trying to replicate the instrument found, if I understood correctly, that they could only do so with much longer tubes. That person then theorised that perhaps the mouth and diaphragm of the person playing the instrument have an effect. This is reminiscent of the problems with Galileo’s telescope and the difference in people’s eyesight.
Another thought this example gave me is how video can play a big part in today’s early stage science, in the same way that demonstrations did in the past. It’s much easier to demonstrate to a wide audience that you really can make the sounds you claim with the instrument you’re describing if they can watch a video of it. If all people had was a description of what you had built, but they couldn’t create the same sound on a replica instrument, they might have been more sceptical. Being able to replicate the experiment will matter more in areas where the claims made are further outside of people’s current expectations. “I can play these notes with this instrument” is probably less unexpected than “Jupiter has satellites we hadn’t seen before and I can see them with this new contraption”. This is outside the scope of our research; it’s just a thought prompted by the example.
I’ve asked my colleagues to provide an answer to your questions about how controversial the claim that early stage science works differently is and whether it seems likely that there would still be early stage science today. I believe Mindy will add a comment about that soon. We’ll also amend the typo, thanks for pointing that out!
“One question that comes to mind is whether there is still early stage science today. Maybe the patterns that you’re seeing are all about what happens if you’re very early in the development of science in general, but now you only get those patterns when people are playing around (like I am above)? So I’d be interested in the most recent cases you can find that you’d consider to be early-stage.”
This is also a great question.
It is totally possible that early stage science occurred only in the past, and science as a whole has developed past it. We talked to a number of people in our network to try to gather plausible alternatives to our hypothesis about early stage science, and this is one of the most common ones we found. I’m currently thinking of this as one of the possible views we’ll need to argue against or refute for our original hypothesis to hold, as opposed to a perspective we’ve already solidly eliminated.
On recent past cases:
If you go back a bit, there are lots of plausible early stage science success cases in the late 1800s and early 1900s. The study of radiation is a potential case in this period with some possible indicators of early stage methods. This period is arguably not recent enough to refute the “science as a whole has moved past early stage exploration” hypothesis, so I want to seek out more recent examples in addition to studying these.
To get a better answer here, I’ll want us to look more specifically at the window between 1940 and 2000, which we haven’t looked at much so far—I expect it will be our best shot at finding early stage discoveries that have already been verified and accepted, while still being recent.
On current cases:
Finding current cases or cases in the more recent past is trickier. For refuting the hypothesis you laid out, we’d be most interested in finding recent applications of early stage methods that produced successful discoveries. Unfortunately, it can be hard to identify these cases, because when the early research is still happening, it’s often still unclear if it’s on track to being successful.
That said, we think it is possible to identify areas that are potentially early stage science. This is a pretty different activity from looking at more confirmed success cases, but it’s something we’re looking into.
Leverage just released a working paper, “On Intention Research”. From the post: