Depends what you mean by median core EA, but defining it as “someone we could bump into at EAG multiple years in a row”, I would say:
We think there are limits to what desktop research can uncover, especially in more neglected areas where evidence is scarcer. Therefore, we time-cap the stages of our research process and put a lot of attention into making it increasingly efficient over time (e.g., experimenting on and evaluating the research process itself and iteratively improving it). We think that some parts of research & strategy work will have to be done after a charity gets started, since the charity will be in a better position to gather information, design quick experiments with fast feedback loops, and iterate. In general, we make sure our research can be used to start a high-impact charity within months, and we sometimes deliberately do not provide answers to all questions. As a result, we tend to take more risks.
We are less focused on longtermism, and prioritize global well-being (global health and development, animal welfare, etc.) based on epistemic views the team shares.
I think there is a cluster of beliefs related to taking moral uncertainty seriously, and to the team's interest in greater cause diversity and in actively exploring candidates for Cause X.