Could you say more about which fields / career paths you have in mind?
No, I don't have particular fields or career paths in mind. But there are some strong reasons for reaching out to people who already have, or are on a good track to having, impact in a field or career path we care about. These people will need a lot less training to be able to contribute, and they will already have been selected for being able to contribute to the field.
The issue that people point to is that it seems hard to change people's career plans or research agendas once they are already established in a field; students, for example, are much more movable. I think this is true, but I also think we haven't worked hard enough to find ways of doing it. For example, we currently have a problem where someone with over a decade of experience might not be comfortable in EA groups, because those groups tend to skew very young.
Things we could try more:
Find senior researchers working on topics adjacent to those that seem important and who seem interested in the questions we’re asking. Then get them involved by connecting them with an interesting research agenda, excellent collaborators, and potentially funding. This is something that a lot of the academic institutions in the longtermist space (which I know better than other EA causes) are trying out. I’m excited to see how it goes.
Introduce more specialist community building, e.g. by hosting more conferences on specific EA-related topics, where researchers already involved in a field can see how their research fits in (this is something GPI seems to be doing a good job of).
You already give some examples later but, again, which fields do you have in mind?
Some categories that spring to mind:
Collecting important data series. One example is Ben Todd's recent work on retention in the EA movement. Another is data on people's views on AI timelines. I'm part of a group working on a follow-up to the Grace et al. 2018 survey, where we've resurveyed people who answered these questions in 2016. To my knowledge, this is the first time the same people have been asked the same HLMI timeline question twice, with some time in between, even though surveys of people's beliefs on the question go back to around 2005 (link). (See the sketch after this list for the kind of within-person comparison this makes possible.)
Doing rigorous historical case studies. These tend to be fairly informative to my worldview (for better or worse), but I don't know of anyone within the EA community who spends a significant amount of time on them.
Thoroughly trying to understand an important actor, e.g. becoming a China specialist with deep understanding of longtermist or animal welfare issues.
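On the AI timelines resurvey: purely as an illustration, and not a description of our actual analysis pipeline, here is a minimal sketch of what comparing the two waves looks like once responses are matched by respondent. The file names, column names, and the second wave's year are all hypothetical placeholders.

```python
# Illustrative sketch only: comparing the same respondents' HLMI timeline
# estimates across two survey waves. All file and column names are hypothetical.
import pandas as pd

# Each wave: one row per respondent, with an anonymised ID and their
# estimated year of a 50% chance of HLMI.
wave_2016 = pd.read_csv("wave_2016.csv")  # columns: respondent_id, hlmi_year_50pct
wave_2022 = pd.read_csv("wave_2022.csv")  # same columns, resurveyed respondents

# Keep only people who answered in both waves, so changes are within-person.
paired = wave_2016.merge(wave_2022, on="respondent_id", suffixes=("_2016", "_2022"))

# Positive values mean the respondent's estimate moved later (longer timelines);
# negative values mean it moved earlier.
paired["shift_years"] = paired["hlmi_year_50pct_2022"] - paired["hlmi_year_50pct_2016"]

print(f"Paired respondents: {len(paired)}")
print(f"Median shift (years): {paired['shift_years'].median():+.1f}")
print(f"Share whose timelines shortened: {(paired['shift_years'] < 0).mean():.0%}")
```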
Do you think there are any systemic reasons for why certain areas of research aren’t considered to be ‘interesting’ in EA / longtermism (assuming those are the people you have in mind here)?
I think one systematic factor is that people are excited by work that is conceptually hard. A lot of people in the community have philosophical, mathsy minds, which means that e.g. empirical research gets left by the wayside. It's not that this work is in any way inherently uninteresting.