Great to see this!
My 2c on what research I and others like me would find useful from groups like this:
Overviewing empirical and planning-relevant considerations (rather than philosophical theorizing).
Focusing on obstacles and major events on the path to “technological maturity”, i.e. risky or transformative techs.
Investigating specific risky and transformative techs in detail. FHI has done a little of this, but it is very neglected on the margin. Scanning microscopy for neural tissue, invasive brain-computer interfaces, surveillance, brain imaging for mind-reading, CRISPR, genome synthesis, GWAS in areas of psychology, etc.
Helping us understand AI progress. AI Impacts has done a bit of this, but they are tiny. It would be really useful to have a solid understanding of the growth of capabilities, funding, and academic resources in a field like deep learning. How big is the current bubble compared to previous ones, et cetera.
Also, in its last year, GPP largely specialized on tech and long-run issues. This meant it did a higher density of work on prioritization questions that mattered. Prima facie, this and other reasons would also make Oxford Prioritization Project want to specialize on the same.
Lastly, you’ll get more views and comments if you use a (more beautiful) Medium blog.
Happy to justify these positions further.
Good luck!
Hey Ryan, I’d be particularly interested in hearing more about your reasons for your first point (about theoretical vs. empirical work).
Sure. Here are some reasons I think this:
Too few EAs are doing object-level work (excluding donations), and this can be helped by doing empirical research around possible actions. One can note that there were not enough people interested in starting ventures for EAV, and that newbies are often at a loss to figure out what EA does apart from philosophize. This makes it hard to attract people who are practically competent, such as businesspeople and scientists, and to overcome our philosopher-founder effect. From the standpoint of running useful projects, I think the most useful work would be business plans and research agendas, followed by empirical investigations of issues, followed by theoretical prioritization, followed by philosophical investigations. However, it seems to me that most people are working in the latter categories.
For EAs who are actually acting, actions are more easily swayed by empirical research. Although most people working on high-impact areas were brought there by theoretical reasoning, their ongoing questions are more concrete. For example, in AI, I wonder: To what extent have concerns about edge-instantiation and incorrigibility been borne out in actual AI systems? To what extent has AI progress been driven by new mathematical theory, rather than empirical results? What kind of CV do you need to participate in governing AI? What can we learn about this from the case of nuclear governance? Answers here would help people prioritize much more than, for example, philosophical arguments about the different reasons for working on AI as compared to immigration.
Empirical research is easier to build on.
One counterargument is that perhaps these action-oriented EAs have too-short memories. Since their previous decisions relied on theory from people like Bostrom, shouldn’t we expect the same of their future decisions? There are two rebuttals. First, theoretical investigations are especially dependent on the talent of their authors. I would not argue that people like Bostrom (if we know of any) should stop philosophizing about deeply theoretical issues, such as infinite ethics or decision theory. However, that research must be supported by many more empirically-minded investigators. Second, there are reasons to expect the usefulness of theoretical investigations to decrease relative to empirical research over time, as the important insights are harvested, people start implementing plans, and plausible catastrophes get nearer.
I guess you’d get more shares, views, and hence comments on a Medium blog, even accounting for a small inconvenience from signup. Traffic is almost all through sharing nowadays: e.g. the EA Forum gets 70% of referrals from Facebook, >80% if you include other social media, and >90% if you include other blogs.
The proposal would not require embedding anything inside a Squarespace site. You can just put the blog on a subdomain with the right logos and link back to the main page, as in the recent EA example of https://blog.ought.com/
The name “Oxford Prioritisation Project” has an unhelpful acronym collision :)
Do you have a standard abbreviated form that avoids it? Maybe OxPri, following the website address?
edit: I’ve found this issue addressed in other comments, and the official answer is apparently “oxprio”.
Thus the website, oxpr.io. “OxPrio” is normally written with a capital ‘O’ and ‘P’ too.