Thanks for this post, I found it interesting.

One thing I want to push back on is the way you framed applied vs fundamental research and how to decide how much to prioritise fundamental research. I think the following claims from the post seem somewhat correct but also somewhat misleading:
The second framing is more focused on the intrinsic value of knowledge:
"Better science is science that more effectively improves our understanding of the universe."
This might appear closer to the approach of fundamental research, where the practical usefulness of eventual results is sometimes unclear.
[...]
Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained.
[...]
I'm leaning towards some sort of pragmatic compromise here where the ultimate aim is the production of value in the welfarist sense, while "understanding the universe" remains an important instrumental goal. Even if we cannot predict the usefulness of new knowledge before we obtain it, it seems reasonable that advancing our understanding of the universe has a lot of potential to improve lives. How much we should prioritize fundamental research where there is not yet any apparent societal usefulness of the results is then an issue of balancing exploration and exploitation.
I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that "understanding the universe" can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much "understanding the universe" helps and whether it instead harms depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist).
So I wouldn't frame it primarily as exploration vs exploitation, but as trying to predict how useful/harmful a given area of fundamental research (or fundamental research by a given actor) will be. And, crucially, that prediction need not be solely based on detailed, explicit ideas about what insights and applications might occur and how; it can also incorporate things like reference class forecasting. Quick, under-justified examples:
Fundamental research in some areas of physics and biology has seemed harmful in the past, so further work in those areas may be more concerning than work in an average area.
Meanwhile, fundamental-ish research by Nick Bostrom has seemed to produce many useful insights in the past, which we've gradually developed better ideas about how to actually apply, so further work of that sort by him may be quite useful, even if it's not immediately obvious what insights will result or how we'll apply them.

(See also Should marginal longtermist donations support fundamental or intervention research?)
And "Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained" reminds me of critiques that utilitarianism would actually be counterproductive on its own terms, because constantly thinking like a utilitarian would be crippling (or whatever). But if that's true, then utilitarianism just wouldn't recommend constantly thinking like a utilitarian.
Likewise, if it is the case that "Steering science too much [based on explicitly predictable-in-advance paths to] societal gains might be counterproductive", then a sophisticated approach to achieving societal gains just wouldn't actually recommend doing that.

(On those points, I recommend Act utilitarianism: criterion of rightness vs. decision procedure and Naive vs. sophisticated consequentialism.)
I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that "understanding the universe" can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much "understanding the universe" helps and whether it instead harms depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist).
So I wouldn't frame it primarily as exploration vs exploitation, but as trying to predict how useful/harmful a given area of fundamental research (or fundamental research by a given actor) will be. And, crucially, that prediction need not be solely based on detailed, explicit ideas about what insights and applications might occur and how; it can also incorporate things like reference class forecasting.
My thought is that the exploration vs exploitation issue remains, even if we also attempt to favour the areas where progress would be most beneficial. I am not really convinced that it's possible to make very good predictions about the consequences of new discoveries in fundamental research. I don't have a strong position/belief regarding this, but I'm somewhat skeptical that it's possible.
Thanks for the reading suggestions, I will be sure to check them out. If you think of any other reading recommendations supporting the feasibility of forecasting consequences of research, I would be very grateful!
And "Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained" reminds me of critiques that utilitarianism would actually be counterproductive on its own terms, because constantly thinking like a utilitarian would be crippling (or whatever). But if that's true, then utilitarianism just wouldn't recommend constantly thinking like a utilitarian.
Likewise, if it is the case that "Steering science too much [based on explicitly predictable-in-advance paths to] societal gains might be counterproductive", then a sophisticated approach to achieving societal gains just wouldn't actually recommend doing that.
This is more or less my conclusion in the post, even if I don't use the same wording. The reason why I think it's worth mentioning potential issues with a (naïve) welfarist focus is that if I were to work on science reform and only mentioned the utilitarian/welfarist framing, I think this could come across as naïve, or perhaps as opposed to fundamental research, and that would make discussions unnecessarily difficult. I think this is less of a problem on the EA Forum than elsewhere 🙂
Addition: When reading your post Should marginal longtermist donations support fundamental or intervention research?, I realize that we maybe draw the line a bit differently between applied and fundamental research. The examples you give of fundamental research there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research, I imagine things more like research on elementary particles or black holes. This difference could explain why we might think differently about whether it's feasible to predict the consequences of fundamental research.