I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that “understanding the universe” can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much “understanding the universe” helps (and whether it instead harms) depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist).
So I wouldn’t frame it primarily as exploration vs exploitation, but as trying to predict how useful/harmful a given area of fundamental research—or fundamental research by a given actor—will be. And, crucially, that prediction need not be based solely on detailed, explicit ideas about what insights and applications might occur and how—it can also incorporate things like reference class forecasting.
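To make the reference-class idea concrete, here is a minimal sketch of blending an outside-view base rate with an inside-view estimate. Everything in it is invented for illustration: the reference classes, the base rates, and the weight are hypothetical, not drawn from real data.

```python
# Hypothetical illustration only: reference classes, base rates, and weights
# are invented for this sketch, not drawn from real data.

# Outside view: fraction of past projects in each (hypothetical) reference
# class that led to a large beneficial application within a few decades.
BASE_RATES = {
    "particle_physics": 0.05,
    "molecular_biology": 0.20,
    "machine_learning": 0.35,
}

def forecast_benefit(reference_class: str,
                     inside_view: float,
                     weight_on_base_rate: float = 0.7) -> float:
    """Blend an outside-view base rate with an inside-view estimate.

    A high weight on the base rate reflects distrust of detailed,
    explicit stories about future applications.
    """
    base_rate = BASE_RATES[reference_class]
    return weight_on_base_rate * base_rate + (1 - weight_on_base_rate) * inside_view

# An optimistic inside-view story (0.6) gets pulled toward the
# historical base rate of its reference class.
print(forecast_benefit("particle_physics", inside_view=0.6))  # 0.215
```

The point of the sketch is just that the forecast doesn’t have to stand or fall with the explicit story about applications; the outside view can do most of the work.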
My thought is that the exploration vs exploitation issue remains even if we also attempt to favour the areas where progress would be most beneficial. I don’t hold a strong position on this, but I’m somewhat skeptical that it’s possible to make very good predictions about the consequences of new discoveries in fundamental research.
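To illustrate the tension I have in mind, here is a toy sketch in the style of an epsilon-greedy bandit; the research areas and payoffs are entirely made up. Even with a mechanism for favouring the apparently best areas, the estimates only become accurate for areas that get explored, and any prediction errors feed straight into the allocation.

```python
import random

# Toy epsilon-greedy model of a research portfolio. The "true" payoffs are
# invented; in reality they are unknown in advance, which is the point.
TRUE_PAYOFF = {"area_A": 0.1, "area_B": 0.5, "area_C": 0.3}

estimates = {area: 0.0 for area in TRUE_PAYOFF}
counts = {area: 0 for area in TRUE_PAYOFF}
EPSILON = 0.2  # share of effort reserved for less-understood areas

for _ in range(1000):
    if random.random() < EPSILON:
        area = random.choice(list(TRUE_PAYOFF))   # explore
    else:
        area = max(estimates, key=estimates.get)  # exploit the current best guess
    reward = float(random.random() < TRUE_PAYOFF[area])  # noisy observed benefit
    counts[area] += 1
    estimates[area] += (reward - estimates[area]) / counts[area]  # running mean

print(estimates)  # accurate only for areas that were actually explored
```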
Thanks for the reading suggestions; I will be sure to check them out. If you think of any other reading recommendations supporting the feasibility of forecasting the consequences of research, I would be very grateful!
And “Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained” reminds me of critiques that utilitarianism would actually be counterproductive on its own terms, because constantly thinking like a utilitarian would be crippling (or whatever). But if that’s true, then utilitarianism just wouldn’t recommend constantly thinking like a utilitarian.
Likewise, if it is the case that “Steering science too much [based on explicitly predictable-in-advance paths to] societal gains might be counterproductive”, then a sophisticated approach to achieving societal gains just wouldn’t actually recommend doing that.
This is more or less my conclusion in the post, even if I don’t use the same wording. The reason I think it’s worth mentioning potential issues with a (naïve) welfarist focus is that if I were to work on science reform and only mention the utilitarian/welfarist framing, it could come across as naïve, or perhaps as opposed to fundamental research, which would make discussions unnecessarily difficult. I think this is less of a problem on the EA Forum than elsewhere 😊
Addition: When reading your post “Should marginal longtermist donations support fundamental or intervention research?” I realize that we may draw the line between applied and fundamental research a bit differently: the examples of fundamental research you give there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research, I imagine things more like research on elementary particles or black holes. This difference could explain why we might think differently about whether it’s feasible to predict the consequences of fundamental research.