Working for the Cooperative AI Foundation. I have a background in engineering and entrepreneurship, previously ran a small non-profit focused on the prevention of antibiotic resistance, and worked for EA Sweden. I received an EA Infrastructure grant for cause exploration in meta-science during 2021-22.
C Tilli
Improving science: Influencing the direction of research and the choice of research questions
As the author of this post, I found it interesting to re-read it more than a year later, because even though I remember the experience and feelings I describe in it, I do feel quite differently now. This is not because I came to some rational conclusion about how to think of self-worth vs instrumental value, but rather the issue has just kind of faded away for me.
It’s difficult to say exactly why, but I think it might be related to the fact that I have developed more close friendships with people who are also highly engaged EAs, where I feel that they genuinely care about me and spend time not just supporting me on high-impact work, but also checking in socially, hanging out, joking or talking about private stuff—that they like me and care about me as a person.
This makes me question the assumptions I made in the post about how feelings of self-worth are created in the religious context. Perhaps even in church the key thing is not the abstract idea of being “perfect in God’s eyes”, but rather the practical experience of feeling loved and accepted by the community and knowing they have your back. If this is right, that’s a very good thing, as it is something that can be re-created in a non-religious context.
So, if I were to update this post now, I might be able to develop some ideas for how we could work on this: perhaps a reason to be careful about over-optimizing our interpersonal meetings?
Thanks a lot for this post! I really appreciate it and think (as you also noted) that it could be really useful for career decisions too, as well as for structuring ideas around how to improve specific organizations.
we must be careful to avoid scenarios in which improving the technical quality of decision-making at an institution yields outcomes that are beneficial for the institution but harmful by the standards of the “better world”
I think this is a really important consideration that you highlight here. When working in an organization, my hunch is that one tends to get relatively immediate feedback on whether decisions are good for the organization itself, while feedback on how good decisions are for the world and in the long term is much more difficult to get.
For a user seeking to make casual or fast choices about prioritizing between institutional engagement strategies, for example a small consulting firm choosing among competing client offers, it’s perfectly acceptable to eschew calculations and instead treat the questions as general prompts to add structure and guidance to an otherwise intuitive process. Since institutional engagement can often carry high stakes, however, where possible we recommend at least trying a heuristic quantitative approach to deciding how much additional quantification is useful, if not more fully quantifying the decision.
I’m doing some work on potential improvements to the scientific research system, and after reading this post I’m thinking I should try to apply this framework to specific funding agencies and other meta-organizations in the research system. Do you have any further thoughts since posting this regarding how difficult vs valuable it is to attempt quantification of the values? Approximately how time-consuming is such work in your experience?
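To make the quoted suggestion a bit more concrete, here is a minimal sketch of what a heuristic check of “how much additional quantification is useful” might look like: comparing a rough value-of-information estimate against the cost of doing the fuller analysis. All numbers and variable names are hypothetical placeholders for illustration, not figures from the original post.

```python
# Hypothetical back-of-the-envelope check of whether deeper quantification is worth the effort.
# All numbers are illustrative placeholders, not estimates from the post.

decision_stakes = 500_000          # rough value at stake in the decision (e.g. USD of grant money)
p_analysis_changes_choice = 0.2    # chance that a fuller quantitative model would flip the decision
avg_improvement_if_flipped = 0.3   # fraction of the stakes gained if the flip is an improvement

value_of_information = decision_stakes * p_analysis_changes_choice * avg_improvement_if_flipped

analysis_cost = 40 * 150           # e.g. 40 hours of analyst time at $150/hour

print(f"Expected value of further quantification: ${value_of_information:,.0f}")
print(f"Estimated cost of the analysis:           ${analysis_cost:,.0f}")
print("Quantify further" if value_of_information > analysis_cost
      else "Treat the questions as prompts for an intuitive process")
```

The hard part is of course the inputs themselves, but even rough placeholder numbers like these can indicate whether the extra modelling effort is likely to pay off.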
Addition: When reading your post Should marginal longtermist donations support fundamental or intervention research? I realize that we maybe draw the line a bit differently between applied and fundamental research—the examples you give of fundamental research there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research I imagine more things like research on elementary particles or black holes. This difference could explain why we might think differently about whether it’s feasible to predict the consequences of fundamental research.
I share the view that the ultimate aim should basically be the production of value in a welfarist (and impartial) sense, and that “understanding the universe” can be an important instrumental goal for that ultimate aim. But I think that, as you seem to suggest elsewhere, how much “understanding the universe” helps and whether it instead harms depends on which parts of the universe are being understood, by whom, and in what context (e.g., what other technologies also exist).
So I wouldn’t frame it primarily as exploration vs exploitation, but as trying to predict how useful/harmful a given area of fundamental research—or fundamental research by a given actor—will be. And, crucially, that prediction need not be based solely on detailed, explicit ideas about what insights and applications might occur and how—it can also incorporate things like reference class forecasting.
My thought is that the exploration vs exploitation issue remains, even if we also attempt to favour the areas where progress would be most beneficial. I am not really convinced that it’s possible to make very good predictions about the consequences of new discoveries in fundamental research. I don’t have a strong position/belief regarding this but I’m somewhat skeptical that it’s possible.
Thanks for the reading suggestions, I will be sure to check them out – if you think of any other reading recommendations supporting the feasibility of forecasting consequences of research, I would be very grateful!
And “Steering science too much towards societal gains might be counterproductive as it is difficult to predict the usefulness of new knowledge before it has been obtained” reminds me of critiques that utilitarianism would actually be counterproductive on its own terms, because constantly thinking like a utilitarian would be crippling (or whatever). But if that’s true, then utilitarianism just wouldn’t recommend constantly thinking like a utilitarian.
Likewise, if it is the case that “Steering science too much [based on explicitly predictable-in-advance paths to] societal gains might be counterproductive”, then a sophisticated approach to achieving societal gains just wouldn’t actually recommend doing that.
This is more or less my conclusion in the post, even if I don’t use the same wording. The reason why I think it’s worth mentioning potential issues with a (naïve) welfarist focus is that if I’d work with science reform and only mention the utilitarian/welfarist framing, I think this could come across as naïve or perhaps as opposed to fundamental research and that would make discussions unnecessarily difficult. I think this is less of a problem on the EA Forum than elsewhere 😊
What would better science look like?
And totally agree about the Replacing Guilt series, it’s really good.
Hi Miranda! I’m glad you liked it, and I hope you feel better now. Since it’s been a while since I wrote this, I realize my perspective changes a lot over time—it feels less like a conflict or a problem for me right now, and not necessarily because I have rationally figured something out; it’s more that I have been focusing on other things and am generally in a better place. I don’t know how useful that is to you or anyone else, but to some extent it might mean that things can sometimes get better even if we don’t solve the issue that bothered us in the first place.
Regardless, the instrumental argument is difficult enough for me to put into practice!
Thinking of myself as a role model to others has been the most useful to me. Instead of thinking of exactly how much rest/vacation/enjoyment I need to function optimally, I try to think more about what are healthy norms to establish in a workplace or a community. What is good about that is that I get away from the tendency of thinking of myself as an exception who can somehow manage more than others—instead of thinking “Can I push myself a bit further?” the question becomes “Is it healthy/constructive if we all push ourselves in this way?”
But more than having “figured it out”, I have mostly just reached some kind of pragmatic stance where I allow myself to be egoistic in the sense that I prioritize myself and my loved ones much, much higher than we would “deserve” from some kind of detached ethical perspective. I don’t have any way to justify it really, I just admit it and accept it, and it helps me to move on to thinking about other things instead.
Feel free to reach out over DM if you want to chat!
I added an edit with a link to this thread now =)
Notably, my definition is a broader tent (in the context of metascience) than prioritization of science/metascience entirely from a purely impartial EA perspective.
I hadn’t formulated it so clearly for myself, but at this stage I would say I’m using the same perspective as you—I think one would have to have a much clearer view of the field/problems/potential to be able to do across-cause prioritization, and prioritization in the context of differential technological progress, in a meaningful way.
What I mean by this is that I think it’s plausible that there are immense (tens of billions to trillions) dollar bills lying on the floor in figuring out the optimal allocation of the above. I think a lot of these decisions are, in practice, based on lore, political incentives, and intuition. I believe (could definitely be wrong) that there’s very little careful theorizing and even less empirical data.
I think this seems like a really exciting opportunity!
On your listing of things that would be valuable vs less valuable, I have a roughly similar view at this stage though I think I might be thinking a bit more about institutional/global incentives and a bit less about improving specific teams (e.g. improving publishing standards vs improving the productivity of a promising research group). But at this stage, I have very little basis for any kind of ranking of how pressing different issues are. I agree with your view that replication crisis stuff seems important but relatively less neglected.
I think it would be very interesting/valuable to investigate what impactful careers in meta-research or improving research could be, and specifically to identify gaps where there are problems that are not currently being addressed in a useful way.
I think that we have a rather similar view actually—maybe it’s just the topic of the post that makes it seem like I am more pessimistic than I am? Even though this post focuses on mapping out problems in the research system, my point is not in any way that scientific research is useless—rather the opposite: I think it is very valuable, and that is why I’m so interested in exploring whether there are ways it can be improved. It’s not at all my intention to say that research, or researchers, or any other people working in the system for that matter, are “bad”.
It would be nice for the reader if papers were a crystal-clear guide for a novice to the field. Instead, you need a decent amount of sophistication with the field to know what to make of it all.
My concern is not that published papers are not clear guides that a novice could follow or understand. Especially now that there is an active debate around reproducibility, I would also not expect (good) researchers to be naive about it (and that has not at all been my personal experience from working with researchers). Still, it seems to me that if reproducibility is lacking in fields that produce a lot of value, initiatives that would improve reproducibility would be very valuable?
Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?
From what I have seen so far, I think the work by OSF (particularly on preregistration) and the publications from METRICS seem like they could be impactful—what do you think of these? The ARRIVE guidelines also seem like a very valuable initiative for the reporting of research with animals.
Thanks for this!
You make a good point; the part on funding priorities does become kind of circular. Initially the heading there was “Grantmakers are not driven by impact”—but that got confusing since I wanted to avoid defining impact (because that seemed like a rabbit hole that would make it impossible to finish the post). So I just changed it to “Funding priorities of grantmakers”—but your comment is valid with either wording: it does make sense that the one who spends the resources should set the priorities for what they want to achieve.
I think there is still something there though—maybe, as you say, a lack of alignment in values—but maybe even more a lack of skill in how the priorities are enforced or translated into incentives? It seems like even though the high-level priorities of a grantmaker are theirs to define, the execution of the grantmaking sometimes promotes something else? E.g. a grantmaker that has a high-level objective of improving public health, but where the actual grants go to very hyped-up fields that are already getting enough funding, or where the investments are mismatched with disease burdens or patient needs. In a way, this is similar to ineffective philanthropy in general—perhaps “ineffective grantmaking” would be an appropriate heading?
Thank you for this perspective, very interesting.
I definitely agree with you that a field is not worthless just because the published figures are not reproducible. My assumption would be that even if it has value now, it could be a lot more valuable if reporting were more rigorous and transparent (and that potential increase in value would justify some serious effort to improve rigor and transparency).
Do I understand your comment correctly that you think that, in your field, the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or to end users in industry?
That got me thinking—if that were the case, would we actually need peer-reviewed publications at all for such a field? I’m thinking that the public would rather read popular science articles anyway, and that these could be produced with much less effort by science journalists? (Maybe I’m totally misunderstanding your point here, but if not I would be very curious to hear your take on such a model.)
I think it’s a really interesting, but also very difficult, idea. Perhaps one could identify a limited field of research where this would be especially valuable (or especially feasible, or ideally both), and try it out within that field as an experiment?
I would be very interested to know more if you have specific ideas of how to go about it.
Thank you! Joined both and looking forward to reading your posts!
So glad to hear that, and thanks for the added reference to letsfund!
On peer review I agree with Edo’s comment, I think it’s more about setting a standard than about improving specific papers.
On IP, I think this is very complex, and I think “IP issues” can be a barrier both when something is protected and when it’s not. I have personally worked in the periphery of projects where failing to protect/maintain IP has been the end of the road for potentially great discoveries, but I have also seen the other phenomenon, where researchers avoid a specific area because someone else holds the IP. It would be interesting to get a better understanding both of the scale of these problems and of whether any of the initiatives that currently exist seem promising for improving it.
Why scientific research is less effective in producing value than it could be: a mapping
I am so grateful for WAMBAM, the mentorship program for women, trans and non-binary people in EA. It is so well-run and well-thought through, and it has really helped me develop professionally and personally and also made me a lot more connected to the international EA community.
I am also really grateful that the EA Forum exists!
I can obviously only speak for myself, but for me just having this kind of conversation is in itself very comforting since it shows that there are more people who think about this (i.e. it’s not just “me being stupid”). Disagreement doesn’t seem threatening as long as the tone is respectful and kind. In a way, I think it rather becomes easier to treat my own thoughts more lightly when I see that there are many different ways that people think about it.
Thanks for your comment! I’m uncertain; I think it might depend on the context in which the discussion is brought up and on the framing. But it’s a tricky one for sure, and I agree that specific targeted advocacy seems less risky.