I work for the Cooperative AI Foundation. I have a background in engineering and entrepreneurship, previously ran a small non-profit focused on preventing antibiotic resistance, and worked for EA Sweden. I received an EA Infrastructure grant for cause exploration in meta-science during 2021–22.
C Tilli
Thanks a lot for this comment. I feel like I need to read it over again and think more about it, so I don’t have a detailed or clever response, but I really appreciate it. The comparison to other things that have mainly or only instrumental value, and how much we actually value those things, was also a new and useful perspective for me.
Interesting thought. I’m not sure if what I had was the mainstream understanding of Christianity, but I didn’t experience that there was this kind of conflict in the same way. I’d think that the intrinsic value of being created and loved by God was not really something that could pale in comparison to anything. But I don’t know, and maybe it’s not very important.
I think there is a difference between justifying spending resources on our own wellbeing and being able to feel valuable independent of performance. Feeling valuable is of course related to feeling like we deserve to have resources spent on us, but I don’t think it’s exactly the same.
Actually, my concerns are more practical, along the lines of Robert’s comment: that this kind of thinking could be bad for mental health and, indeed, for long-term productivity and impact. If the perception of self-worth didn’t seem important for mental health, I would not care much about it.
But it would be a sad scenario if we look back in 50 years and see that the EA movement has led to a lot of capable, ambitious people burning out because we (inadvertently) encouraged (or failed to counteract) destructive thought patterns. I don’t think there is a simple solution, but I think Will Bradshaw is on to something in his comment about the need to “generate community structures and wisdom literature to help manage this tension, care for each other, and create the emotional (as well as intellectual) conditions we need to survive and flourish.”
Edit: I originally made mistakes in the calculation below, have edited to correct this. See comment below by willbradshaw for details of the calculation.
Thanks! I completely agree there are other strong reasons to reduce (or eliminate) factory farming.
About your other comment – I also don’t think the situation is reassuring at all. I think it’s very plausible that the antibiotic use in agriculture could be an important driver of antibiotic resistance.
I think we need more research both on the jumping of species barriers and on horizontal gene transfer. This paper could be interesting if you want to read more on how common horizontal gene transfer is, but I haven’t been able to find anything that gives a good assessment of how important this is for resistance (I would be very grateful for suggestions if you or someone else knows of good research on this!).
An analogous problem is that human patients often develop resistant bacteria in the normal gut microbiota when they take antibiotics, and these could also be transferred to pathogens through horizontal gene transfer; again, we don’t know how much this happens. Another complication is that some bacteria can live in the environment, on animals, or in the normal bacterial flora, and then act as pathogens when they end up in the “wrong” place: for example, bacteria that are harmless in the gut flora can cause urinary tract infections. If these develop resistance, they don’t need any gene transfer; they simply change location to cause problems.
Personally, if I try to speculate, I would reason that it’s very unlikely that antibiotic use on animals drives resistance in human infections as much per kg used as antibiotic use on humans does. So if we assume that 75% of the kg of antibiotics are used on animals, I would say it’s unlikely to drive as much as 75% of the resistance burden in humans. If it were 10% as efficient at driving resistance in humans because of the “transfer barriers”, that would mean that 7.5% (CORRECTION: should be 23%, see comments below) of the resistance burden in humans would go away if we eliminated antibiotic use in agriculture. That would still be very significant and worth pursuing. Of course, I don’t know at all whether the number should be 10% or 50% or 1%, nor could I say whether that makes it a more significant driver than, for example, the misuse of antibiotics for viral infections.
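To make the back-of-envelope arithmetic explicit, here is a minimal sketch of the attributable-fraction calculation, using the illustrative numbers above (75% of kg used on animals, each animal kg 10% as effective at driving human resistance as a human kg). Both inputs are assumptions for illustration, not empirical estimates.

```python
# Illustrative back-of-envelope calculation: what share of the human
# resistance burden would go away if agricultural antibiotic use were
# eliminated, under the assumptions stated above?

animal_share_kg = 0.75      # assumed fraction of antibiotic kg used on animals
relative_efficiency = 0.10  # assumed resistance-driving effect per animal kg,
                            # relative to a kg used on humans

# Selection pressure contributed by each sector, in "human-kg equivalents"
animal_pressure = animal_share_kg * relative_efficiency  # 0.75 * 0.10 = 0.075
human_pressure = (1 - animal_share_kg) * 1.0             # 0.25

# Fraction of total pressure attributable to agriculture
attributable_fraction = animal_pressure / (animal_pressure + human_pressure)
print(f"{attributable_fraction:.0%}")  # → 23%
```

Note the pitfall the correction points at: the agricultural share is its pressure divided by the *total* pressure (0.075 / 0.325 ≈ 23%), not simply 0.75 × 0.10 = 7.5%, which ignores that human use only accounts for a quarter of the kg.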
Apart from being speculative, this reasoning is also very simplified, since there are many different classes of antibiotics that target different bacteria. It is also unclear how best to measure “resistance burden”. If we measure it as QALYs lost, it makes a big difference which bacteria are resistant and to which antibiotic(s). In some cases resistance just means switching the patient to another drug, and the patient recovers a couple of days later. In other cases, such as tuberculosis, resistance can mean the alternative treatment is prolonged by years, has serious side effects, and is also much more expensive. In some cases, the patient dies. Resistance to so-called last-line antibiotics is much more severe than other resistance. This report lists the WHO priority pathogens; I think it gives a good sense of how much worse the treatment for resistant tuberculosis is compared to “regular” tuberculosis, for example (p. 21).
Even though I think we need more research to understand this better (and thereby how to design interventions and allocate resources), I don’t think that should be taken as a reason to wait with reforming agriculture to eliminate (non-therapeutic) antibiotic use. We know the mechanisms are there, and we also know that it’s possible to remove the antibiotics from production, and I don’t think we can afford to wait and see. This urgency is also why I think it unwise to tie this cause too closely to the elimination of factory farming, for example: even if that would be good, I think it would take too long, and I think it’s possible to eliminate the antibiotic use much faster if we keep it as a (relatively) separate issue. In contexts where one can be nuanced, I don’t think it’s wrong to bring up antibiotic use and resistance as a negative effect of factory farming, but I agree with you that there are stronger arguments to lead with when arguing for an end to factory farming.
Thanks a lot for this post!
Thanks for this! I’ve been thinking quite a bit about this (see some previous posts) and there is a bit of an emerging EA/metascience community, would be happy to chat if you’re interested!
Some specific comments:
In consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline being referees for high-fee journals.
Could you elaborate the change in the system you envision as a result of something like this? My current thinking (but very open to being convinced otherwise) is that lower fees to access publications wouldn’t really change anything fundamental about what science is being done, which makes it seem like a lot of work for limited gains?
I agree with him that we need to split up work. Some people like, enjoy, and are better at teaching. Others, at doing research. I really don’t think one should be requested to do everything. In addition, dedicated science evaluators might help a lot with replication problems, referee quality, and speed…
I think there is something here: it could be valuable to have more diverse career paths that allow people to build on their strengths, rather than having tasks allocated purely by seniority. It also seems like something where it’s not necessary to design one perfect system; different institutions could work with different models (just as different private companies have different models of recruitment and internal career paths). I think it would be very interesting if someone did (or has done?) an overview of how this looks globally today; perhaps there are already institutions with quite different ways of allocating tasks?
My crux here would be that even though I think this has a potential to make research much more enjoyable to a broader group, it’s a bit unclear if it would actually lead to better science being done. I want to think that it would, but I can’t really make a strong argument for it. I do think efficiency would increase, but I’m not sure we’d work on more important questions or do work of higher quality because of it (though we might!).
this is probably a consequence of too many people enjoying doing science with respect to the number of available research jobs
You could be right, but it’s not obvious to me. I have the impression that a lot of people doing science find it quite hard and not very enjoyable, especially at junior levels. It would be very interesting to know more about what attracts people to science careers, and what their reasons for staying are. I think it’s very possible that status, and being in a completely academic social context that makes other career paths feel abstract, plays an important role. Anecdotally, I dropped out of a PhD position after one year, and even though I really didn’t enjoy it, dropping out felt like a huge failure at the time in a way that voluntarily quitting a “normal” job would not.
As the author of this post, I found it interesting to re-read it more than a year later, because even though I remember the experience and feelings I describe in it, I do feel quite differently now. This is not because I came to some rational conclusion about how to think of self-worth vs instrumental value, but rather the issue has just kind of faded away for me.
It’s difficult to say exactly why, but I think it might be related to the fact that I have developed more close friendships with people who are also highly engaged EAs, where I feel that they genuinely care about me and spend time not just supporting me on high-impact work, but on socially checking in and hanging out, joking or talking about private stuff; that they like me and care about me as a person.
This makes me question the assumptions I made in the post about how feelings of self-worth are created in the religious context. Perhaps even in church the key thing is not the abstract idea of being “perfect in God’s eyes”, but rather the practical experience of feeling loved and accepted by the community and knowing they have your back. If this is right, that’s very good news, as that is something that can be re-created in a non-religious context.
So, if I’d update this post now, I might be able to develop some ideas for how we could work on this: perhaps a reason to be careful with over-optimizing our interpersonal meetings?
Hi Miranda! I’m glad you liked it, and I hope you feel better now. Since it’s been a while since I wrote this I realize my perspective changes a lot over time—it feels less like a conflict or a problem for me right now, and not necessarily because I have rationally figured something out, it’s more like I have been focusing on other things and am generally in a better place. I don’t know how useful that is to you or anyone else, but to some extent it might mean that things can sometimes get better even if we don’t solve the issue that bothered us in the first place.
Regardless, the instrumental argument is difficult enough for me to put into practice!
Thinking of myself as a role model to others has been the most useful to me. Instead of thinking of exactly how much rest/vacation/enjoyment I need to function optimally, I try to think more about what are healthy norms to establish in a workplace or a community. What is good about that is that I get away from the tendency of thinking of myself as an exception who can somehow manage more than others—instead of thinking “Can I push myself a bit further?” the question becomes “Is it healthy/constructive if we all push ourselves in this way?”
But more than having “figured it out”, I have mostly just reached some kind of pragmatic stance where I allow myself to be egoistic, in the sense that I prioritize myself and my loved ones much, much higher than we would “deserve” from some kind of detached ethical perspective. I don’t have any way to justify it really; I just admit it and accept it, and that helps me move on to thinking about other things instead.
Feel free to reach out over DM if you want to chat!
Interesting perspective!
I personally believe that many, if not most, of the world’s most pressing problems are political problems, at least in part.
I agree! But if this is true, doesn’t it seem very problematic if a movement that means to do the most good does not have tools for assessing political problems? I think you may be right that we are not great at that at the moment, but it seems… unambitious to just accept that?
I also think that many people in EA do work with political questions, and my guess would be that some do it very well, but that most of those do it in a full-time capacity that is something different from “citizen politics”. Could it be that, rather than EA being poorly suited to assessing political issues, EA does not (yet) have great tools for assessing part-time activism, which would be a much narrower claim?
Thank you so much for this post! It is SO nice to read about this in a framing that is inspiring/positive—I think it’s unavoidable and not wrong that we often focus on criticism and problem description in relation to diversity/equality issues but that can also make it difficult and uninspiring to work with improvement. I love the framing you have here!
Thanks a lot for this post! I really appreciate it and think (as you also noted) that it could be really useful for career decisions as well, and for structuring ideas around how to improve specific organizations.
we must be careful to avoid scenarios in which improving the technical quality of decision-making at an institution yields outcomes that are beneficial for the institution but harmful by the standards of the “better world”
I think this is a really important consideration that you highlight here. When working in an organization my hunch is that one tends to get relatively immediate feedback on if decisions are good for the organization itself, while feedback on how good decisions are for the world and in the long term is much more difficult to get.
For a user seeking to make casual or fast choices about prioritizing between institutional engagement strategies, for example a small consulting firm choosing among competing client offers, it’s perfectly acceptable to eschew calculations and instead treat the questions as general prompts to add structure and guidance to an otherwise intuitive process. Since institutional engagement can often carry high stakes, however, where possible we recommend at least trying a heuristic quantitative approach to deciding how much additional quantification is useful, if not more fully quantifying the decision.
I’m doing some work on potential improvements to the scientific research system, and after reading this post I’m thinking I should try to apply this framework to specific funding agencies and other meta-organizations in the research system. Do you have any further thoughts since posting this regarding how difficult vs valuable it is to attempt quantification of the values? Approximately how time-consuming is such work in your experience?
So glad to hear that, and thanks for the added reference to letsfund!
On peer review I agree with Edo’s comment, I think it’s more about setting a standard than about improving specific papers.
On IP, I think this is very complex: “IP issues” can be a barrier both when something is protected and when it’s not. I have personally worked on the periphery of projects where failing to protect or maintain IP was the end of the road for potentially great discoveries, but I have also seen the opposite phenomenon, where researchers avoid a specific area because someone else holds the IP. It would be interesting to get a better understanding both of the scale of these problems and of whether any of the initiatives that currently exist seem promising for improving the situation.
I can obviously only speak for myself, but for me just having this kind of conversation is in itself very comforting since it shows that there are more people who think about this (i.e. it’s not just “me being stupid”). Disagreement doesn’t seem threatening as long as the tone is respectful and kind. In a way, I think it rather becomes easier to treat my own thoughts more lightly when I see that there are many different ways that people think about it.
I think I mostly agree with this, and I’d also like to clarify that I don’t think this problem originates from EA or from my contact with EA. It is not that I feel that “EA” demands too much of me, rather that when I focus a lot on impact potential it becomes (even more) difficult to separate self-worth from performance.
Different versions of contingent self-worth (contingent self-esteem, performance-contingent self-esteem—there are a lot of similar concepts and I am not completely sure about which terms to use, but basically the concept that how much we like and value ourselves is connected strongly to our ability to perform) seem to be a problem for a lot of people outside of EA, that also relates to the risk for burn-out.
My thinking is that there are people with this issue in EA, possibly more than in the general population, and that even though it does not come from EA philosophy there is some relation between these types of self-worth issues and a focus on instrumental value. I’m not arguing that this is “right” or useful, I think it’d be a lot better if we could all have a strong and stable sense of non-contingent self-worth.
Thanks—yes I agree, and study of collusion is often included into the scope of cooperative AI (e.g. methods for detecting and preventing collusion between AI models is among the priority areas of our current grant call at Cooperative AI Foundation).
Thanks for commenting!
I think there are two different things to figure out: 1) should we engage with the situation at all? and 2) if we engage, what should we do/advocate for?
I might be wrong about this, but my perception so far is that many EAs, based on some ITN reasoning, answer the first question with a no, and then the second question becomes irrelevant. My main point here is that I think it is likely that the answer to the first question could be yes?
For this specific case I personally believe that a ceasefire would be more constructive than the alternative, but even if you disagree with that this would not automatically mean that the best thing is not to engage at all. Or do you think it does?
Interesting!
What is your assessment of current risk awareness among the researchers you work with (outside of survey responses), and their interest in such perspectives?
Interesting. I think a challenge would be to find the right level of complexity for a map like that: it needs to be simple enough to give a useful overview, but complex enough that it models everything necessary to make it a good tool for decision-making.
Who do you imagine would be the main users of such a mapping? And for which decisions would they mainly use it? I think the requirements would be quite different depending on whether it’s to be used by non-experts such as policymakers or grantmakers, or by researchers themselves?
Addition: When reading your post Should marginal longtermist donations support fundamental or intervention research? I realize that we maybe draw the line a bit differently between applied and fundamental research. The examples you give of fundamental research there (e.g. the drivers of and barriers to public support for animal welfare interventions) seem quite applied to me. When I think of fundamental research I imagine more things like research on elementary particles or black holes. This difference could explain why we might think differently about whether it’s feasible to predict the consequences of fundamental research.
For me Magnify has been super important to balance my idea of what kind of people the EA movement consists of and to feel more at home in the community!