Working for the Cooperative AI Foundation. I have a background in engineering and entrepreneurship, previously ran a small non-profit focused on preventing antibiotic resistance, and worked for EA Sweden. I received an EA Infrastructure grant for cause exploration in meta-science during 2021-22.
C Tilli
Thank you Shaun!
I found myself wondering where we would fit AI Law / AI Policy into that model.
I would think policy work might be spread out over the landscape? As an example, if we think of policy work aiming to establish the use of certain evaluations of systems, such evaluations could target different kinds of risks/qualities that would map to different parts of the diagram?
Cooperative AI: Three things that confused me as a beginner (and my current understanding)
Interesting perspective!
I personally believe that many, if not most, of the world’s most pressing problems are political problems, at least in part.
I agree! But if this is true, doesn’t it seem very problematic if a movement that means to do the most good does not have tools for assessing political problems? I think you may be right that we are not great at that at the moment, but it seems… unambitious to just accept that?
I also think that many people in EA do work with political questions, and my guess would be that some do it very well—but that most of those do it in a full-time capacity that is something different from “citizen politics”. Could it be that rather than EA being poorly suited to assessing political issues, EA does not (yet) have great tools for assessing part-time activism, which would be a much more narrow claim?
Thank you for this comment—this is indeed very relevant context, much of which I was not previously aware of.
Thanks for commenting!
I think there are two different things to figure out: 1) should we engage with the situation at all? and 2) if we engage, what should we do/advocate for?
I might be wrong about this, but my perception so far is that many EAs, based on some ITN reasoning, answer the first question with a no, and then the second question becomes irrelevant. My main point here is that I think it is likely that the answer to the first question could be yes?
For this specific case I personally believe that a ceasefire would be more constructive than the alternative, but even if you disagree with that this would not automatically mean that the best thing is not to engage at all. Or do you think it does?
Naive application of the ITN framework on a situation like the one in Gaza might lead us wrong
Strongly agree. Of course, what works varies from person to person, but I think it’s a little odd that both EAG and EAGx seem to always be over the weekend, and I would be curious to see how the composition of attendees would shift if an event was held on work days.
Thanks, I’m glad you found it useful!
- Having spent a couple of months working on this topic, do you still think AI science capabilities are especially important to explore, cf. AI in other contexts? I ask because I’ve been thinking and reading a lot about this recently, and I keep changing my mind about the answer.
Answering just for myself and not for the team: I don’t have a confident answer to this. I have updated in the direction that capabilities for autonomous science work are more similar to general problem-solving capabilities than I thought previously. I think that means that these capabilities could be more likely to emerge from a powerful general model than from a narrow “science model”.
Still, I think there is something specific about how the scientific process develops new knowledge and then builds on that, and how new findings can update the world-view in a way that might discredit a lot of the previous training data (or change how it’s interpreted).
A Study of AI Science Models
Interesting!
What is your assessment of current risk awareness among the researchers you work with (outside of survey responses), and their interest in such perspectives?
Thank you so much for this post! It is SO nice to read about this in a framing that is inspiring/positive—I think it’s unavoidable and not wrong that we often focus on criticism and problem description in relation to diversity/equality issues but that can also make it difficult and uninspiring to work with improvement. I love the framing you have here!
For me Magnify has been super important to balance my idea of what kind of people the EA movement consists of and to feel more at home in the community!
Bioweapons shelter project launch
Thanks a lot for your comment and offer! I’ll send you a message =)
Thanks for this! I’ve been thinking quite a bit about this (see some previous posts), and there is a bit of an emerging EA/metascience community—I would be happy to chat if you’re interested!
Some specific comments:
In consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline being referees for high-fee journals.
Could you elaborate on the change in the system you envision as a result of something like this? My current thinking (but I’m very open to being convinced otherwise) is that lower fees to access publications wouldn’t really change anything fundamental about what science is being done, which makes it seem like a lot of work for limited gains?
I agree with him that we need to split up work. Some people like, enjoy, and are better at teaching. Others, at doing research. I really don’t think one should be requested to do everything. In addition, dedicated science evaluators might help a lot with replication problems, referee quality, and speed…
I think there is something here—I think it could be valuable to have more diverse career paths that would allow people to build on their strengths, rather than just having tasks depending on seniority. It also seems like something where it’s not necessary to design one perfect system, but rather that different institutions could work with different models (just like different private companies work with different models of recruitment and internal career paths). I think it would be very interesting if someone would do (have done?) an overview of how this looks today globally, perhaps there are already some institutions that have quite different ways of allocating tasks?
My crux here would be that even though I think this has a potential to make research much more enjoyable to a broader group, it’s a bit unclear if it would actually lead to better science being done. I want to think that it would, but I can’t really make a strong argument for it. I do think efficiency would increase, but I’m not sure we’d work on more important questions or do work of higher quality because of it (though we might!).
this is probably a consequence of too many people enjoying doing science with respect to the number of available research jobs
You could be right, but it’s not obvious to me. I have the impression a lot of people doing science find it quite hard and not very enjoyable, especially at junior levels. It would be very interesting to know more about what attracts people to science careers, and what the reasons for staying are—I think it’s very possible that status, and being in a completely academic social context that makes other career paths feel abstract, plays an important role. Anecdotally, I dropped out of a PhD position after one year, and even though I really didn’t enjoy it, dropping out felt like a huge failure at the time in a way that voluntarily quitting a “normal” job would not.
Thanks, great to hear =)
I’m quite unsure about which ideas have the best ROI, and I think which idea would be most suitable would depend a lot on who was looking to execute a project. That said, I’m personally most excited about the potential of working with research policy at different levels—from my current understanding this just seems extremely neglected compared to how important it could be, and if I’d make a guess about which of these ideas I might myself be working on in a few years, it would be research policy.
Short term, I’d be most excited to see the projects happen that would provide more information (e.g. identifying the most important institutions, understanding how policy documents actually translate into specific research being done, understanding the dynamics of existing contexts where experts and non-academics discuss research agendas, evaluating existing R&D hubs, etc.)—with this information available, I would hope it would be possible to prioritise between different larger projects.
I’m curious, what would you yourself think would be most important and/or have the best ROI?
Cool—my immediate thought is that it would be interesting to see a case study of (1) and/or (2) - do you know of this being done for any specific case? Perhaps we could schedule a call to talk further—I’ll send you a DM!
Interesting. I think a challenge would be to find the right level of complexity for a map like that—it needs to be simple enough to give a useful overview, but complex enough that it models everything that’s necessary to make it a good tool for decision-making.
Who do you imagine would be the main user of such a mapping? And for which decisions would they mainly use it? I think the requirements would be quite different depending on whether it’s to be used by non-experts such as policymakers or grantmakers, or by researchers themselves?
Thanks for your comment! I’m uncertain—I think it might also depend on in what context the discussion is brought up, and with what framing. But it’s a tricky one for sure, and I agree specific targeted advocacy seems less risky.
Thanks—yes, I agree, and the study of collusion is often included in the scope of cooperative AI (e.g. methods for detecting and preventing collusion between AI models are among the priority areas of our current grant call at the Cooperative AI Foundation).