I am a research engineer working on AI safety at DeepMind. Formerly working at Improbable on simulations for decision making
I’m interested in AGI safety, complexity science, software engineering, models and simulations.
@djbinder Thanks for taking the time to write these comments. No need to worry about being negative; this is exactly the sort of healthy debate that I want to see around this subject.
I think you make a lot of fair points, and it’s great to have these insights from someone with a background in theoretical physics. However, I still disagree slightly on some of them, and I will try to explain why below.
I don’t think the only meaningful definition of complex systems is that they aren’t amenable to mathematical analysis; that is perhaps a common feature of them, but not a universal one. I would say the main hallmark is a surprising level of sophisticated behaviour arising from apparently simple rules at the level of the system’s individual components, and the challenge this creates in managing and predicting such systems.
It is true that the terms “complexity” and “emergence” are not formally defined, which perhaps means they end up being used in an overly broad way; the area of complexity science has also been somewhat prone to hype. I myself have felt uncomfortable with the term “emergence” at times, as it is still a bit vague for my tastes, but I have landed on the view that it is a good way to recognise certain properties of a system and to categorise different systems. I agree with Eliezer Yudkowsky’s point that it isn’t a sufficient explanation of behaviour, but it is still a relevant aspect of a system to look for, and it can shape expectations. The aspiration of complexity science is to provide more formal definitions of these terms, so I agree there is more work to do to refine them. However, the fact that these terms can’t yet be formally or mathematically defined doesn’t mean they have no place in science; the same is true of words like “meaning” and “consciousness”, which are nonetheless important concepts.
I think the main point of disagreement is whether “complexity science” is a useful umbrella term. I agree that plenty of valuable interdisciplinary work applying ideas from physics to the social sciences is done without reference to “complexity” or “complex systems”. However, by highlighting common themes between these different areas, I think complexity science has promoted far more interdisciplinary work than would have happened otherwise. With the review paper you linked, I would be surprised if many of the authors didn’t have some connection to complexity science or SFI at some point; in fact one of the authors directs a lab called the “Center for Complex Networks and Systems Research”. Even Steven Strogatz, whose textbook you mentioned, was an external SFI professor for a while! Admittedly, his affiliation doesn’t mean that complexity science can take credit for all his prior work. Most complexity scientists do not mention complexity or emergence much in their published papers, which just read as rigorous papers in a specific domain. The flip side is that this perhaps casts doubt on the utility of these terms, as you argued. But I would say that this framing of the problem (as “complex systems” in different domains sharing underlying features) has helped to motivate and initiate a lot of this work. Complexity economics is a great example: economics has always borrowed ideas from physics (all the way back to Walrasian equilibrium), but this process had stalled somewhat in the latter half of the 20th century. Complexity science has injected a lot of new and valuable ideas into economics, and I would say this comes from framing the economy as a complex system, not just from SFI getting the right people in the same room together (although that is a necessary part).
Perhaps I am just less optimistic than you about how easy it is to do good interdisciplinary work, and how much of this would happen organically in this area without a dedicated movement towards this. I maintain that complexity science is a good way to encourage researchers to push into problem areas that are less amenable to reductionism or mathematical analysis, since this is often very difficult and risky.
Anyway, the main reason I wanted to write this blog post is not so that EA people go around waxing lyrical with words like “complexity” and “emergence” all the time, but to point to complexity science as an example of a successful interdisciplinary movement that EA can perhaps learn from (even just from a public relations point of view), and also to look at some of the tools from complexity science (e.g. agent-based models, or ABMs) and suggest that these might be useful. @Venkatesh makes a good point that my main recommendation here is that ABMs may be usefully applied to EA cause areas, so perhaps I should have separated that bit out into a separate forum post.
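As a concrete illustration of the kind of tool I mean by ABMs, here is a minimal Python sketch of Schelling’s segregation model, a classic example where agents with only a mild preference for similar neighbours produce strong segregation at the global level. The grid size, threshold, and seeds are arbitrary illustrative choices, not from any particular study.

```python
# Minimal sketch of Schelling's segregation model (a simple ABM).
# Parameters below are illustrative, not from any particular paper.
import random

SIZE = 20        # side length of the (toroidal) grid
THRESHOLD = 0.4  # minimum fraction of like neighbours an agent tolerates

def make_grid(size, empty_frac=0.1, seed=0):
    """Random grid: each cell holds 'A', 'B', or None (empty)."""
    rng = random.Random(seed)
    grid = {}
    for r in range(size):
        for c in range(size):
            roll = rng.random()
            if roll < empty_frac:
                grid[(r, c)] = None
            else:
                grid[(r, c)] = "A" if roll < (1 + empty_frac) / 2 else "B"
    return grid

def neighbours(grid, r, c, size):
    """The 8 surrounding cells, wrapping around the grid edges."""
    return [grid[((r + dr) % size, (c + dc) % size)]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def step(grid, size, threshold, rng):
    """One sweep: each unhappy agent moves to a random empty cell."""
    empties = [cell for cell, v in grid.items() if v is None]
    moved = 0
    for (r, c), agent in list(grid.items()):
        if agent is None or grid[(r, c)] != agent:
            continue  # empty cell, or its occupant changed during this sweep
        occ = [n for n in neighbours(grid, r, c, size) if n is not None]
        if occ and sum(n == agent for n in occ) / len(occ) < threshold and empties:
            dest = rng.choice(empties)
            grid[dest], grid[(r, c)] = agent, None
            empties.remove(dest)
            empties.append((r, c))
            moved += 1
    return moved

grid = make_grid(SIZE)
n_agents = sum(v is not None for v in grid.values())
rng = random.Random(1)
for _ in range(50):
    if step(grid, SIZE, THRESHOLD, rng) == 0:
        break  # every agent is content: the model has settled
```

Even a toy like this shows the “simple rules, surprising global behaviour” pattern: no agent wants segregation, yet clustered neighbourhoods emerge, which is exactly the sort of dynamic that is hard to see from equations alone.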
Regarding AI alignment and existential risk in general, Cummings already has a blog post where he mentions these: https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/
So he is clearly aware of and responsive to these ideas; it would be great to have an EA-minded person on his new team to emphasise them.
This is a good point, although I suppose you could still think of this within the framing of “just-in-time learning”: you can attempt a deep RL project, realise you are hopelessly out of your depth, and then know you’d better go through Spinning Up in Deep RL before you can continue.
Although the risk is that it may be demoralising to start something which is too far outside of your comfort zone.
Is there a list of the ideas that the fellows were working on? I’d be curious.
It’s not surprising to me that there aren’t many “product focused” traditional startup style ideas in the longtermist space, but what does that leave? Are most of the potential organisations research focused? Or are there some other classes of organisation that could be founded? (Maybe this is a lack of imagination on my part!)
Thanks for the great list of resources!
Coincidentally I just discovered the Jim Rutt Show podcast recently and I’ve been enjoying it.
I can think of a few other areas of direct impact which could particularly benefit from talented software engineers:
Improving climate models is a potential route for high impact on climate change, there are computational modelling initiatives such as the Climate Modeling Alliance and startups such as Cervest. It would also be valuable to contribute to open source computational tools such as the Julia programming language and certain Python libraries etc.
There is also the area of computer simulations for organisational / government decision making, such as Improbable Defence (disclosure: I am a former employee and current shareholder), Simudyne and Hash.ai. I’ve heard anecdotally that a few employees of Hash.ai are sympathetic to EA, but I don’t have first hand evidence of this.
More broadly, there are many areas of academic research, not just AI safety, which could benefit from more research software engineers. The Society of Research Software Engineering aims to provide a community for research engineers and to make this a more established career path. This type of work in academia tends to pay significantly less than private-sector software roles, so it is worse for ETG, but the flip side is that this is an argument for it being a relatively neglected opportunity.
These links are excellent! I hadn’t come across these before, but I am really excited about the idea of using roleplay and table top games as a way of generating insight and getting people to think through problems. It’s great to see this being applied to AI scenarios.
Thanks for writing this, in my opinion the field of complex systems provides a useful and under-explored perspective and set of tools for AI safety. I particularly like the insights you provide in the “Complex Systems for AI Safety” section, for example that ideas in complex systems foreshadowed inner alignment / mesa-optimisation.
I’d be interested in your thoughts on modelling AGI governance as a complex system, for example race dynamics.
I previously wrote a forum post on how complex systems and simulation could be a useful tool in EA for improving institutional decision making, among other things: https://forum.effectivealtruism.org/posts/kWsRthSf6DCaqTaLS/what-complexity-science-and-simulation-have-to-offer
Very useful to know, thanks for the context!
Congratulations, this is really great to hear, and seems like a fantastic opportunity!
Out of interest, what was the sequence of events: did you already have a PhD program lined up when you applied for funding, or are you going to apply for one now that you have the funding? Also, had you already discussed this with your current employer before applying?
I only ask because I have been considering attempting to do something similar!
I massively agree with the idea of “just do a project”, particularly since it’s a better way of practising the type of research skills (like prioritisation and project management) that you will need to be a successful researcher.
I suppose the challenge may be choosing a topic for your project, but reaching out to others in the community may be one good avenue for harvesting project ideas.
What are your thoughts on re-implementing existing papers? It can be a good way to develop technical skills, and maybe a middle ground between learning pre-requisites and doing your own research project? Or would you say it’s better to just go for your own project?
Hey Alex, thanks for writing this, loads of useful advice in here that I want to try!
I have had similar (but seemingly milder than yours) problems with low energy, where I just felt very lethargic and drained periodically (about once a month). I would compare it to how you feel on the first day of getting a flu or cold, with low energy and mild muscle aches. I went to the doctor with a similar story to yours: they ran some blood tests, found nothing wrong, and that was it.
The answer: it was almost certainly stress. I was in a management position at work, and I think I was kidding myself about the stress because I wasn’t working super long hours or anything like that; what really made it so bad was the constant uncertainty and chaos. I was working at a startup that was going through constant re-organisations and strategy pivots, which really took its toll after a while. It was made much worse by the fact that I was a manager and felt responsible for shielding my team from all this. This is all to say that people have varying levels of resilience to different types of stress: for me, uncertainty and being responsible for others are difficult, but I am quite resilient to other situations that a lot of people find stressful (for example tight deadlines or public speaking).
The solution was to change my role away from being a manager and into an individual contributor role in a research team. It took quite a long time for recovery but it’s been about a year and a half now and the situation is much better. It felt like a very difficult decision at the time (because I was really stressed and this was affecting my decision making) but in retrospect it was a really obvious and great decision! I have also subsequently turned down several opportunities to go back to being a manager.
I also think avoiding my commute because of working remotely during covid has helped.
It took way longer than it should have for me to realise that stress was the likely culprit. I think I was convinced that there was something “really wrong”, i.e. some more medical explanation. I don’t think I fully appreciated the effects that stress can have on the body, particularly when it builds up over a very long time. Another lesson is that it can take almost as long to unwind and undo those effects; sometimes a two-week break will not be enough, and it requires a more permanent change in role or lifestyle.
Additionally I found that this book helped change my attitude on some things: Stress-related Illness: Advice for People Who Give Too Much