I am a research engineer working on AI safety at DeepMind. Formerly working at Improbable on simulations for decision making
I’m interested in AGI safety, complexity science, software engineering, models and simulations.
I just wrote a relevant forum post on how simulation models / agent-based models could be highly impactful for pandemic preparedness: https://forum.effectivealtruism.org/posts/2hTDF62hfHAPpJDvk/simulation-models-could-help-prepare-for-the-next-pandemic
A crucial aspect of this is better software tools for building large scale simulations, so I would say this is a large opportunity for someone who wants to work in software engineering.
Even just working as a research engineer in an existing academic group building epidemiological models would be impactful in my opinion. The role of research engineer within academia is quite neglected because it tends to pay less than equivalent industry jobs.
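To make this concrete, here is a minimal sketch of the kind of agent-based epidemic model I mean: a toy SIR model with random mixing. All parameter values are illustrative, not calibrated to any real disease.

```python
import random

def run_sir(n_agents=500, n_initial=5, p_transmit=0.05, contacts_per_step=10,
            recovery_steps=14, n_steps=100, seed=0):
    """Toy agent-based SIR epidemic with random mixing."""
    rng = random.Random(seed)
    state = ['S'] * n_agents  # 'S' susceptible, 'I' infected, 'R' recovered
    infected_at = {}
    for i in rng.sample(range(n_agents), n_initial):
        state[i] = 'I'
        infected_at[i] = 0
    history = []
    for step in range(n_steps):
        infectious = [i for i in range(n_agents) if state[i] == 'I']
        for i in infectious:
            # each infectious agent meets a few random others
            for j in rng.choices(range(n_agents), k=contacts_per_step):
                if state[j] == 'S' and rng.random() < p_transmit:
                    state[j] = 'I'
                    infected_at[j] = step
        for i in infectious:
            if step - infected_at[i] >= recovery_steps:
                state[i] = 'R'
        history.append((state.count('S'), state.count('I'), state.count('R')))
    return history

s, i, r = run_sir()[-1]
print(f"after 100 steps: S={s} I={i} R={r}")
```

The interesting engineering work starts when you replace the random mixing with realistic contact structure (households, schools, workplaces) and scale to millions of agents, which is exactly where better software tools are needed.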
Thanks for writing this, in my opinion the field of complex systems provides a useful and under-explored perspective and set of tools for AI safety. I particularly like the insights you provide in the “Complex Systems for AI Safety” section, for example that ideas in complex systems foreshadowed inner alignment / mesa-optimisation.
I’d be interested in your thoughts on modelling AGI governance as a complex system, for example race dynamics.
I previously wrote a forum post on how complex systems and simulation could be a useful tool in EA for improving institutional decision making, among other things: https://forum.effectivealtruism.org/posts/kWsRthSf6DCaqTaLS/what-complexity-science-and-simulation-have-to-offer
I can think of a few other areas of direct impact which could particularly benefit from talented software engineers:
Improving climate models is a potential route to high impact on climate change: there are computational modelling initiatives such as the Climate Modeling Alliance, and startups such as Cervest. It would also be valuable to contribute to open source computational tools such as the Julia programming language and certain Python libraries.
There is also the area of computer simulations for organisational / government decision making, such as Improbable Defence (disclosure: I am a former employee and current shareholder), Simudyne and Hash.ai. I’ve heard anecdotally that a few employees of Hash.ai are sympathetic to EA, but I don’t have first hand evidence of this.
More broadly there are many areas of academic research, not just AI safety, which could benefit from more research software engineers. The Society of Research Software Engineering aims to provide a community for research engineers and to make this a more established career path. This type of work in academia tends to pay significantly less than private sector software jobs, so it is worse for ETG, but on the flip side this is an argument for it being a relatively neglected opportunity.
Is there a list of the ideas that the fellows were working on? I’d be curious.
It’s not surprising to me that there aren’t many “product focused” traditional startup style ideas in the longtermist space, but what does that leave? Are most of the potential organisations research focused? Or are there some other classes of organisation that could be founded? (Maybe this is a lack of imagination on my part!)
Very useful to know, thanks for the context!
Congratulations, this is really great to hear, and seems like a fantastic opportunity!
Out of interest, what was the sequence of events? Did you already have a PhD program lined up when you applied for funding, or are you going to apply for one now that you have the funding? Also, had you already discussed this with your current employer before applying for funding?
I only ask because I have been considering attempting to do something similar!
This is a good point, although I suppose you could still think of this in the framing of “just in time learning”, i.e. you can attempt a deep RL project, realise you are hopelessly out of your depth, then you know you’d better go through Spinning Up in Deep RL before you can continue.
Although the risk is that it may be demoralising to start something which is too far outside of your comfort zone.
I massively agree with the idea of “just do a project”, particularly since it’s a better way of practising the type of research skills (like prioritisation and project management) that you will need to be a successful researcher.
I suppose the challenge may be choosing a topic for your project, but reaching out to others in the community may be one good avenue for harvesting project ideas.
What are your thoughts on re-implementing existing papers? It can be a good way to develop technical skills, and maybe a middle ground between learning pre-requisites and doing your own research project? Or would you say it’s better to just go for your own project?
These links are excellent! I hadn’t come across these before, but I am really excited about the idea of using roleplay and table top games as a way of generating insight and getting people to think through problems. It’s great to see this being applied to AI scenarios.
@djbinder Thanks for taking the time to write these comments. No need to worry about being negative, this is exactly the sort of healthy debate that I want to see around this subject.
I think you make a lot of fair points, and it’s great to have these insights from someone with a background in theoretical physics. However I would still disagree slightly on some of them; I will try to explain myself below.
I don’t think the only meaningful definition of complex systems is that they aren’t amenable to mathematical analysis; that is perhaps a common feature, but it isn’t always true. I would say the main hallmark is a surprising level of sophisticated behaviour arising from apparently simple rules at the level of the individual components that make up the system, and that such systems can be a challenge to manage and predict.
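A classic toy example of this is Schelling’s segregation model: agents with only a mild preference for similar neighbours produce strongly segregated patterns at the global level. Here is a minimal sketch (grid size, threshold, and other parameters are illustrative):

```python
import random

def schelling(size=20, empty_frac=0.1, threshold=0.3, max_sweeps=100, seed=1):
    """Schelling's segregation model on a torus: two types of agent; an agent
    is unhappy, and moves to a random empty cell, if fewer than `threshold`
    of its occupied neighbours share its type."""
    rng = random.Random(seed)
    n_agents = int(size * size * (1 - empty_frac) / 2) * 2
    cells = [0, 1] * (n_agents // 2) + [None] * (size * size - n_agents)
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def neighbours(x, y):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield grid[(x + dx) % size][(y + dy) % size]

    def same_fraction(x, y):
        occ = [n for n in neighbours(x, y) if n is not None]
        if not occ:
            return 1.0
        return sum(1 for n in occ if n == grid[x][y]) / len(occ)

    def mean_same_fraction():
        fracs = [same_fraction(x, y) for x in range(size) for y in range(size)
                 if grid[x][y] is not None]
        return sum(fracs) / len(fracs)

    before = mean_same_fraction()
    for _ in range(max_sweeps):
        unhappy = [(x, y) for x in range(size) for y in range(size)
                   if grid[x][y] is not None and same_fraction(x, y) < threshold]
        if not unhappy:
            break
        for x, y in unhappy:
            if grid[x][y] is None or same_fraction(x, y) >= threshold:
                continue  # situation may have changed earlier in this sweep
            empties = [(i, j) for i in range(size) for j in range(size)
                       if grid[i][j] is None]
            ex, ey = rng.choice(empties)
            grid[ex][ey], grid[x][y] = grid[x][y], None
    return before, mean_same_fraction()

before, after = schelling()
print(f"mean same-type neighbour fraction: {before:.2f} -> {after:.2f}")
```

Even though no individual agent wants a segregated neighbourhood (a 30% preference is enough to make them happy), the average same-type neighbour fraction climbs well above the random baseline, which is the kind of macro-level surprise I mean.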
It is true that the terms “complexity” and “emergence” are not formally defined, which perhaps means they end up being used in an overly broad way. The area of complexity science has also been a bit prone to hype. I myself have felt uncomfortable with the term “emergence” at times (it is maybe still a bit vague for my tastes), but I have landed on the opinion that it is a good way to recognise certain properties of a system and categorise different systems. I agree with Eliezer Yudkowsky’s point that it isn’t a sufficient explanation of behaviour, but it is still a relevant aspect of a system to look for, and can shape expectations. The aspiration of complexity science is to provide more formal definitions of these terms, so I do agree that there is more work to do to refine them. However, just because these terms can’t yet be formally or mathematically defined doesn’t mean they have no place in science. The same is true of words like “meaning” and “consciousness”, which are still important concepts.
I think the main point of disagreement is whether “complexity science” is a useful umbrella term. I agree that plenty of valuable interdisciplinary work applying ideas from physics to social sciences is done without reference to “complexity” or “complex systems”. However, by highlighting common themes between these different areas, I think complexity science has promoted a lot more interdisciplinary work than would have been done otherwise.

With the review paper you linked, I would be surprised if many of the authors didn’t have some connection to complexity science or SFI at some point. In fact one of the authors is director of a lab called the “Center for Complex Networks and Systems Research”. Even Steven Strogatz, whose textbook you mentioned, was an external SFI professor for a while! Although it’s true that his affiliation doesn’t mean complexity science can take credit for all his prior work. Most complexity scientists do not typically mention complexity or emergence much in their published papers; the papers just look like rigorous work in a specific domain. The flip side of this is that it perhaps casts doubt on the utility of these terms, as you argued. But I would say that this framing of the problem (“complex systems” in different domains having underlying features in common) has helped to motivate and initiate a lot of this work.

The area of complexity economics is a great example. Economics has always borrowed ideas from physics (all the way back to Walrasian equilibrium), but this process had stalled somewhat in the latter half of the 20th century. Complexity science has injected a lot of new and valuable ideas into economics, and I would say this comes from the idea of framing the economy as a complex system, not just because SFI got the right people in the same room together (although that is a necessary part).
Perhaps I am just less optimistic than you about how easy it is to do good interdisciplinary work, and how much of this would happen organically in this area without a dedicated movement towards this. I maintain that complexity science is a good way to encourage researchers to push into problem areas that are less amenable to reductionism or mathematical analysis, since this is often very difficult and risky.
Anyway, the main reason I wanted to write this blog post is not so that EA people go around waxing lyrical with words like “complexity” and “emergence” all the time, but to point to complexity science as an example of a successful interdisciplinary movement which EA can maybe learn from (even just from a public relations point of view), and also to look at some of the tools from complexity science (e.g. ABMs) and suggest that these might be useful. @Venkatesh makes a good point that my main recommendation here is that ABMs may be useful to apply to EA cause areas, so perhaps I should have separated that bit out into a separate forum post.
I would add the New England Complex Systems Institute, particularly Yaneer Bar Yam: https://necsi.edu/corona-virus-pandemic
In this article from January 2020, which has aged very well, they were advocating for restrictions on international movement and warning of the effect of superspreader events on estimates of R0.
Yaneer Bar Yam also started this multidisciplinary effort to tackle covid: https://www.endcoronavirus.org/
Hey Alex, thanks for writing this, loads of useful advice in here that I want to try!
I have had similar (but seemingly milder) problems with low energy, where I just felt very lethargic and drained periodically (about once a month). I would compare it to how you feel on the first day of coming down with a flu or cold: low energy and mild muscle aches. I went to the doctor and had a similar story to you; they ran some blood tests, found nothing wrong, and that was it.
The answer: it was almost definitely stress. I was in a management position at work, and I think I was kidding myself about the stress because I wasn’t working super long hours or anything like that, but what really made it so bad was the constant uncertainty and chaos. I was working at a startup that was going through constant re-organisations and strategy pivots, which really took its toll after a while. It was made much worse by the fact that I was a manager and felt responsible for shielding my team from this. This is all to say that people will have varying levels of resilience to different types of stress; for me, uncertainty and being responsible for others is difficult, but I am quite resilient to other situations that a lot of people find stressful (for example tight deadlines or public speaking).
The solution was to change my role away from being a manager and into an individual contributor role in a research team. It took quite a long time for recovery but it’s been about a year and a half now and the situation is much better. It felt like a very difficult decision at the time (because I was really stressed and this was affecting my decision making) but in retrospect it was a really obvious and great decision! I have also subsequently turned down several opportunities to go back to being a manager.
I also think avoiding my commute because of working remotely during covid has helped.
It took way longer than it should have for me to realise that stress was the likely culprit. I think I was convinced that there was something “really wrong”, like some sort of more medical explanation. I don’t think I fully appreciated the effects that stress can have on the body, particularly when it builds up over a very long time. Another lesson is that it can take almost as long to unwind and undo those effects; sometimes a 2 week break will not be enough, and it requires a more permanent change in role or lifestyle.
Additionally I found that this book helped change my attitude on some things: Stress-related Illness: Advice for People Who Give Too Much
Hey Venkatesh, I am also really interested in Complexity Science, in fact I am going to publish a blog post on here soon about Complexity Science and how it relates to EA.
I’ve also read Bookstaber’s book, in fact Doyne Farmer has a similar book coming out soon which looks great, you can read the intro here.
I hadn’t heard of the Complexity Weekend event but it looks great, will check that out!
This is an interesting thought experiment and I like the specific framing of the question.
My initial thoughts are that this clearly would have been a good thing to try to work on, mainly because the 2008 financial crisis cost trillions of dollars and arguably also led to lots of bad political developments in the western world (e.g. setting the stage for Trumpism). If you buy the Tyler Cowen arguments that economic growth is very important for the long term, then that bolsters this case. However, a caveat would be that due to moral uncertainty it’s hard to actually know the long term consequences of such a large event.
Here are some other ways to think about this:
Neglectedness
As Ramiro mentions below, very few people were alert to the potential risk before the crisis, so an additional person thinking and advocating for this would have increased the proportion of people thinking about this by a lot.
Tractability
Even if you had predicted how the crisis would have unfolded and what caused it, could you have actually done anything about it? Would you just ring up the federal reserve and tell them to change their policy? Would anyone have listened to you as a random EA person? This is potentially the biggest problem in my view.
Other benefits—better economic modelling
I am a strong believer in alternative approaches to economic modelling which can take into account things like financial contagion (such as agent-based models), so a potential benefit of working on this type of thing before the crisis is that you might have developed and promoted these techniques further, and these tools could help with other economic problems. In my view this is still a valuable and neglected thing to work on.
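To illustrate the kind of mechanism I mean, here is a toy sketch of default contagion on an interbank network. The banks, balance sheets and exposures are entirely made up for illustration; real models of this (e.g. DebtRank-style analyses) are far richer.

```python
# Each bank has a capital buffer; banks are linked by interbank loans.
# When a bank fails, its creditors write off their loans to it; any
# creditor whose cumulative losses reach its capital also fails,
# and the cascade continues.

def contagion_cascade(capital, exposures, initial_failures):
    """Return the set of failed banks after the default cascade settles.

    capital: {bank: capital buffer}
    exposures: {(creditor, debtor): loan amount lost if debtor fails}
    """
    failed = set(initial_failures)
    losses = {bank: 0.0 for bank in capital}
    frontier = list(initial_failures)
    while frontier:
        debtor = frontier.pop()
        for (creditor, d), amount in exposures.items():
            if d == debtor and creditor not in failed:
                losses[creditor] += amount
                if losses[creditor] >= capital[creditor]:
                    failed.add(creditor)
                    frontier.append(creditor)
    return failed

# A toy 4-bank system: the failure of bank A wipes out B, whose failure
# in turn brings down C; D's buffer is large enough to absorb both hits.
capital = {"A": 10, "B": 5, "C": 4, "D": 20}
exposures = {("B", "A"): 6, ("C", "B"): 5, ("D", "C"): 8, ("D", "A"): 3}
print(sorted(contagion_cascade(capital, exposures, {"A"})))  # → ['A', 'B', 'C']
```

The point is that a shock to one institution can propagate through the network in a way that is invisible if you only model aggregates, which is precisely what standard equilibrium models before 2008 tended to do.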
Other benefits—reputation
An additional benefit of calling the alarm on the financial crisis before it happened is the reputational benefit. Even if no one listened to you at the time, you would be recognised as one of the few people who “foresaw the crisis”, and therefore your opinions might be given more weight, for example the people from The Big Short who are always wheeled out on TV to provide opinions. You could say “hey, I predicted the financial crisis, so maybe you should also listen to me about this other <<insert EA cause area here>> stuff”.
Hey JP, thanks for your question. Here are some questions that may be useful in your search, and may help other people provide you with more advice:
1. Do you have any criteria for what you consider a “job within EA”? There are many types of job which could be considered EA related, from jobs within EA organisations, to jobs which have a large potential impact but are not directly for EA orgs (for example working in certain government departments or private companies). It might be worth reframing how you think about this as “how can I find a job that has the biggest impact”, rather than “how can I get an EA job”.
2. Do you have a particular cause area that you care a lot about? That may help to focus your search, and help your CV stand out to any potential employers.
3. What would you consider to be your comparative advantage? Since you have worked as an economist and data scientist do you consider these technical skills to be your main strengths? Are you looking for a hands on technical job? Technical skills such as software engineering and data science are always in demand so this is worth bearing in mind.
I would add to this that it’s obviously worth checking out 80,000 hours careers advice if you haven’t already, they spend a lot more time thinking about this than me!
I wish you the best of luck in your job search!
Thanks for the great list of resources!
Coincidentally I just discovered the Jim Rutt Show podcast recently and I’ve been enjoying it.