How dependent is current AGI safety work on deep RL? Recently there has been a lot of emphasis on advances in deep RL (and ML more generally), so it would be interesting to know what the implications would be if it turns out that this particular paradigm cannot lead to AGI.
In general, I don’t spend much time worrying about “neglectedness” in the ITN framework. I think that is because while many important problems are not neglected in an absolute sense (e.g. climate change), it is still the case that some solutions will be much more effective than others within each problem area. Therefore, one may have a large impact on less neglected problems by simply focusing on more effective solutions.
Don’t try to wake up and save the world. Don’t be bycatch. Take 15 years and become a domain expert. Take a career and become a macrostrategy expert. Mentor. Run small and non-EA projects. Circle back to EA periodically with your newfound skills and see what a difference you can make then. There is absolutely no way we can have a longtermist movement if we can’t be longtermist about our own lives and careers. But if we can, then we can.
Many social movements I have been a part of (political, sports, religious etc.) have a sort of “more is more” aspect to them that I see a lot in EA. The basic idea is this: it is fine if you just want to try things out, but the more involved you are in the organization, the more events you go to, the more you align your life around the mission of the organization, the better. In large part, this is why I have often experienced “organizational burnout”: when, due to changing circumstances, I cannot or do not want to be as involved anymore, it is often easier to quit altogether than to scale things back. With most of my commitments these days I try to follow a “less is more” approach, aiming to capture roughly 10% of the scope of an organization or movement. The 10% scope of EA seems to be the idea that we should consider how important a problem is when choosing a career or where to donate our funds. If we get more involved than this, there is a danger of not being able to sustain it over a long time. The other advantage of being minimally involved is that we can join a larger number of organizations and get a diverse and rich set of benefits that exceed what any one organization can provide.
PhDs get a lot of negative hype these days, so much that I wonder whether they are underrated as a viable career step. I am just coming out of a PhD programme in the UK, and while I didn’t always enjoy my topic nor want to continue research in my field, I still think it was overall a positive experience. It is important to realize that most people starting PhDs are well aware of the low chances of becoming a professor, but luckily there are still many good career options outside academia (though I do think all the negative hype around academic careers probably means that academia, too, is underrated relative to its true value). Some of the good aspects of (science) PhDs are:
- Lots of flexibility. You are basically guaranteed an income for ~4-5 years that you cannot lose no matter how poorly you do. While your supervisor certainly has some influence over what you do during this time, you are surprisingly free to work on what you want.
- High potential for growth. You get to experience just how difficult it is to become a world expert in something, albeit something very niche, because most often you are the first to attempt your line of research. Learning what has been done before in a field, finding a viable approach, and overcoming unexpected setbacks are very transferable skills.
- This is a bit more speculative, but I’d reckon that you have higher chances of landing an intellectually stimulating career with a PhD compared to a Master’s degree, at least in fields where credentials matter. I’d also reckon that with a PhD plus a few years’ experience you are likely to get promoted to leadership positions faster.
- Many fields EAs care about (nuclear security, biosafety, AI etc.) are very science-heavy fields where having a PhD is useful. For example, with a Physics PhD it would be relatively easy to get a job in any of these, since your skillset is adaptable to most analytic fields.
Thanks for sharing your perspective! It seems like having a family in major metropolitan areas is especially challenging due to the much higher housing costs. I am wondering if you have any examples of the types of jobs (or, alternatively, salary levels) on which you think it would be difficult to afford raising a family in London? For example, it seems that a civil servant could earn £40,000 per year after a few years of experience, and I suspect other sectors where EAs would want to work might pay a similar amount (academia, NGOs etc.).
Regarding having lots of time, it is true that being a stay-at-home parent leads to a substantial loss of income. What I was wondering was more along the lines of: is it worth trying to earn, say, £80,000+ per year working 80+ hours/week in finance just to be able to afford a larger house, when, say, a civil servant would have a fixed 40-hour working week and free weekends but earn half as much? In terms of income vs time, my intuition is that time is more valuable than income when having children, even if it means spending less on housing.
How much money is required to raise a family?
A big part of many people’s motivation for earning a high income seems to be the perception that it is necessary in order to raise a family. Many EA-aligned jobs are in the public or NGO sector and pay less than what people could earn in the private sector, and since close to 80% of people have children, this could be a big factor pushing people to give up on an EA-aligned career.
I am wondering whether this reasoning is valid, and where the extra cost of children comes from. In most western countries, high-quality education is freely available to everyone, healthcare is either free or subsidized, and children generally receive lots of support from the government (free dentist appointments, free/subsidized school meals, student loans etc.). Food is a small part of the monthly budget and shouldn’t be a big factor, and clothes can be obtained from low-cost outlets or second-hand. That leaves more expensive budget items such as a big house, a large car, vacation money etc. However, these might just be luxuries that make a relatively small difference in how “successful” one is at raising a family, compared to other things such as having lots of time.
It would be really interesting to see some analysis on how the decision about whether to have children or not should impact one’s career planning, especially for those considering EA-aligned career options.
Most human effort is being wasted on endeavors with no abiding value.
Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)
Things certainly feel very doom-and-gloom right now, but I still think there is scope for optimism in the current moment. If I had been asked in February last year what the best and worst outcomes of the pandemic would be a year later, I would probably have guessed a whole lot worse than what turned out to be the case. I also don’t think that we are living in some special age of incompetent governance right now; throughout history we have come up with policies that have been disastrously wrong one way or the other. Competence has appeared elsewhere—as Tyler Cowen has argued, businesses seem unusually competent in the current crisis compared to governments. Where would we have been without supermarkets’ supply chains, Amazon, Pfizer, Zoom etc. during the pandemic? According to this article there are more reasons to be optimistic than pessimistic right now:
As people lose jobs and income, many go hungry. Projections from the Food and Agricultural Organization point to an increase in the global number of chronically undernourished from 8.9 to around 9.9 per cent. A terrible outcome, but it still represents a reduction by a quarter since 2000.
It took mankind 3,000 years to develop a vaccine against polio and smallpox. Moderna designed a vaccine against Covid-19 in two days. Had we faced this new coronavirus in 2005, we would not have had the technology to even imagine such mRNA vaccines, if it had appeared in 1975 we would not have the ability to read the genome of the virus, if it came in 1950, we would not have had a single ventilator on the planet.
[T]he progress of the last few decades has been so fast, and human creativity under duress so impressive, that even major setbacks only pushes us back a few years. Only three years in history have been better in terms of GDP per capita, extreme poverty and child mortality – 2017, 2018 and 2019.
This is terrible news. I also did not know Tommy, but my heart really goes out to his friends and family. Reading the statement, I was touched by his strength of character—he appears to have been extremely gifted, loving and humble, with a genuine interest in helping others and the world. We can all surely find great inspiration in the exemplary life Tommy led.
This is also a stark reminder that no matter how outwardly successful or happy we might appear, we all carry our share of troubles and negative feelings; doubting ourselves and our worthiness, often trying to hide our sadness and loneliness out of shame. Just like Tommy, we are all merely human, we all have flaws and imperfections. We are not alone in our struggles.
For anyone who might be feeling overwhelmed and unable to cope with their negative feelings, or who is just interested in more mental health resources, I would like to recommend The Feeling Good Podcast (available with most podcast apps) with David Burns (author of the book Feeling Good, though the podcast is much better). There are episodes available on suicide prevention, loneliness, perfectionism, feelings of worthlessness, COVID-19 and much more. It has drastically improved my own mood and that of people I know who have listened to it, and I believe it might help others as well.
Yes, I agree that when we are trying to maximise the amount of good we do with limited resources, these local charities are not likely to be a good target for donations. However, as you mention, EA is different from utilitarianism because we don’t believe everyone should use all or most of their resources to do as much good as possible.
So when we spend money on ourselves or others for reasons other than trying to maximise the good this might also include donations to local causes. It seems inconsistent to say that we can spend money on whatever we want for ourselves, but if we choose to spend money on others, it can’t be for those in our community.
My point was therefore about communication: it’s not correct to say that EAs should never donate to local causes, when what we mean is that donating to local causes is unlikely to bring about the most good (but people might have other reasons for doing so anyway).
“Most supporters of EA don’t tell people not to go out to nice restaurants and get gourmet food for themselves, or not to go to the opera, or not to support local organizations they are involved with or wish to support, including the arts.”
Thanks, I agree with this statement! However, Halstead’s comment said
“I just think it is true that EAs shouldn’t donate to their local opera house, pet sanctuary, homeless shelter or to their private school, and that is what makes EA distinctive.”
I think it would be good to be clearer in our communication and say that we don’t consider local opera houses, pet sanctuaries, homeless shelters, or private schools to be good cause areas, but that there might be other good reasons to donate to them. For example, maybe you like opera and want to help your local opera house survive the pandemic, or you got a new dog from a pet sanctuary and want to donate some money in return, or perhaps your kids’ private school is fundraising for scholarships for disadvantaged students and you want to contribute. In my view, the claim EA is making isn’t that we shouldn’t donate to these places, just as it isn’t telling us not to buy a car or go to restaurants, but that our earmarked “EA budget” should be spent on the causes that do the most good.
“I just think it is true that EAs shouldn’t donate to their local opera house, pet sanctuary, homeless shelter or to their private school”
This is a very minor point, but I don’t quite understand what EA has against cultural establishments like opera houses and museums. Of course, counting the number of lives saved, one shouldn’t donate to museums, but that misses the point that these institutions might be offering free or discounted tickets in exchange for charitable donations. If they switched over to everyone paying full price they would probably still earn similar revenue, but it would be an objectively worse situation since fewer people would get the chance to visit.
I think lots of people can relate to this sentiment!
I could recommend having a look at Escape the City which provides a list of career opportunities for mid-career professionals wanting more social impact in their work: https://www.escapethecity.org/
If you are interested in short or long term volunteering with your tech skills, I can recommend a number of organisations that provide ample opportunities for this in the UK:
https://techforuk.com/
“Tech For UK aims to enable people to transform British democracy through technology and digital media that impacts the systems not just the symptoms of its problems.”

https://democracyclub.org.uk/
“We build digital tools to support everyone’s participation in UK democracy. Our services are trusted by organisations in government, charities and the media, and have reached millions of people since 2015.”

http://md4sg.com/
“Mechanism Design for Social Good (MD4SG) is a multi-institutional initiative using techniques from algorithms, optimization, and mechanism design, along with insights from other disciplines, to improve access to opportunity for historically underserved and disadvantaged communities. Members of MD4SG include researchers from computer science, economics, operations research, public policy, sociology, humanistic studies, and other disciplines as well as domain experts working in non-profit organizations, municipalities, and companies.”
I would also highlight the contribution towards creating an educational platform that extends beyond the immediate participants in the course. I believe most of the talks are available on Youtube: https://www.youtube.com/channel/UCR4WNZP7Uxfe4F1XNugu5_g
A great resource!
I actually think that deference to expertise and avoiding accidental harm are good principles, and that we should continue using them. However, in EA the barrier to being seen as an expert is very low—often it’s enough to have written a blog or forum post on something, having invested less than 100 hours in total. For me, an expert is someone who has spent the better part of his or her career working in a field, for example climate policy. While I think the former is still useful as an introduction to a field, the latter form of expertise has been somewhat undervalued in EA.
Hi Michael, thanks for your reply!
I agree with everything you are saying, and I did not mean to imply that people should not consider working at explicit EA organisations. Indeed, I would also be interested in working at one of them at some point!
The point I wanted to make is that the goal of “getting a job at an EA organisation” is in itself a near-term career goal, since it does not answer many of the questions that choosing a career entails, many of which have been highlighted in the post above as well as by 80,000 Hours. I am thinking of questions like:
How do I choose a field where I would both enjoy the work and have an impact?
How do I avoid significant negatives that would stop me having a meaningful and happy life and career?
How do I build the skills that make me attractive in the field I want to work in?

Of course, we’ll never get everything right, but this is a more nuanced view than focussing all your efforts on getting a job at an EA organisation. I would also like to see more discussions of “hybrid” careers, where one, for example, builds a career as an expert in the Civil Service and then joins an EA organisation or acts as an advisor during a one-year break to exchange experiences.
Thanks for writing and sharing your insights! I think the whole EA community would be a lot healthier if people had a much more limited view of EA, striving to take jobs that have positive impact in the long run, rather than focussing on the much shorter-term goal of taking jobs at high-profile EA organisations, often at great personal expense.
I agree with most of the benefits, but think that the “employees may freely choose to leave” part may be somewhat contentious. People need money to survive, and one argument that is often brought forward is that Amazon has driven a lot of smaller businesses out of the market, so that employees may not have that many choices of where to work any more.
Great post! I’ve also experienced similar things during my time with EA. I think there are several ways to approach the issue of self-worth:
It’s important to realize that EA is not the same as utilitarianism and therefore does not suffer from the problem of demandingness (this is also discussed in the latest 80K podcast with Benjamin Todd). EA does not prescribe how much of our resources we should share, only that the ones we do share should be distributed in an effective way.
Unfortunately there is a tendency in EA to undervalue “small” contributions (e.g. those made by care workers, nurses, GPs etc.). I think we need to realize that every contribution people make to the common good is valuable, no matter how small. I don’t think that someone who saves less than one life in expectation should feel any worse than someone who saves thousands or millions of lives. In any case, I wouldn’t go around telling people that they should feel worthless if they are not working on something super important for humanity (if that were the case, we’d need to reach more than 99% of humans on Earth and tell them that they are worthless). This is clearly an absurd position, so why should we be telling ourselves that?
Are emergencies different from non-emergencies? A new paper ( https://link.springer.com/article/10.1007/s11098-020-01566-0 ) argues that the obligation of saving a drowning child is different from the obligation of donating to effective charities in order to save a life. They claim that in emergencies where we can directly intervene to save a life, we are obliged as participants in an informal insurance scheme in society to intervene even at great cost to ourselves. Through this model they aim to explain the “common sense” moral intuition that it is worse to ignore a drowning child than to not donate $3000 to the Against Malaria Foundation. Overall an interesting read that may be of interest to EA-aligned folks.
Thank you for writing this well-argued post—I think it’s important to keep discussing exactly how big P(doom) is. However, and I say this as someone who believes that P(doom) is on the lower end, it would also be good to be clear about what the implications would be for EAs if P(doom) were low. It seems likely that many of the same recommendations—reduce spending on risky AI technologies and increase spending on AI safety—would still hold, at least until we get a clearer idea of the exact nature of AI risks.