Yep, the recommended orgs list on the 80,000 Hours Job Board (and the job board itself) is certainly not aiming to be comprehensive.
Niel_Bowerman
Thanks for writing up this post. I’m excited to see more software engineers and other folks with tech backgrounds moving into impactful roles.
Part of my role involves leading the 80,000 Hours Job Board. In case it’s helpful I wanted to mention that I don’t think of all of the roles on the job board as being directly impactful. Several tech roles are listed there primarily for career capital reasons, such as roles working on AI capabilities and cybersecurity. I’m keen for people to take these “career capital” roles so that in the future they can contribute more directly to making the development of powerful AI systems go well.
Thanks for sharing this data. Would it be possible to share the wording of a sample question, e.g. for 1:1s, and how the scoring scale was introduced?
I really enjoyed this post. I personally feel as though I don’t understand our users enough or have detailed enough models of how they are likely to react to our content, and so I appreciate write-ups like this.
FWIW, I found the Swapcard app to be a net improvement to my EAG experience. I found it easier to schedule meetings than my default approach of Google Sheets + Calendly links + emails. I wonder if part of it is that people seem more responsive on the app than via email?
Not trying to detract from Rohin’s experience. Just piping up in case it’s helpful. I also ran into a number of the issues that Rohin had, but just sighed and worked around them.
Disclaimer: I work for 80,000 Hours, which is fiscally sponsored by CEA, which runs EA Global.
My wife and I are currently allocating 10% of my income to “giving later”, investing the funds 100% in stocks in the interim.
We will likely make our regular donation to the donor lottery this year, which will come out of these funds. I would consider giving more to the donor lottery, but at first glance I am less excited about needing to put money into a DAF or equivalent if we win, because it is less flexible than money in an investment account.
If users have thoughts on the ideal vehicle to put “giving later” funds in, I would be interested to hear them. I currently feel good about it being fairly flexible, such that it could be spent on things that are not charities or 501c3s. I am currently keeping it in a fairly standard investment account.
Hey Jia, I haven’t done many online courses, but one that I did and enjoyed was the Coursera Deep Learning course with Andrew Ng. https://www.coursera.org/specializations/deep-learning
I think if you will be working on multi-agent RL and haven’t played around with deep learning models, you will likely find it helpful. You code up a Python model that gets increasingly complicated until it does things like attempting to identify a cat (if I’m remembering it correctly). It’s fairly ‘hands on’ but also somewhat accessible to people without a technical background.
Friends of mine starting out at both CSET and OpenAI worked through it and found it helpful to get context as they moved into their new roles.
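For anyone curious what that kind of exercise looks like, here is a minimal sketch in the same spirit (my own toy illustration, not the course’s actual code): logistic regression written as a one-neuron “network” in NumPy, trained by gradient descent on made-up features standing in for the cat/not-cat images the course uses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.1, steps=1000):
    """X: (n_features, n_examples); y: (1, n_examples) of 0/1 labels."""
    w = np.zeros((X.shape[0], 1))
    b = 0.0
    m = X.shape[1]
    for _ in range(steps):
        a = sigmoid(w.T @ X + b)       # forward pass
        dz = a - y                     # gradient of the cross-entropy loss
        w -= lr * (X @ dz.T) / m       # weight update
        b -= lr * float(dz.sum()) / m  # bias update
    return w, b

def predict(w, b, X):
    return (sigmoid(w.T @ X + b) > 0.5).astype(int)

# Tiny linearly separable toy data standing in for "cat vs. not-cat" features.
X = np.array([[0.0, 0.2, 0.8, 1.0],
              [0.1, 0.0, 0.9, 1.0]])
y = np.array([[0, 0, 1, 1]])
w, b = train(X, y)
print(predict(w, b, X))
```

The course builds this up layer by layer into a deep network; the sketch above is just the first rung of that ladder.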
This post is extremely helpful, and I have referred to it multiple times as I plan my finances. Thanks again for putting it together.
The importance of this and related topics is premised on humanity’s ability to achieve interstellar travel and settle other solar systems. Nick Beckstead did a shallow investigation into this question back in 2014, which didn’t find any knockdown arguments against its feasibility. Posting this here mainly as I haven’t seen some of these arguments discussed in the wider community much.
[Spitballing] I’m wondering if Angry Birds has just not been attempted by a major lab with sufficient compute resources? If you trained an agent like Agent57 or MuZero on Angry Birds, I am curious whether the agent would outperform humans.
Louis Dixon has written a helpful summary of this talk here. It also has some interesting discussion in the comments: https://forum.effectivealtruism.org/posts/NLJpMEST6pJhyq99S/notes-could-climate-change-make-earth-uninhabitable-for
This is one of the most thought-provoking (for me) posts that I’ve seen on the forum for a while. Thanks to you both for taking the time to put this together!
Thanks for flagging this. I think estimating temperature rise after burning all available fossil fuels is mostly educated guesswork. Both estimating the total amount of available fossil fuels and estimating the climate response to burning them are hard.
However, I hadn’t seen this Winkelmann et al. paper, which makes a valuable contribution. It suggests that the climate response is substantially sub-linear at higher levels of warming.
The notes that are currently posted above about how warm it would get if we burned all the fossil fuels were back-of-the-envelope calculations that I did in the slide notes, and I wouldn’t trust them much. They assume a linear model, which isn’t reliable at these temperatures. I didn’t end up including them in the talk as I didn’t think they were robust enough. I’ll ask Louis about removing them.
Thanks for flagging this Linch!
Great question. I’m afraid I only have a vague answer: I would guess that the chance of climate change directly making Earth uninhabitable in the next few centuries is much smaller than 1 in 10,000. (That’s ignoring the contribution of climate change to other risks.) I don’t know how likely the LHC is to cause a black hole, but I would speculate with little knowledge that the climate habitability risk is greater than that.
As I mentioned in the talk, I think there are other emerging tech risks that are more likely and more pressing than this. But I would also encourage more folks with a background in climate science to focus on these tail risks if they are excited by questions in this space.
What is your high-level take on social justice in relation to EA?
Hi Lauren, this is Niel from 80,000 Hours. We’ve already discussed this over email, but I’m excited that new organisations are being set up in this space. 80,000 Hours has limited resources and is not planning on increasing the amount we invest in improving our advice for animal advocates in the near term. I’m hopeful that Animal Advocacy Careers will be able to better serve the animal advocacy community than we can. Best of luck with the project!
In the current regime (i.e. for increases of less than ~4 degrees C), warming is roughly linear with cumulative carbon emissions (which is different from CO2 concentrations). Atmospheric forcing (the net energy flux at the top of the atmosphere due to changes in CO2 concentrations) is roughly logarithmic with CO2 concentrations.
How temperatures will change with cumulative carbon emissions at temperatures exceeding ~4 degrees C above pre-industrial is unknown, but will probably be somewhere between super-linear and logarithmic depending on what sorts of feedback mechanisms we end up seeing. I discuss this briefly at this point in the talk: https://youtu.be/xsQgDwXmsyg?t=520
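To make the logarithmic relationship concrete, here is a small sketch using the simplified CO2 forcing expression that is common in the IPCC literature, F ≈ 5.35 ln(C/C0) W/m² with C0 the pre-industrial concentration. (This approximation is something I’m bringing in for illustration, not a number from the talk.)

```python
import math

def co2_forcing(conc_ppm, preindustrial_ppm=280.0):
    """Approximate CO2 radiative forcing in W/m^2 (simplified log expression)."""
    return 5.35 * math.log(conc_ppm / preindustrial_ppm)

# Because forcing is logarithmic in concentration, each doubling adds the
# same ~3.7 W/m^2, rather than forcing growing linearly with concentration.
print(round(co2_forcing(560), 2))   # one doubling  -> 3.71
print(round(co2_forcing(1120), 2))  # two doublings -> 7.42
```

This is why concentrations and cumulative emissions behave so differently: forcing saturates logarithmically in concentration, while warming has historically tracked cumulative emissions roughly linearly.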
Btw, your link to FAO feedback on Indonesian broiler chickens leads to a discussion about Latvian egg-laying hens instead.
I think working on AI policy in an EU context is also likely to be valuable; however, few (if any) of the world’s very top AI companies are based in the EU (DeepMind being the exception, and it will soon be outside the EU after Brexit). Nonetheless, I think building more AI policy expertise within an EU context would be very helpful, and if you can contribute to that it could be very valuable. It’s worth mentioning that for UK citizens it might be better to focus on British AI policy.
I was meaning to say “3 or more minutes”.