Last August Stijn wrote a post on this subject titled The extreme cost-effectiveness of cell-based meat R&D. Let me quote the bottom line (emphasis mine):
This means one euro extra funding spares 100 vertebrate land animals. Including captured and aquaculture fish (also fish used for fish meal for farm animals), the number becomes an order 10 higher: 1000 vertebrate animals saved per euro....Used as carbon offsetting, cell-based meat R&D has a price around 0,1 euro per ton CO2e averted.
In addition, as I wrote in a comment, I also did a back-of-the-envelope guesstimate model to estimate the cost-effectiveness of donations to GFI, and arrived at $1.4 per ton CO2e (90% CI: $0.05-$5.42).
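A guesstimate of this kind can be sketched as a small Monte Carlo simulation over lognormal inputs. To be clear, everything below is illustrative only: the parameter ranges and the two inputs are made up for the sake of the example and are not the actual inputs of my model.

```python
import math
import random

random.seed(0)

def lognormal_from_ci(low, high):
    """Return a sampler for a lognormal distribution fit to a 90% CI (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    # 1.645 is the z-score for the 5th/95th percentiles of a normal distribution
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return lambda: random.lognormvariate(mu, sigma)

# Hypothetical inputs (NOT the real model's): dollars needed per year of
# R&D acceleration, and tons of CO2e averted per year of acceleration.
cost_per_year = lognormal_from_ci(1e7, 1e9)   # $ per year of speed-up
tons_averted = lognormal_from_ci(1e7, 1e9)    # tCO2e per year of speed-up

samples = sorted(cost_per_year() / tons_averted() for _ in range(100_000))
median = samples[len(samples) // 2]
ci_low = samples[int(0.05 * len(samples))]
ci_high = samples[int(0.95 * len(samples))]
print(f"median ${median:.2f}/tCO2e, 90% CI ${ci_low:.2f}-${ci_high:.2f}")
```

Propagating uncertain inputs this way is what produces a wide confidence interval around the point estimate, rather than a single number.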
It is important to mention that our methods are not nearly as thorough as the work done by Giving Green or Founders Pledge on climate change, and I wouldn’t take them too seriously. Nevertheless, I think they at least hint at the order of magnitude of the true numbers.
Edit: I just realized that Brian’s comment refers to a newer post by Stijn, which I assume reflects his broader opinions. However, I think that the discussion in the comments on Stijn’s older post that I linked to is also interesting to read.
Thanks for linking this, this looks really interesting!
If anyone is aware of other similar lists, or of more information about those fields and their importance (whether positive or negative), I would be interested in that.
Thanks for detailing your thoughts on these issues!
I’m glad to hear that you are aware of the different problems and tensions, and made informed decisions about them, and I look forward to seeing the changes you mentioned being implemented.
I want to add one comment about the How to plan your career article, even if it’s already been mentioned. I think it’s really great, but it might be a little too long for many readers’ first exposure. I just realized that you have a summary on the Career planning page, which is good, but I think it might be too short. I found the (older) How to make tough career decisions article very helpful, and I think it offers a great balance of information and length; I personally still refer people to it for their first exposure. I think it would be very useful to have a version of that page (i.e. of similar length) reflecting the process described in the new article.
With regards to longtermism (and expected values), I think that indeed I disagree with the views taken by most of 80,000 hours’ team, and that’s ok. I do wish you offered a more balanced take on these matters, and maybe even separate the parts which are pretty much a consensus in EA from more specific views you take so that people can make their own informed decisions, but I know that it might be too much to ask and the lines are very blurred in any case.
Thanks for publishing negative results.
I think that it is important to do so in general, and especially given that many other groups may have relied on your previous recommendations.
If possible, I think you should edit the previous post to reflect your new findings and link to this post.
Thanks to Aaron for updating us, and thanks guzey for adding the clarification at the top of the post.
Thank you for writing this post Brian. I appreciate your choices and would be interested to hear in the future (say in a year, and even after that) how things worked out, how excited you are about your work, and whether you are able to sustain this financially.
I also appreciate the fact that you took the time to explicitly write those caveats.
I meant the difference between using the two, I don’t doubt that you understand the difference between autism and (lack of) leadership.
In any case, this was not my main point, which is that the word autistic in the title does not help your post in any way, and spreads misinformation.
I do find the rest of the post insightful, and I don’t think you are intentionally trying to start a controversy.
If you really believe that this helps your post, please explain why (you haven’t so far).
I don’t understand how you can seriously not see the difference between the two.
Autism is a developmental disorder, which manifests itself in many ways, most of which are completely irrelevant to your post.
Whereas being a “terrible leader”, as you call them, is a personal trait which does not resemble autism in almost any way.
Furthermore, the word autistic in the title is not only completely speculative, but also does not help your case at all.
I think that by using that term so explicitly in your title, you spread misinformation, and with no good reason.
I ask you to change the title, or let the forum moderators handle this situation.
Hey Arden, thanks for asking about that.
Let me start by also thanking you for all the good work you do at 80,000 Hours, and in particular for the various pieces you wrote that I linked to at 8. General Helpful Resources.
Regarding the key ideas vs old career guide, I have several thoughts which I have written below.
Because 80,000 Hours’ content is so central to EA, I think that this discussion is extremely important.
I would love to hear your thoughts about this Arden, and I will be glad if others could share their views as well, or even have a separate discussion somewhere else just about this topic.
I think that two important aspects of the old career guide are much less emphasized in the key ideas page:
the first is general advice on how to have a successful career, and the second is how to make a plan and get a job.
Generally speaking, I felt like the old career guide gave more tools to the reader, rather than only information.
Of course, the key ideas page also discusses these issues to some extent, but much less so than the previous career guide.
I think this was very good career advice that could potentially have a large effect on your readers’ careers.
Another important point is that I don’t like, and disagree with the choice of, the emphasis on longtermism and AI safety.
Personally, I am not completely persuaded by the arguments for choosing a career by a longtermist view, and even less by the arguments for AI safety.
More importantly, I had several conversations with people in the Israeli EA community and with people I gave career consultation to, who were alienated by this emphasis.
A minority of them felt like me, and the majority understood it as “all you can meaningfully do in EA is AI safety”, which was very discouraging for them.
I understand that this is not your only focus, but people whose first exposure to your website is the key ideas page might get that feeling, if they are not explicitly told otherwise.
Another point is that the “Global priorities” section takes a completely top-down approach.
I do agree that it is sometimes a good approach, but I think that many times it is not.
One reason is the tension between opportunities and cause areas which I already wrote about.
The other is that some people might already have their career going, or are particularly interested in a specific path.
In these situations, while it is true that they can change their careers or discover that they would enjoy a broader set of paths, reading about rethinking all of your basic choices is somewhat irrelevant and discouraging.
Instead, in these situations it would be much better to help people optimize their current path towards more important goals.
Just to give an example, someone who studies law might get the impression that their choice is wrong and not beneficial, while I believe that if they tried they could find highly impactful opportunities (for example, the recently established Legal Priorities Project looks very promising).
I think that these are my major points, but I do have some other smaller reservations about the content (for example I disagree with the principle of maximizing expected value, and definitely don’t think that this is the way it should be phrased as part of the “the big picture”).
I really liked the structure of the previous career guide.
It was very straightforward to know what you are about to read and where you can find something, since it was so clearly separated into different pages with clear titles and summaries.
Furthermore, its modularity made it very easy to read the parts you are interested in.
The key ideas page is much more convoluted: it is very hard to navigate, and the expandable boxes don’t make it any easier.
Thanks for spelling out your thoughts, these are good points and questions!
With regard to potentially impactful problems in health:
First, you mentioned anti-aging, and I wish to emphasize that I didn’t try to assess it at any point (I am saying this because I recently wrote a post linking to a new Nature journal dedicated to anti-aging).
Second, I feel that I am still too new to this domain to really have anything serious to say, and I hope to learn more myself as I progress in my PhD and work at KSM institute.
That said, my impression (which is mostly based on conversations with my new advisor) is that there are many areas in health which are much more neglected compared to others, and in particular receive much less attention from the AI and ML community. From my very limited experience, it seems to me that AI and ML techniques are just starting to be applied to problems in public health and related fields, at least in research institutes outside of the for-profit startup scene.
I wish I had something more specific to say, and hopefully I will in a year or two from now.
I completely agree with your view on AI for good being “a robustly good career path in many ways”. I would like to mention once more, though, that in order to have a really large impact in it, one needs to really optimize for that and avoid the trap of lower counterfactual impact (at least in the later stages of a career, after one has enough experience and credentials).
It is very hard for me to say where the highest-impact positions are, and this is somewhat related to the view that I express in the subsection Opportunities and Cause Areas.
I imagine that the best opportunities for someone in this field highly depend on their location, connections and experience.
For example, in my case, joining the flood prediction efforts at Google and the computational healthcare PhD seemed significantly better options than the next-best options in the AI and ML world.
With regard to entering the field, I am super new to this, so I can’t really answer. In any case, I think that entering the fields of AI, ML and data science is no different for people in EA than for others, so I would follow the general recommendations.
In my situation, I had enough other credentials (background in math and in programming/cyber-security) to make people believe that I could become productive in ML after a relatively short time (though at least one place did reject me for not having a background in ML), so I jumped right into working on real-world problems rather than dedicating time to studying.
As to estimating the impact of a specific role or project, I think it is sometimes fairly straightforward (when the problem is well-defined and the probabilities are fairly high, you can “just do the math” [don’t forget to account for counterfactuals!]), while in other cases it might be difficult (for example, more basic research or work with more indirect effects).
In the latter case, I think it is helpful to have a rough estimate—understand how large the scope is (how many people have a certain disease or die from it every year?), figure out who is working on the problem and which techniques they use, try to estimate how much of the problem you imagine you can solve (e.g. can we eliminate the disease? [probably not.] how many people can we realistically reach? how expensive is the solution going to be?).
All of this together can help you figure out the orders of magnitude you are talking about. Let me give a very rough example of an outcome of these estimates:
A project will take roughly 1-3 years, seems likely to succeed, and if successful, will significantly improve the lives of 200-800 people suffering from some disease every year, and there’s only one other team working on the exact same problem. This sounds great! Changing the variables a little might make it seem much less attractive, for example if only 4 people will be able to pay for the solution (or suffer from the disease to begin with), or if there are 15 other teams working on exactly the same problem, in which case your impact will probably be much lower.
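To make the arithmetic concrete, here is a toy version of that kind of estimate. Every number below is hypothetical and only mirrors the example above; in particular the success probability, benefit horizon, and the 1/n counterfactual discount are simplifying assumptions, not a recipe.

```python
# Toy "just do the math" impact estimate; all inputs are hypothetical.
p_success = 0.7                           # rough chance the project succeeds
people_helped_per_year = (200 + 800) / 2  # midpoint of the 200-800 range
years_of_benefit = 10                     # assumed horizon of the solution
n_teams = 2                               # you plus one other team on the same problem

# Crude counterfactual discount: if another team would likely solve the
# problem anyway, credit yourself with roughly 1/n_teams of the outcome.
counterfactual_share = 1 / n_teams

expected_people = (p_success * people_helped_per_year
                   * years_of_benefit * counterfactual_share)
print(f"expected people helped: {expected_people:.0f}")
```

Rerunning the same arithmetic with 16 teams instead of 2, or with 4 beneficiaries instead of 500, shows immediately how the attractiveness of the project collapses, which is the point of the exercise.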
One can also imagine projects with lower chances of success, which if successful will have a much larger effect. I tend to be cautious in these cases, because I think that it is much easier to be wrong about small probabilities (I can say more about this).
Let me also mention that it is possible to work on multiple projects at the same time, or over a few years, especially if each one consists of several steps in which you gain more information and can re-evaluate along the way.
In such cases, you’d expect some of the projects to succeed, and you learn to calibrate your estimates over time.
Lastly, with regards to your description of my views, that’s almost right, except that I also see opportunities for high impact not only on particularly important problems but also on smaller problems which are neglected for some reason (e.g. things that are less prestigious or don’t have economic incentives).
I’d also add that at least in my case in computational healthcare I also intend to apply other techniques from computer science besides AI and ML (but that’s really a different story than AI for good).
This comment has already become way too long, so I will stop here.
I hope that it is somewhat useful, and, again, if someone wants me to write more about a specific aspect, I will gladly do so.
Thanks for your comment Michelle!
If you have any other comments to make on my process (both positive and negative), I think that would be very valuable for me and for other readers as well.
Important Edit: Everything I wrote below refers only to technical cyber-security (and formal verification) roles.
I don’t have strong views on whether governance, advocacy or other types of work related to those fields could be impactful. My intuition is that these are indeed more promising than technical roles.
I don’t see any particularly important problem that can be addressed using cyber-security or formal verification (now or in the future), which is not already being addressed by the private or public sector.
Surely these areas are important for the world, and therefore are utilized and researched outside of EA.
For example, (too) many cyber-security companies provide solutions for other organizations (including critical organizations such as hospitals and electricity providers) to protect their data and computer systems. Another example is governments using cyber-security tools for intelligence operations and surveillance. Both examples are obviously important, but not at all neglected.
One could argue that EA organizations need to protect their data and computer systems as well, which is definitely true, but can easily be solved by purchasing the appropriate products or hiring infosec officers, just like in any other organization.
Other than that I didn’t find any place where cyber-security can be meaningfully applied to assist EA goals.
As for formal verification, I believe that the case is similar—these kinds of tools are useful for certain (and very limited) problems in the software and hardware industry, but I am unaware of any interesting applications for EA causes.
One caveat is that I believe that it is plausible (but not very probable) that formal verification can be used for AI alignment, as I outlined in this comment.
My conclusion is that, right now, I wouldn’t recommend that people in EA build skills in any of these areas for the sake of having direct impact (of course, cyber-security is a great industry for EtG).
To convince me otherwise, someone would have to come up with a reasonable suggestion of where these tools could be applied.
If anyone has any such ideas (even rough ideas), I would love to hear them!
This is a very good question and I have some thoughts about it.
Let me begin by answering about my specific situation.
As I said, I have many years of experience in programming and cyber security. Given my background and connections (mostly from the army) it was fairly easy for me to find multiple companies I could work for as a contractor/part-time employee. In particular, in the past 3 years I have worked part-time in cyber security and had a lot of flexibility in my hours.
Furthermore, I am certain that it is also possible to find such positions in more standard software development areas. In fact, just before I finished high school, I took a part-time front-end development position in some Israeli startup.
As for other people, it is harder for me to say.
I imagine that it will not be so easy for someone who just graduated to find a high paying part-time job, but that highly depends on location, domain, and past experience.
Generally, I believe that this path mostly suits people who already have some experience in their field, or who are willing to work as freelancers and accept slower progress in this part of their careers. For example, it can work very well for people already pursuing (ordinary) EtG, or who are at later stages of their careers and want to switch to a different career path.
Edit—If this is something people are interested in, I can write a more detailed post about this idea specifically, where we can also have a longer discussion in the comments.
Thanks for cross-posting this, I probably wouldn’t have heard about it otherwise.
I am very interested in Open Phil’s model regarding the best time to donate for such causes. If anyone is aware of similar models for large donors, I would love to hear about them.
Thanks for sharing that, that sounds like an interesting plan.
A while ago I tried to think about potential ways to have a large impact via formal verification (after reading this post). I didn’t give it much attention, but it seems that neither I nor others see a case for this career path being highly impactful, though I’d love to be proven wrong. I would appreciate it if you could elaborate on your perspective on this.
I should probably mention that I couldn’t find a reference to formal verification at agent foundations (but I didn’t really read it), and Vanessa seemed to reference it as a tangential point, but I might be wrong about both.
I’m interested in formal verification from a purely mathematical point of view. That is, I think it’s important for math (but I don’t think that formalizing [mainstream] math is likely to be very impactful outside of math). Additionally, I am interested in ideas developed in homotopy type theory, because of their connections to homotopy theory, rather than because I think it is impactful.
With regards to FIRE, I myself still haven’t figured out how this fits with my donations. In any case, I think that giving money to beggars sums up to less than $5 per month in my case (and probably even less on average), but I guess that also depends on where you live etc.
I would like to reiterate Edo’s answer, and add my perspective.
First and foremost, I believe that one can follow EA perspectives (e.g. donate effectively) AND be kind and helpful to strangers, rather than OR (repeating an argument I made before in another context). In particular, I personally don’t record giving a couple of dollars in my donation sheet, and it does not affect my EA-related giving (at least not intentionally).
Additionally, these amounts constitute such a small fraction of my other spending that I don’t notice them financially. Despite that, I truly believe that being kind to strangers, giving a few coins, or trying to help in other ways can meaningfully help the other person (even if not as cost-effectively as donating to, say, GiveWell).
I don’t view this and my other donations as means to achieve the exact same goal, but rather as two distinct and non-competing ways to achieve the purpose of making the world better.
Thank you for following up and clarifying that.