Thanks for spelling out your thoughts, these are good points and questions!
With regards to potentially impactful problems in health:
First, you mentioned anti-aging, and I wish to emphasize that I didn’t try to assess it at any point (I am saying this because I recently wrote a post linking to a new Nature journal dedicated to anti-aging).
Second, I feel that I am still too new to this domain to really have anything serious to say, and I hope to learn more myself as I progress in my PhD and work at KSM institute.
That said, my impression (which is mostly based on conversations with my new advisor) is that there are many areas in health which are much more neglected than others, and in particular receive much less attention from the AI and ML community. From my very limited experience, it seems that AI and ML techniques are only just starting to be applied to problems in public health and related fields, at least in research institutes outside the for-profit startup scene.
I wish I had something more specific to say, and hopefully I will in a year or two.
I completely agree with your view on AI for good being “a robustly good career path in many ways”. I would like to mention once more, though, that in order to have a really large impact, one needs to actively optimize for it and avoid the trap of lower counterfactual impact (at least in the later stages of one’s career, after gaining enough experience and credentials).
It is very hard for me to say where the highest-impact positions are, and this is somewhat related to the view I express in the subsection Opportunities and Cause Areas.
I imagine that the best opportunities for someone in this field highly depend on their location, connections and experience.
For example, in my case it seemed that joining the flood-prediction efforts at Google, and the computational healthcare PhD, were significantly better options than the alternatives in the AI and ML world.
With regards to entering the field, I am super new to this, so I can’t really answer. In any case, I think that entering the fields of AI, ML and data science is no different for people in EA than for others, so I would follow the general recommendations.
In my situation, I had enough other credentials (a background in math and in programming/cyber-security) to make people believe that I could become productive in ML after a relatively short time (though at least one place did reject me for not having a background in ML), so I jumped right into working on real-world problems rather than dedicating time to studying.
As to estimating the impact of a specific role or project, I think it is sometimes fairly straightforward (when the problem is well defined and the probabilities are fairly high, you can “just do the math” [don’t forget to account for counterfactuals!]), while in other cases it might be difficult (for example, more basic research or things with more indirect effects).
In the latter case, I think it is helpful to make a rough estimate: understand how large the scope is (how many people have a certain disease or die from it every year?), figure out who is working on the problem and which techniques they use, and try to estimate how much of the problem you can realistically solve (e.g. can we eliminate the disease? [probably not.] how many people can we realistically reach? how expensive is the solution going to be?).
All of this together can help you figure out the orders of magnitude you are talking about. Let me give a very rough example of the outcome of such an estimate:
A project will take roughly 1-3 years, seems likely to succeed, and if successful will significantly improve the lives of 200-800 people suffering from some disease every year, and there’s only one other team working on the exact same problem. This sounds great! Changing the variables a little might make it seem much less attractive, for example if only 4 people will be able to pay for the solution (or suffer from the disease to begin with), or if there are 15 other teams working on exactly the same problem, in which case your impact will probably be much lower.
One can also imagine projects with lower chances of success which, if successful, will have a much larger effect. I tend to be cautious in these cases, because I think it is much easier to be wrong about small probabilities (I can say more about this).
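To make the estimation concrete, here is a minimal back-of-envelope sketch of the kind of calculation described above. All numbers, the function name, and the crude counterfactual model (each competing team independently has some chance of solving the problem without you) are my own illustrative assumptions, not anything from the original discussion:

```python
def expected_annual_impact(p_success, people_helped_per_year,
                           competing_teams, p_team_solves=0.5):
    """Very rough expected number of people helped per year.

    p_team_solves: assumed chance that any single competing team would
    solve the problem anyway, which reduces your counterfactual impact.
    """
    # Crude independence assumption: the problem gets solved without you
    # unless every competing team fails.
    p_solved_anyway = 1 - (1 - p_team_solves) ** competing_teams
    return p_success * (1 - p_solved_anyway) * people_helped_per_year

# Likely-to-succeed project, ~500 people helped/year, one other team:
print(expected_annual_impact(0.8, 500, competing_teams=1))   # 200.0

# Same project with 15 competing teams: counterfactual impact collapses.
print(expected_annual_impact(0.8, 500, competing_teams=15))
```

The point of a sketch like this is not the exact numbers but the orders of magnitude: seeing how sensitive the result is to the number of competing teams (or the size of the affected population) is what tells you whether the project is robustly attractive.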
Let me also mention that it is possible to work on multiple projects at the same time, or over a few years, especially if each one consists of several steps in which you gain more information and can re-evaluate along the way.
In such cases, you’d expect some of the projects to succeed, and learn how to calibrate your estimates over time.
Lastly, with regards to your description of my views, that’s almost right, except that I also see opportunities for high impact not only on particularly important problems but also on smaller problems which are neglected for some reason (e.g. things that are less prestigious or don’t have economic incentives).
I’d also add that at least in my case in computational healthcare I also intend to apply other techniques from computer science besides AI and ML (but that’s really a different story than AI for good).
This comment has already become way too long, so I will stop here.
I hope that it is somewhat useful, and, again, if someone wants me to write more about a specific aspect, I will gladly do so.