Do we all need to do intense cause prio thinking?
Some off the cuff thoughts:
Currently I’m working on doing cause prio: finding my key uncertainties and trying to figure out what the most important problem is and how I can help solve it. Every time I feel I’m getting somewhere in my thinking, I come up with 10 new things to consider. Although I enjoy this as an exercise, it does take up a lot of time, and it’s hard to know how “worth it” doing this is. I’m now wondering where a good stopping point is / what proportion of time is useful to spend on thinking about these types of questions (especially if you’re unlikely to contribute to research). Part of me thinks that I should just defer to a few people who seem to know what they’re talking about and from there start putting my skills to use, rather than spending a bunch of time philosophising about who matters and whether I’m a negative utilitarian. Does anyone have any (strong) thoughts about these two approaches, and about whether it is helpful/necessary for everyone within EA to spend significant amounts of time doing cause prio work?
My quick boring take is that you should do roughly whatever level of cause prio you enjoy doing / remain curious about / etc. I’d roughly guess that this will lead to a healthy community-level balance of deferring vs. developing inside views / critiquing / etc. (I do think it’s quite important that at least a significant fraction spend a bunch of time on cause prio, to avoid deferral cascades and allow more perspectives to be heard), plus it generally seems good for people to do whatever they enjoy. :)
Thanks for writing this up! I’m also in the midst of Working Things Out and a lot of what you’ve said hits home. My bottom line here is something like: I completely agree that there comes a point in most people’s decisions about their lives and what to prioritise where, even though they’ve done all the homework and counted all the utils on each side, they mostly make the final decision based on intuition—because you ultimately can’t prove most of this stuff for certain. One thing that could help you structure your cause prio is to focus on a key decision it has to inform, and use your sureness about that decision as a barometer for when you’ve caused enough prios.
> On “I come up with 10 new things to consider”—you’re right that it feels like battling an intellectual hydra of crucial considerations sometimes. Have you got the sense so far that, of the 10 new things to consider, there’s at least one or two that could substantially reshape your opinion? For me, even when that’s not the case, having a more detailed picture can still be really good. This seems especially important for situations/roles where you’ll probably end up communicating about EA to people with less context than you.
> On when to stop: Cause prio thinking and building models of different fields of research / work is definitely something you could spend literally forever on. I roughly think that this wave of EAs are stopping just a bit too early, and are jumping into trying to do useful work too quickly. I elaborate more in the next bit.
> Against lots of deferring: An argument here that motivates me is that in most EA/LTist roles you’ll want to go into, it seems like time spent investing in your cause prio saves time. Specifically, it’s likely to save time that your colleagues would otherwise have to spend giving you context, explaining how they orient towards the problem, etc. The more you’ve nailed what your view is, the better you can make (increasingly) autonomous decisions about how the projects you work on should look, etc. I think that this applies in basically any field of EA work: knowing in great detail why you care about a given cause area helps you identify which empirical facts about the world matter to your aims. This I think helps you a lot with strategy and design decisions. It also means that your team benefits more from having you on it—because your perspective is likely to be distinct in useful ways from other people’s!
(I’m quite uncertain about the above and I think this sort of thing differs a lot between individuals)
If everyone focused on working in prioritized causes, then conditions in the majority of wealthy or economically stable-ish countries would rapidly deteriorate. One odd impact is that the EA movement would die, because EAists would start dying or be forced into poverty. The things we have today are mostly from people who worked on issues that weren’t the most important. Assuming you are from a wealthy country, there have probably been times in your life when you needed help that was expensive or inefficient: healthcare, transportation, education, etc.
What would be better is if people adopted EA mindsets in the work they already do. For example, I was at a community mental health meeting. They wanted to fund some employees to help people navigate mental health services, over the phone. Something they could easily do is publish the results of their searches in the form of a directory, so anyone in the same situation could find it online and be spared god knows how many hours of researching and cold calling. Could they instead fund work in AI? Yes. But spending money to train an AI worker is meaningless if their mental health gets too bad to work, or they have to leave the job to care for or grieve a family member who had mental illness.
> If everyone focused on working in prioritized causes then conditions in the majority of wealthy or economically stable-ish countries would rapidly deteriorate.

EA prioritization is about the best use of additional resources (“at the margin”) given how existing resources are currently allocated, and priorities change as more money or workers get directed to any given cause. Once we reached the ideal number of people working on global poverty reduction or AI safety, for example, the next best thing for someone to work on would become the best thing for them to work on. Eventually, you’d reach a point where working on altruistic projects is no more beneficial for the world than taking an ordinary job. Let me know if this explanation makes sense.
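To make the “at the margin” point concrete, here’s a toy sketch (the causes, numbers, and diminishing-returns model are my own illustrative assumptions, not anything from the thread). A greedy allocator that always sends the next worker to the cause with the highest marginal value keeps switching which cause is “top priority” as each one fills up:

```python
def marginal_value(base, workers):
    # Toy diminishing-returns model: each extra worker adds
    # base / (workers + 1) units of value.
    return base / (workers + 1)

# Hypothetical causes with made-up "base value" numbers.
causes = {"AI safety": 100.0, "global health": 80.0, "biosecurity": 60.0}
staffing = {name: 0 for name in causes}

allocation_order = []
for _ in range(6):  # place six workers, one at a time
    # The current "top priority" is whichever cause has the highest
    # marginal value given existing staffing, not a fixed ranking.
    best = max(causes, key=lambda c: marginal_value(causes[c], staffing[c]))
    staffing[best] += 1
    allocation_order.append(best)

print(allocation_order)
```

With these numbers, the first worker goes to the highest-base cause, but the marginal value there immediately halves, so later workers get spread across the other causes: “everyone should work on the #1 cause” stops being true as soon as the margin moves elsewhere.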
It makes sense, but on a practical level I disagree. There would be no way that could happen fast enough for it to work. When people change careers, they have to re-educate themselves on some level. It would also quickly turn into a too-many-cooks-in-the-kitchen scenario from so many people joining neglected causes at the exact same time.
Then there’s the issue of there being more problems than people. Many problems become irrelevant over time, and the long-term ones rise to the top. With billions of problems and EA only focusing on a very few at a time, many long-term problems would never get solved because they’re too far down the list.
Prioritization falls into the same issue as time management. In the book Algorithms to Live By (about using math to solve everyday problems), the authors found no scheduling method to be universally superior. The best way to be the most productive isn’t putting time into making a great calendar—the most productive way is to just do it. EA spends excessive amounts of time deciding what to work on, when the most effective method is to just work on things, even if they’re not perfect. If everyone agonized over what the perfect cause to work on is (their “calendars”), so much would collapse due to decisions taking longer and less work getting done.