Some rough thoughts on cause prioritization

I've been tying myself up in knots about which causes to prioritize. I originally came back to effective altruism because I realized I had gotten interested in 23 different causes and needed to prioritize them. But looking at the 80K problem profile page (I am fairly aligned with their worldview), I see at least 17 relatively unexplored causes that they say could be as pressing as the top causes they've created profiles for. I've taken a stab at one of them: making surveillance compatible with privacy, civil liberties, and public oversight.
I'm sympathetic to this proposal for how to prioritize given cluelessness. But I'm not sure it should dominate my decision making. It also stops feeling like altruism when it's too abstracted away from the object-level problems (other than x-risk and governance).
I've been seriously considering just picking causes from the 80K list "at random."
By this, I mean I could just pick a cause from the list that seems relatively neglected, "speaks to me" in some meaningful way, and that I have a good personal fit for. Many of the more unexplored causes on the 80K list look especially neglected; for some, it seems like one person worked on the cause just long enough to write a single forum post (e.g. risks from malevolent actors).
It feels inherently icky because it's not really taking into account knowledge of the scale of impact, and it's exactly the thing that EA tells you not to do. But: MIRI calls this kind of strategy quantilizing, i.e. picking an action at random from the top x% of actions one could take. They think it's a promising alternative to expected utility maximization for AI agents, which makes me more confident that it might be a good strategy for clueless altruists too.
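To make that concrete, here's a minimal toy sketch of the idea in Python. The cause names and scores are made up, and MIRI's formal quantilizer samples from a base distribution conditioned on landing in the top quantile of expected utility, so treat this as a loose illustration rather than their definition:

```python
import random

def quantilize(options, utility, q=0.25):
    """Rank options by (rough, probably wrong) utility estimates,
    then pick uniformly at random from the top q fraction,
    rather than always taking the argmax."""
    ranked = sorted(options, key=utility, reverse=True)
    cutoff = max(1, int(len(ranked) * q))
    return random.choice(ranked[:cutoff])

# Hypothetical causes with hand-wavy impact scores
scores = {
    "AI governance": 9.0,
    "surveillance & civil liberties": 7.5,
    "risks from malevolent actors": 7.0,
    "great power conflict": 6.5,
    "global health": 6.0,
}
print(quantilize(list(scores), scores.get, q=0.4))
```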
Some analogies that I think support this line of thinking:
In 2013, the British newspaper The Observer ran a contest between professional investment managers and… a cat throwing a toy at a dartboard to pick stocks. The cat won. According to the efficient market hypothesis, investors are clueless about which investment opportunities will outperform the pack, so they're unlikely to outperform an index fund or a stock-picking cat. If we're similarly clueless about what's effective in the long term, then maybe the stochastic approach is fine.
One strategy for dimensionality reduction in machine learning and statistics is to compress a high-dimensional dataset into a lower-dimensional space that's easier to compute with by applying a random projection. Even though the random projection doesn't take any information in the dataset into account (unlike PCA), it still preserves most of the structure of the dataset, such as pairwise distances, most of the time.
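For intuition, here's a small sketch of that using NumPy (my own example, with arbitrary dimensions): project random high-dimensional points through a Gaussian matrix that never looks at the data, and check that a pairwise distance barely changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 points in 500 dimensions
X = rng.normal(size=(1000, 500))

# A random projection matrix that ignores X entirely (unlike PCA)
k = 50  # target dimension
R = rng.normal(size=(500, k)) / np.sqrt(k)
X_low = X @ R

# Johnson-Lindenstrauss intuition: pairwise distances are roughly preserved
d_high = np.linalg.norm(X[0] - X[1])
d_low = np.linalg.norm(X_low[0] - X_low[1])
print(d_high, d_low)  # the two distances should be close
```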
I've also been thinking about going into EA community building activities (such as setting up an EA/public interest tech hackathon) so I can delegate, in expectation, the process of thinking about which causes are promising to other people who are better suited to doing it. If I did this, I would most likely still be thinking about cause prioritization, but it would allow me to stretch that thinking over a longer time scale than if I had to do it all at once before deciding on an object-level cause to work on.
Even though I think AI safety is a potentially pressing problem, I don't emphasize it as much because it doesn't seem constrained by CS talent. The EA community currently encourages people with CS skills to go into either AI technical safety or earning to give. Direct work applying CS to other pressing causes seems more neglected, and it's the path I'm exploring.