I.
Once upon a time, there was an EA named Alice. EA made a lot of sense to Alice, and she believed that some niche problems/causes were astronomically bigger than others. But she eventually decided that (1) the theories of change were confusing/suspicious and (2) there’s substantial evidence that a bunch of EA work is net-negative. So she decided to become a teacher or doctor or something.
II.
Alice made a mistake! If she thinks that some problems/causes are astronomically bigger than others, and she’s skeptical of certain approaches, she should look for better approaches, not give up on those problems/causes! For example, she could:
Find an intervention (in the great problems/causes) that she believes in, and do that
Defer to people who she really respects on the topic
Try to understand the problem and possible interventions; do strategy/prioritization/deconfusion work (for herself or maybe benefitting the whole community)
Develop relevant skills and/or save up money, and set herself up to notice if there’s more clarity or great opportunities in the future
Accept sign-uncertainty and do positive-EV stuff
III.
This is actually about my friend Bob, who sometimes says something like: “I work on AI safety, but I feel clueless about whether we’re actually helping, and I see that farmed animal suffering is a huge problem, and I want to go work on farmed animal welfare.” If Bob still believes that the AI stuff is astronomically more important than the animal stuff, Bob is making the same mistake as Alice!
Hmm, I’m not confident that Bob is wrong here. It seems to me that there’s a quite plausible argument that EA’s involvement in AI has been net-negative, possibly so net-negative as to cancel out all of the rest of EA. You seem to assume that this was knowable in advance, but that’s not necessarily so.
Your argument seems to assume that one should “shut up and multiply” and then run with that estimated EV number; but there have been many arguments on this forum and elsewhere about why we shouldn’t trust naive EV estimates.
My claim is that if you’re worried, the correct response is to actually try to make the astronomical problem/cause go better, not to give up on it. I think if you’re savvy you will probably find a way to make the astronomical thing go better—such as doing strategy/prioritization/deconfusion work, or working on robustly good intermediate desiderata, or building skills/money in case there’s more clarity in the future—rather than ultimately concluding “there’s nothing I can do to make the thing go better.”
“I think if you’re savvy you will probably find a way to make the astronomical thing go better—such as doing strategy/prioritization/deconfusion work, or working on robustly good intermediate desiderata, or building skills/money in case there’s more clarity in the future”
What do you think about the arguments for cluelessness from imprecision, e.g., here? (I explain more why I think we’re clueless even about the things you list, here.)
I haven’t engaged with your posts and so don’t know the arguments.
I respect that you and a few others genuinely feel deeply clueless; Alice and Bob, by contrast, are just whining about how not everything is clear-cut.
(I’m biased since I’ve mostly donated to animal welfare / digital minds. I’m also super busy now so it’s possible I just haven’t thought your argument through sufficiently.)
If you’re a pure EV maximizer, I agree with your implicit claim that it’s probably best to prioritize AI safety and/or helping steer AI for the benefit of neglected groups (animals and digital minds).
If, like most people, you have some form of risk aversion (such as wanting high confidence that you’ve made a positive difference, or wanting a greater share of EA community resources devoted to interventions that maximally reduce near-term suffering), then I think animal welfare is by far the best-value option, dwarfing global health and especially an option like becoming a doctor.
So I feel like perhaps the crux of your discussion with Bob should be whether he’s a pure EV maximizer or if he has the types of risk aversion which make animal welfare look good. There are also options of working in AI safety and donating to animal welfare—no need to fully commit to one or the other! But I don’t think the Alice analogy goes through because becoming a teacher or doctor doesn’t really make sense under any optimizing view, whereas I think animal welfare makes sense under many such views.
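To make that contrast concrete, here’s a toy sketch with entirely made-up numbers (the probabilities, payoffs, and the square-root utility are all just illustrative assumptions of mine, not anyone’s actual estimates) of how a risk-averse criterion can flip a ranking that pure EV maximization would give:

```python
# Toy illustration: how risk aversion can flip a pure-EV ranking.
# All numbers below are invented purely for illustration.

options = {
    "AI safety":      {"p": 0.001, "payoff": 1_000_000},  # tiny chance, astronomical payoff
    "animal welfare": {"p": 0.90,  "payoff": 100},        # high chance, moderate payoff
}

def expected_value(p, payoff):
    # Pure EV maximization: probability times payoff.
    return p * payoff

def risk_averse_score(p, payoff, alpha=0.5):
    # One simple way to encode risk aversion: concave (square-root) utility,
    # so long-shot astronomical payoffs get heavily discounted.
    return p * payoff ** alpha

for name, o in options.items():
    print(f"{name:15}  EV = {expected_value(o['p'], o['payoff']):8.1f}"
          f"   risk-averse score = {risk_averse_score(o['p'], o['payoff']):6.1f}")

# With these made-up numbers: AI safety wins on pure EV (1000 vs 90),
# but the risk-averse score prefers animal welfare (1.0 vs 9.0).
```

The square-root utility is just one arbitrary way of modelling diminishing returns; the point is only that under risk aversion the long-shot astronomical option loses its edge.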
To clarify, is Bob’s mistake:
1. Continuing to work on AI safety?
2. Wanting to move to farmed animal welfare?
(I’m 90% sure you think the mistake is 2, but the phrasing of the sentence isn’t fully clear)