This doesn’t exactly answer your question, but it does answer your prompt to babble, and substituting “what is the new EA question?” with “what have I been thinking about recently that sounds similar?” seems like a nice babble. So “How do we do the most good?” could now be:
“How do we get expected value calculations in practice?”
“Once we have expected value calculations in place, what is the highest expected value thing to do?”
“What do we do in the meantime?”
If we start out trying to optimize money directed to global health, we can sort of get expected value calculations by doing the hard research that GiveWell does, and hoping that nothing too suboptimal happens if we take their expected-lives-saved numbers literally, or at least if our decision rule is to fund charities in decreasing order of estimated cost-effectiveness.
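To make “fund charities in decreasing order of estimated cost-effectiveness” concrete, here is a minimal Python sketch of that greedy allocation; the charity names, cost-effectiveness figures, and funding gaps are invented for illustration, not GiveWell’s actual numbers:

```python
# Greedy allocation: fund charities in decreasing order of estimated
# cost-effectiveness until the budget runs out. All figures are invented.
charities = [
    # (name, estimated lives saved per $1M, remaining funding gap in $)
    ("Charity A", 200, 3_000_000),
    ("Charity B", 150, 5_000_000),
    ("Charity C", 80, 10_000_000),
]

def allocate(budget: float):
    allocations = []
    # Consider charities in order of estimated cost-effectiveness, best first.
    for name, lives_per_million, gap in sorted(charities, key=lambda c: -c[1]):
        grant = min(budget, gap)
        if grant <= 0:
            break
        allocations.append((name, grant, grant / 1e6 * lives_per_million))
        budget -= grant
    return allocations

for name, grant, lives in allocate(6_000_000):
    print(f"{name}: ${grant:,.0f} -> ~{lives:.0f} expected lives saved")
```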
But once we accept that we could get higher expected impact by moving to more speculative cause areas, the number of different actions we can take becomes really large, and the way to choose between them becomes much fuzzier and relies much more on human judgment.
Like, it’s not like we can tell that “going into AI alignment improves all that is of value by 0.0001% (in expectation), whereas going into EA movement building improves all that is of value by 0.003% (in expectation), so you should definitely try both, see if the difference in fit is more than 30x, and if not, choose the second.”
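As a toy formalization of that (entirely hypothetical) decision rule: since the baseline ratio between the two options is 30x (0.003 / 0.0001), personal fit only flips the recommendation if your relative fit for the first option exceeds 30x. A sketch, using only the made-up numbers from the example:

```python
# Toy decision rule from the example above: pick the option with the
# higher fit-adjusted expected impact. All figures are made up.
baseline = {
    "AI alignment": 0.0001,          # % of all value improved, in expectation
    "EA movement building": 0.003,
}

def choose(fit: dict) -> str:
    # Fit-adjusted impact = baseline expected impact * personal fit multiplier.
    adjusted = {option: baseline[option] * fit[option] for option in baseline}
    return max(adjusted, key=adjusted.get)

# A fit advantage above 30x (= 0.003 / 0.0001) flips the answer:
print(choose({"AI alignment": 50, "EA movement building": 1}))  # AI alignment
print(choose({"AI alignment": 10, "EA movement building": 1}))  # EA movement building
```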
As an aside, I think that the central, or most meaningful, split in EA right now is not exactly between near- and long-termists. It’s rather between looking for, in a sense, more certain promises of impact, and being willing to take more uncertain gambles if we think that they’re worth it in expectation. But that seems mostly distinct from inter-temporal preferences.
I mostly believe that being willing to take speculative gambles is the correct thing to do. But this doesn’t map neatly onto the near/longtermist split. E.g., the work of OPIS on reducing the most intense forms of suffering today (e.g., cluster headaches, using psilocybin) feels somewhat out-there in terms of speculativeness, but it is also pretty near-termist. Charity Entrepreneurship, and in particular creating a charity through them rather than donating, might be another good example of something which is both speculative and near-term.
So anyways: once we lose the ability to rely on GiveWell, which optimizes donations within global health and development, because we want to optimize actions in general, we can have a few types of answers:
1. Look for the best alternative processes.
2. Try to recover expected value calculations somehow.
Both of these are tricky. I’m working on the second. I don’t expect total success, and even partial success seems like it would still require much more human judgment than GiveWell’s estimates do.
I don’t think that not having EV calculations means that “anything goes.” For instance, we can do sanity checks to determine that one option generally dominates another, and as we get better models of the world, we can do more such sanity checks.
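One way to cash out such a sanity check, as a minimal sketch: model each option’s impact as a distribution rather than a point estimate, sample from both, and see whether one option beats the other in (almost) all sampled worlds. The lognormal parameters below are placeholders, not real estimates of anything:

```python
import random

# Sanity check: does option A generally dominate option B across
# sampled worlds? The distribution parameters are placeholders.
def sample_a() -> float:
    return random.lognormvariate(mu=1.0, sigma=0.5)  # impact of option A

def sample_b() -> float:
    return random.lognormvariate(mu=0.0, sigma=0.5)  # impact of option B

N = 100_000
a_wins = sum(sample_a() > sample_b() for _ in range(N))
print(f"P(A > B) ~= {a_wins / N:.3f}")
# If P(A > B) is close to 1, A dominates B for practical purposes,
# even without a full expected value calculation for either option.
```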
So one answer to “What keeps us morally and epistemically honest?” can be “evaluations.” Evaluations that can’t approach GiveWell’s in all their glory, but which can still point out missing or invalid parts of a pathway to impact, or options that are dominated by others, and still have real bite.
This feels like a somewhat personal answer. It reframes EA as a kind of research project (I’m a researcher), and paints quantitative methods as one possible solution to that research project (I like quantitative methods). I’m curious to compare it to what others say.