Another way to look at this: what do you think is the probability that everyone will go extinct tomorrow? If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff.
Why? GiveWell charities have developed theories about the effects of various interventions. The theories have been tested and, typically, found to be relatively robust. Of course, there is always more to know, and always ways we could improve the theories.
I don’t see how this relates to not being able to develop a statistical estimate of the probability we go extinct tomorrow. (Of course, I can just give you a number and call it “my belief that we’ll go extinct tomorrow,” but this doesn’t get us anywhere. The question is whether it’s accurate—and what accuracy means in this case.) What would be the parameters of such a model? There are uncountably many things—most of them unknowable—which could affect such an outcome.
The benefits of GiveWell’s charities are worked out as health or economic benefits which are realised in the future. For example, AMF (the Against Malaria Foundation) is meant to be good because it allows people who would otherwise have died to live for a few more years. If you are agnostic about whether everyone will go extinct tomorrow, then you must be agnostic about whether people will actually get these extra years of life.
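A minimal sketch of that dependence, with purely illustrative numbers (neither the extinction rates nor the life-years come from GiveWell or anyone in this exchange): the expected benefit of averting a death is a sum over future years, each discounted by the probability that no extinction event has occurred first. If you assign no credence at all to the per-year extinction probability, the sum has no determinate value, which is the point being pressed here.

```python
# Illustrative sketch only: hypothetical numbers, not GiveWell's model.
def expected_life_years(years_saved: int, p_extinct_per_year: float) -> float:
    """Expected extra life-years from averting a death, where each
    future year counts only if no extinction event has occurred first."""
    survival = 1.0
    total = 0.0
    for _ in range(years_saved):
        survival *= 1.0 - p_extinct_per_year  # P(still around this year)
        total += survival
    return total

print(expected_life_years(40, 0.001))  # ~39.2: close to the undiscounted 40
print(expected_life_years(40, 0.5))    # ~1.0: almost all the value vanishes
```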
What is your probability distribution across the size of the future population, provided there is not an existential catastrophe?
Do you, for example, think there is a more than 50% chance that it is greater than 10 billion?
I don’t have a probability distribution across the size of the future population. That said, I’m happy to interpret the question in the colloquial, non-formal sense and just take >50% to mean “likely”. In that case, sure, I think it’s likely that the population will exceed 10 billion. Credences shouldn’t be taken any more seriously than that; they are epistemologically equivalent to a survey question where the respondent is asked to tick a “very unlikely”, “unlikely”, “unsure”, “likely”, or “very likely” box.
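To make the analogy concrete, here is a sketch of that view; the numeric cutoffs are my own assumption for illustration, since the text above specifies none:

```python
# Assumed cutoffs for illustration; the comment above specifies none.
def survey_bucket(credence: float) -> str:
    """Collapse a numeric credence into a five-point survey response."""
    if credence < 0.05:
        return "very unlikely"
    if credence < 0.35:
        return "unlikely"
    if credence < 0.65:
        return "unsure"
    if credence < 0.95:
        return "likely"
    return "very likely"

# On this view, 0.55 and 0.60 carry exactly the same information:
print(survey_bucket(0.55), survey_bucket(0.60))  # unsure unsure
```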