I really enjoyed the article! But in the end, rather than persuading me that the odds of an alien presence are higher than I thought, it has instead further persuaded me that Bayesian estimates (as used in EA) are pretty much useless for this type of question, and are likely to lead people astray.
You give a prior of 1 in a hundred that aliens have a presence on Earth. Where did this number come from? Well, if you wanted to break it down, you’d look at the number of habitable planets, the chance that life evolved on each one, the chance that such life would develop into an advanced civilisation without dying out, the estimated time since each civilisation developed, the estimated speed of travel weighted by the distance to us, the chance they would all “hide” from us, the chance they would decide to spy on us, and so on. One of these has a roughly concrete answer, but all the others are just further speculative questions, with an utterly minuscule amount of evidence to go on for how hypothetical unobserved aliens would act. I think the uncertainty for most of these questions would range over many, many orders of magnitude, and that uncertainty carries through into your final “prior”.
Assigning a single number to such a prior, as if it means anything, seems utterly absurd. It seems like it would be more reasonable, at the end of the analysis, to end up with something like a confidence interval, i.e.: “I have a 95% interval that the probability of alien presence is between 1 in a quadrillion and 1 in 2”.
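To illustrate what I mean, here is a crude Drake-style sketch; every range below is invented purely for illustration, and the point is only that order-of-magnitude uncertainty in the inputs leaves you with an interval, not a point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Entirely invented log-uniform ranges, one per speculative factor.
def loguniform(lo, hi, size):
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

p_life     = loguniform(1e-10, 1e-1, n)  # life arises on a given habitable planet
p_advanced = loguniform(1e-6, 1e-1, n)   # that life becomes a lasting advanced civilisation
p_reach    = loguniform(1e-4, 1.0, n)    # such a civilisation could have reached us by now
p_presence = loguniform(1e-3, 1.0, n)    # and chooses to keep a hidden presence here
n_planets  = 1e10                        # habitable planets in the galaxy (the "concrete" factor)

# Probability that at least one civilisation has a presence around Earth.
per_planet = p_life * p_advanced * p_reach * p_presence
p_any = -np.expm1(n_planets * np.log1p(-per_planet))

lo, med, hi = np.percentile(p_any, [2.5, 50, 97.5])
print(f"median {med:.1e}, 95% interval [{lo:.1e}, {hi:.1e}]")
```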
Assigning a single number to such a prior, as if it means anything, seems utterly absurd.
Thanks! :)

I don’t agree that it’s meaningless or absurd. A straightforward meaning of the number is “my subjective probability estimate if I had to put a number on it” — and I’d agree that one shouldn’t take it for more than that.
I also don’t think it’s useless, since numbers like these can at least help give a very rough quantitative representation of beliefs (as imperfectly estimated from the inside), which can in turn allow subjective ballpark updates based on explicit calculations. I agree that such simple estimates and calculations should not necessarily be given much weight, let alone dictate our thinking, but I still think they can provide some useful information and provoke further thought. I think they can add to purely qualitative reasoning, even if there are more refined quantitative approaches that are better still.
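To illustrate the kind of ballpark update I have in mind, here is a minimal odds-form calculation; the likelihood ratio is a purely hypothetical placeholder, not a number from the essay:

```python
# Odds-form Bayes update on a 1% prior; the likelihood ratio of 5
# is purely hypothetical and only illustrates the mechanics.
prior = 0.01
likelihood_ratio = 5  # hypothetical: the evidence is 5x likelier if an alien presence exists

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))  # ~0.048
```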
I guess my point of view is that for certain questions, we should stop forcing people to reduce their beliefs to a single number.
Say you told me to guess the number of advanced civilisations in our galaxy (other than humans), and, after meticulous research, I answered “1 million”. Does this single number actually represent my belief? Would I be kicking myself and feeling like an idiot if the actual answer turned out to be 100?
Of course not. It’s a hugely uncertain, unboundedly speculative question. My actual beliefs are a spread of probabilities over a huge range of magnitudes, possibly distributed quite unevenly (there would be a large bump at “0”). “1 million” would just be a snapshot of the median, leaving all that other information out.
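As a crude sketch of the kind of belief I mean (all numbers invented for illustration), picture a lump of probability at zero plus a very wide spread over positive values; the median alone tells you very little about that shape:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented mixture: a 40% bump at "no other civilisations", otherwise a count
# spread log-normally over many orders of magnitude.
none = rng.random(n) < 0.40
count = np.where(none, 0.0, 10 ** rng.normal(loc=6.0, scale=3.0, size=n))

print("median:", np.median(count))                            # one snapshot of the belief
print("P(zero):", np.mean(count == 0))                        # the bump at zero
print("5th-95th percentile:", np.percentile(count, [5, 95]))  # the huge spread
```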
The reason this matters is that EA frequently makes decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article.
By reducing everything to that one number, we start influencing the next person’s estimates, who influences the next person, and so on. Soon we end up with surveys of “alien experts” on the existential risk from aliens, asking them to estimate P(aliens) as one number, which they inevitably anchor to the last estimate they saw, compounding everything until eventually you get treated as absurd for having a low estimate of alien x-risk. All based on wildly, ridiculously uncertain initial guesses that someone made up once.
In summary, people should either start stating their uncertainty explicitly, or they should start saying “I don’t know”. This one-number status quo is just making things worse.
The reason this matters is that EA frequently makes decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article.
I think that misrepresents what I write and “advocate” in the essay. Among various other qualifications, I write the following (emphases added):
I should also clarify that the decision-related implications that I here speculate on are not meant as anything like decisive or overriding considerations. Rather, I think they would mostly count as weak to modest considerations in our assessments of how to act, all things considered.
My claims about how I think these would be “weak to modest considerations in our assessments of how to act” are not predicated on the exact manner in which I represent my beliefs: I’d say the same regardless of whether I’m speaking in purely qualitative terms or in terms of ranges of probabilities.
In summary, people should either start stating their uncertainty explicitly, or they should start saying “I don’t know”.
FWIW, I do state uncertainty multiple times, though in qualitative rather than quantitative terms. A few examples:
This essay contains a lot of speculation and loose probability estimates. It would be tiresome if I constantly repeated caveats like “this is extremely speculative” and “this is just a very loose estimate that I am highly uncertain about”. So rather than making this essay unreadable with constant such remarks, I instead say it once from the outset: many of the claims I make here are rather speculative and they mostly do not imply a high level of confidence. … I hope that readers will keep this key qualification in mind.
As with all the numbers I give in this essay, the following are just rough numbers that I am not adamant about defending …
Of course, this is a rather crude and preliminary analysis.
To be clear, I think you included all the necessary disclaimers; your article was well written and well argued, and the use of probability was well within the standard for how probability is used in EA.
My issue is that I think the way probability is presented in EA is bad, misleading, and likely to lead to errors. I think this is the exact type of problem (speculative, unbounded estimates) where the EA method fails.
My specific issue here is how uncertainty is taken out of the equation and placed into preambles, and how a highly complex belief is reduced to a single number. This is typical on this forum and in EA (see P(doom)). When Bayesian methods are used in science, on the other hand, the prior will be a distribution. (See the pdf of the first result here.)
My concern is that EA is making decisions based on these point estimates, rather than on people’s true distributions, which is likely to lead people astray.
I’m curious: when you say that your prior for alien presence is 1%, what is your distribution? Is 1% your median estimate? How shocked would you be if the “true value” were 0.001%?
If probabilities of probabilities are confusing, do the same thing for “how many civilisations are there in the galaxy?”.
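For instance, if a belief about the prior were spread log-uniformly over several orders of magnitude (an invented example, not an attribution of any particular distribution to you), the median and the surprise at 0.001% both fall straight out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: a belief about the prior spread log-uniformly
# between 1-in-a-million and 1-in-10.
prior_samples = 10 ** rng.uniform(np.log10(1e-6), np.log10(1e-1), 100_000)

print("median prior:", np.median(prior_samples))               # ~3e-4
print("P(prior <= 0.001%):", np.mean(prior_samples <= 1e-5))   # ~0.2
```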
You give a prior of 1 in a hundred that aliens have a presence on Earth. Where did this number come from?
It was in large part based on the considerations reviewed in the section “I. An extremely low prior in near aliens”. The following sub-section provides a summary with some attempted sanity checks and qualifications (in addition to the general qualifications made at the outset):
All-things-considered probability estimates: Priors on near aliens
Where do all these considerations leave us? In my view, they overall suggest a fairly ignorant prior. Specifically, in light of the (interrelated) panspermia, pseudo-panspermia, and large-scale Goldilocks hypotheses, as well as the possibility of near aliens originating from another galaxy, I might assign something like a 10 percent prior probability to the existence of at least one advanced alien civilization that could have reached us by now if it had decided to. (Note that I am here using the word “civilization” in a rather liberal sense; for example, a distributed web of highly advanced probes would count as a civilization in this context.) Furthermore, I might assign a probability not too far from that — maybe around 1 percent — to the possibility that any such civilization currently has a presence around Earth (again, as a prior).
Why do I have something like a 10 percent prior on there being an alien presence around Earth conditional on the existence of at least one advanced alien civilization that could have reached us? In short, the main reason is the info gain motive that I explore at greater length below. Moreover, as a sanity check on this conditional probability, we can ask how likely it is that humanity would send and maintain probes around other life-supporting planets assuming that we became technologically capable of doing this; roughly 10 percent seems quite sane to me.
At an intuitive level, I would agree with critics who object that a ~1 percent prior probability in any kind of alien presence around Earth seems extremely high. However, on reflection, I think the basic premises that get me to this estimate look quite reasonable, namely the two conjunctive 10-percent probabilities in “the existence of at least one advanced alien civilization that could have reached us by now if it had decided to” and “an alien presence around Earth conditional on the existence of at least one advanced alien civilization that could have reached us”.
Note also that there are others who seem to defend considerably higher priors regarding near aliens (see e.g. these comments by Jacob Cannell; I agree with some of the points Cannell makes, though I would frame them in more uncertain and probabilistic terms).
I can see how substantially lower priors than mine could be defensible, even a few orders of magnitude lower, depending on how one weighs the relevant arguments. Yet I have a hard time seeing how one could defend an extremely low prior that practically rules out the existence of near aliens. (Robin Hanson has likewise argued against an extremely low prior in near aliens.)
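To spell out the arithmetic behind that prior, and how it would shift if either conjunct were set lower (the alternative values are purely illustrative):

```python
# The two conjunctive estimates from the quoted section, and the resulting prior.
p_reachable_civ = 0.10       # at least one advanced civilisation that could have reached us
p_presence_given_civ = 0.10  # a presence around Earth, given such a civilisation
print(p_reachable_civ * p_presence_given_civ)  # 0.01, i.e. the ~1 percent prior

# Purely illustrative sensitivity check with lower values for either conjunct.
for a in (0.10, 0.01, 0.001):
    for b in (0.10, 0.01):
        print(f"{a} x {b} = {a * b:.0e}")
```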