I think you are selling Matthews short on Pascal’s Mugging. I don’t think his point was that you must throw up your hands because of the uncertainty, but that friendly AI researchers have roughly as much evidence that AI research done today has a 10^-15 chance of securing humanity’s future as they have for any other infinitesimal but positive chance.
Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn’t just mean splitting the difference between 10^-15 and 10^-50; it would mean spreading your belief over all positive outcomes below some reasonable upper bound and (potentially) above a lower one* (and this isn’t counting the non-zero, even if unlikely, probability that despite caution AI research is actually speeding up our doom). What those bounds are is very difficult to tell, but if the estimate of those boundaries is off, which is not implausible given the track record of technological forecasting, then all current donations could end up accomplishing basically nothing. In other words, his critique is not that we must give up in the face of uncertainty but that the justification for AI risk reduction being valuable right now rests on a number of assumptions with rather large error bars.
Despite what appeared to him to be this large uncertainty, he encountered many people who brushed aside, or seemingly belittled, all other possible cause areas, and this rubbed him the wrong way. I believe that was his point about Pascal’s Mugging. And while you criticized him for not acknowledging that MIRI does not endorse Pascal’s Mugging-style reasoning in support of AI safety research, he never said in the article that they did. He said many people at the conference replied to him with that type of reasoning (and as a fellow attendee, I can attest to a similar experience).
*Normally, I believe, it would be all logically possible outcomes, but it’s obviously unreasonable to believe a $1000 donation, which was his example, has, say, a 25% chance of success, given everything we know about how much such work costs. Where the lower bound of this estimate lies, however, is far less clear.
Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn’t just mean splitting the difference between 10^-15 and 10^-50; it would mean spreading your belief over all positive outcomes below some reasonable upper bound and (potentially) above a lower one* (and this isn’t counting the non-zero, even if unlikely, probability that despite caution AI research is actually speeding up our doom).
It’s complicated, but I don’t think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes. We choose our prior estimate of the chance of success based on other cases of people attempting to make technology safer.
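To illustrate the collapse numerically, here is a toy sketch. The uniform distribution over the success probability is an arbitrary assumption for demonstration, not a number from this thread:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two-stage model: first draw an uncertain "true" success probability
# theta, then draw the outcome as Bernoulli(theta).
thetas = rng.uniform(0.0, 0.2, size=n)
outcomes = rng.random(n) < thetas

# The two stages collapse into one distribution over outcomes:
# P(success) is just E[theta], here 0.1.
print(outcomes.mean())  # ~0.1
print(thetas.mean())    # ~0.1
```

Once you marginalize, only the mean of the second-order distribution affects the chance of success, which is why a single distribution over outcomes suffices for a one-off decision.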
Despite what appeared to him to be this large uncertainty, he encountered many people who brushed aside, or seemingly belittled, all other possible cause areas, and this rubbed him the wrong way.
In fairness, for people who adhere to expected value thinking to the fullest extent (some of whom would have turned out for the conference), arguments purely on the basis of the scope of potential impact would be persuasive. But if they’re annoying even folks at EA Global, then people probably ought to stop using them.
It’s complicated, but I don’t think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes.
I did mean over outcomes. I was referring to this:
If we’re uncertain about Matthews’s propositions, we ought to place our guesses somewhere closer to 50%. To do otherwise would be to mistake our deep uncertainty for deep scepticism.
That seems mistaken to me, though it could be that I’m misinterpreting it. I read it as saying we should split the difference between the two probabilities of success Matthews proposed. However, I thought he was suggesting, and I believe correctly, that we shouldn’t just pick the midpoint between the two, because the smaller number was only an example. His real point is that any tiny probability of success looks equally reasonable from the vantage point of now. If that’s true, we would have to spread our prior evenly over that whole range instead of picking the midpoint between 10^-15 and 10^-50. And given that it’s very difficult to put a lower bound on the reasonable range, while a $1000 donation being a good investment depends on a specific lower bound higher than he believes the evidence can justify, some people came across as unduly confident.
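A back-of-the-envelope sketch of why this choice matters. It assumes, purely for illustration, that the uncertainty is uniform over the exponent between the two example magnitudes; neither the log-uniform assumption nor the endpoints come from Matthews:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spread belief evenly over the exponent between the two example magnitudes.
lo_exp, hi_exp = -50.0, -15.0
p = 10.0 ** rng.uniform(lo_exp, hi_exp, size=1_000_000)

# Averaging over the whole range is dominated by the most optimistic end:
# analytically, E[p] = (10^-15 - 10^-50) / (35 ln 10) ~ 1.2e-17.
print(p.mean())

# That is many orders of magnitude away from the naive "midpoint
# between 10^-15 and 10^-50":
print(10.0 ** ((lo_exp + hi_exp) / 2))  # 10^-32.5 ~ 3.2e-33
```

The expected probability here hinges almost entirely on where the upper and lower bounds are placed, which is exactly the footnoted worry about large error bars on those boundaries.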
But if they’re annoying even folks at EA Global, then people probably ought to stop using them.
Let me be very clear: I was not annoyed by the arguments, even though I disagree with them, but people definitely used this reasoning. However, as I often point out, extrapolating from me to other humans is not a good idea, even within the EA community.