My troubles with this method are two-fold.
1. SHA-256 is a hashing algorithm. Its security is well-vetted for certain kinds of applications and certain kinds of attacks, but "randomly distribute the first 10 hex digits" is not one of those applications. The post does not include so much as a graph of the distribution of what past drawing results would have been under this method, so CEA hasn't really justified why the result would be uniformly distributed (see the sketch after this list).
2. The least-significant digits in the IRIS data could plausibly be manipulated by adversaries. They are hard to verify, and IRIS has no reason to secure its data pipeline against attacks that might cost tens of thousands of dollars, because there are normally no stakes whatsoever attached to those bits.
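This is the kind of check I had in mind for point 1: replay the draw procedure over many inputs and test the bin counts against a flat distribution. A minimal sketch in Python, where `first_10_hex_digits` is my guess at the draw procedure from the post's description, and the inputs are placeholder random strings rather than the actual historical IRIS records:

```python
import hashlib
import random

def first_10_hex_digits(seed: str) -> int:
    """Hash the seed with SHA-256 and read the first 10 hex digits
    (40 bits) as an integer in [0, 16**10)."""
    digest = hashlib.sha256(seed.encode()).hexdigest()
    return int(digest[:10], 16)

# Crude uniformity check: bucket many draws into 16 bins and compare
# the counts against the flat distribution we would expect.
NUM_BINS = 16
samples = 100_000
counts = [0] * NUM_BINS
rng = random.Random(0)
for _ in range(samples):
    seed = str(rng.getrandbits(128))  # placeholder inputs, not real IRIS records
    counts[first_10_hex_digits(seed) * NUM_BINS // 16**10] += 1

expected = samples / NUM_BINS
chi_sq = sum((c - expected) ** 2 / expected for c in counts)
print(counts)
print(f"chi-squared = {chi_sq:.1f} with {NUM_BINS - 1} degrees of freedom")
```

Running this over the real past IRIS records (instead of the placeholder seeds) would be the graph the post should have included.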
Random.org is exactly in the business we're looking for, so its institutional guarantee would make it a good option. Otherwise, any big lottery in any country would work as a source of randomness: the prizes there are bigger, which means that even if those lotteries could be corrupted, nobody would waste that ability on rigging the donor lottery.
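For concreteness, here is one hedged sketch of how a public lottery draw could be turned into a donor-lottery winner. The serialization format and the `winner_from_lottery_draw` helper are hypothetical; the one real requirement is committing to the exact encoding before the draw happens:

```python
import hashlib

def winner_from_lottery_draw(draw: str, num_tickets: int) -> int:
    """Map a published lottery result to a ticket index in [0, num_tickets).

    Reducing the full 256-bit digest modulo num_tickets has a modulo
    bias, but it is negligible while num_tickets << 2**256.
    """
    digest = hashlib.sha256(draw.encode()).digest()
    return int.from_bytes(digest, "big") % num_tickets

# Hypothetical usage: the winning numbers of an agreed-upon future draw,
# serialized in a format committed to in advance.
print(winner_from_lottery_draw("powerball:2019-01-12:07-36-48-57-58+24", 10_000))
```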
Re 1, this is less of a worry to me. You're right that this isn't something SHA-256 has been specifically vetted for, but my understanding is that the SHA-2 family of algorithms should have uniformly distributed outputs. In fact, the NIST beacon values are all just SHA-512 hashes (of a random seed plus the previous beacon's value and some other info), so this method and the NIST method shouldn't have different properties (although, as you note, we didn't do a specific analysis of this particular set of inputs; noted, and mea culpa).
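To illustrate the comparison, here is roughly the shape of a beacon-style construction as described above (each output hashes fresh seed material together with the previous output). This is a sketch, not the NIST implementation: the real beacon also hashes timestamps and other metadata, omitted here:

```python
import hashlib
import os

def next_beacon_value(seed: bytes, prev_value: bytes) -> bytes:
    """One step of a beacon-style hash chain: hash fresh random seed
    material together with the previous output."""
    return hashlib.sha512(seed + prev_value).digest()

prev = b"\x00" * 64  # the chain has to start somewhere; zeros in this sketch
for _ in range(3):
    prev = next_beacon_value(os.urandom(64), prev)
    print(prev.hex()[:16] + "...")
```

The key difference is the seed: the NIST beacon feeds in full-entropy randomness, whereas the IRIS method feeds in seismic data whose entropy is harder to characterize.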
However, the point re 2 is definitely a fair concern, and I think it's the biggest defeater here. As such (and given that the NIST Beacon is back online), we're reverting to the original NIST method.
Thanks for raising the concerns.
ETA: On further reflection, you're right that it's hard to know whether the first 10 hex digits will be uniformly distributed given that we don't have a full-entropy source (a significant difference between this method and the NIST beacon; we just made sure the method had more entropy than the 40 bits needed to cover all the possible ticket values). So your point about testing sample values in advance is well taken.
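For anyone following along, the 40-bit figure comes straight from the draw format:

```python
# 10 hex digits carry 10 * 4 = 40 bits, i.e. 16**10 == 2**40 possible values.
assert 16**10 == 2**40
print(f"{16**10:,} possible 10-hex-digit outcomes")  # 1,099,511,627,776

# Exceeding 40 bits of input entropy is necessary but not sufficient:
# without a full-entropy source, nothing guarantees the 40 output bits
# are themselves uniform, hence the value of testing sample draws.
```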