we don’t use superintelligent singletons and probably won’t, I hope. Instead we create context-limited instances of a larger model, tell each instance only about our task, and the instance doesn’t retain information between calls.
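To be concrete, here’s a minimal sketch of what I mean by a context-limited, stateless instance, assuming an OpenAI-style chat-completions client (the model name and prompts are just illustrative): each call gets only the task, and nothing carries over between calls.

```python
# Minimal sketch of a "context-limited instance": each call is stateless,
# the model sees only the task we give it, and nothing is carried over.
# Assumes an OpenAI-style chat-completions API; names are illustrative.
from openai import OpenAI

client = OpenAI()

def run_task(task_description: str) -> str:
    # A fresh request: the only context this instance gets is the task itself.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a narrow assistant for one task only."},
            {"role": "user", "content": task_description},
        ],
    )
    return response.choices[0].message.content

# Nothing from this call is retained for the next one; every run_task call
# starts from a blank conversation.
print(run_task("Summarize this week's server logs for anomalies."))
```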
FYI, current cutting-edge large language models are trained on a massive amount of text from the internet (in the case of GPT-4, likely approximately all the text OpenAI could get their hands on). So they certainly have tons of information about stuff other than the task at hand.
Who are the 16 people you surveyed to determine the disvalue of reducing people’s freedom to buy sugary drinks?
[EDITED TO ADD: this comment buries the lede a bit; in a great-grandchild comment I do some napkin math which, if correct, indicates that per the survey results the disvalue of the reduced freedom entirely negates the health benefits]