Reposting an insightful/valuable response from @finm on Substack! See below:
“Thanks for writing this! Lots of interesting points. A few thoughts while reading:
>What happens if superintelligence discovers that our fundamental assumptions about causality, consciousness, or even logic are wrong?
I’m actually not sure this is worth worrying about. Our understandings of causality and consciousness indeed are changing and highly disputed, but in most contexts (e.g. understanding US politics) this isn’t very relevant. I don’t know what it would look like to discover that our fundamental assumptions about logic are wrong (some have argued against obvious-seeming axioms, e.g. dialetheism, but those people live their lives much like the rest of us).
>How do you select for “truthfulness” when the nature of truth itself is being revised monthly?
Similarly, I’m not fully sure what this means, but it sounds a bit too dramatic to me. Again consider that people trying to figure things out in other epistemic domains rarely care to ask which theory of truth is correct.
>I think there’s a much stronger case for automated forecasting working, but it too has a critical weakness: trust […] if people’s entire worldviews are crumbling monthly, why would they trust anything, even an AI with a perfect prediction record?
Here I’m just not sure I see the positive case for why you and I will lose all trust in every source of information. Why would you personally decide not to listen to “an AI with a perfect prediction record”? Another angle on this is that it will always be possible (if painstaking) to scrutinise the reasoning / sources of your favoured source, and verify if they seem sensible. If they seem like bullshit, you can tell others, and that source will fall out of favour, and vice versa.
>They [non-enhanced people] would functionally become children (or, even newborns if the intelligence explosion gets really crazy) in a world run by incomprehensible adults.
I do think this is a good and worrying point. But a couple thoughts. One is that already, some people in the world are in a far stronger epistemic position than others. Some are lucky enough to have learned a lot about the world, know how to competently access credible sources, etc. Some, as you point out, have crazy views about the world (e.g. young Earth creationism). Why isn’t this already a disastrous situation? I think one reason is that we’re most free to form crazy (wrong) views on issues which don’t materially affect our lives. Our beliefs about the age of the Earth don’t matter much for how our lives go; our beliefs about e.g. which side of the road people drive on do matter, so we get them right more often (and those few people who are not able to understand which side of the road to drive on are typically not going to be frequent drivers, i.e. there is often a happy coincidence between epistemic competence and the consequences of making errors).
A second thought is that all of us are in a position of deference-by-default on a huge range of questions. I have not personally recreated the experiments to verify whether the Earth is flat, or revolves around the Sun, but I trust the community that figured these things out, scrutinised the results, and disseminated the results.
Incidentally, I really recommend Dan Williams’s Substack, it shaped my views on a lot of these questions — https://www.conspicuouscognition.com/
Thanks again!”
My response to his response:
“Thank you for your detailed comment! I appreciate you taking the time to write this out.
> On fundamental assumptions about reality changing due to ASI
This is a fair point – I agree it makes sense that most people wouldn’t worry about these discoveries in their day-to-day lives. However, what I’d be most worried about are the downstream effects of certain discoveries. For example, if it turns out our model of consciousness is wrong, I could see this causing social disruption/fragmentation. I don’t think this would be due to the discovery itself, but rather the way it was publicised, the factions that formed around it, whether it gets politicised, etc. If the discoveries really are far more shattering than anything we (as humans) have adapted to so far, I could see this being a big issue. Obviously this is very hard to predict/reason about though!
>How do you select for “truthfulness” when the nature of truth itself is being revised monthly?
Yeah, in retrospect this does seem overly dramatic. I think the point I was trying to make was more that the way people perceive what is fundamentally true would be changing at unprecedented speed (which I assume would be a possibility during an intelligence explosion).
>On automating forecasting and trust
Personally I’d place a lot of trust in the automated AI forecasters (along with deferring to the views of people I trust about their accuracy). But I think there’s still a high enough chance that large parts of the population wouldn’t place that amount of trust in them. E.g. if conspiracy theorists claim (and gain traction with the claim) that the AI is biased towards some particular actor/group, or that it’s just another “tool from the elites to manipulate us”. I think this could get polarising, especially if the change is rapid, similar to what we saw with trusting COVID advice. I’m not super confident about how likely this would be though; I’d need to look more into it.
>They [non-enhanced people] would functionally become children (or, even newborns if the intelligence explosion gets really crazy) in a world run by incomprehensible adults.
I think your “happy coincidence” point is very good. You’re right that Young Earth creationists can be terribly wrong while still functioning pretty well in society. But I think more extreme versions of cognitive enhancement would probably break this coincidence. Current epistemic inequality is about what people believe, whereas I’d expect future enhancement to be about how people think. If enhanced humans are thinking in fundamentally different ways (e.g. maybe through neural interfaces, expanded working memory, direct AI integration), they might design systems that require enhanced cognition just to interact with. I don’t know what else to suggest here other than advocating for keeping society “understandable” to everyone.
On deference: yes, you’re right that we already defer constantly. Though I think an issue could be that current deference assumes stable reference points, whereas in an intelligence explosion, how do we know which people/communities to trust when they might not exist long enough to build track records? This wouldn’t be an issue if people trusted the AI forecasters, but I think it would be for those who didn’t.
A lot of these questions are very hard for me to think through given how complicated and messy large-scale human interactions are (so a lot of what I’ve said here could be wrong). I really hope AI can help with all this!
Lastly, thank you for recommending Dan Williams’s Substack! I looked at the recent posts and they seem very interesting/relevant, so I will definitely read more and see if it updates my views on this topic :)”