Re substantive vs. procedural rationality: procedural rationality just seems roughly like instrumental rationality. For the reasons I explain, I’d expect AI to be rational in general, not just instrumentally so. Do you think ignorance of the modal facts would be possible for an arbitrarily smart agent? I’d think the moral facts would be like the modal facts in that such an agent would figure them out. I think that when we’re smarter we can figure more things out, and what we figure out is more likely to be true. The reason I believe modal rationalism, for example, is that there is some sense in which I feel I’ve grasped it, which wouldn’t be possible if I were much less smart.
That depends on whether procedural rationality suffices for modal knowledge (e.g., if false modal views are ultimately incoherent; false moral views certainly don’t seem incoherent).
Smartness might be necessary for substantive insights, but doesn’t seem sufficient. There are plenty of smart philosophers with substantively misguided views, after all.
A metaphor: think of belief space as a giant spider web with no single center, but instead a large number of “central” clusters, each representing a maximally internally coherent and defensible set of beliefs. We start off somewhere in this web. Reasoning leads us along a strand, typically in the direction of greater coherence, i.e., towards a cluster. But if the clusters are not differentiated in any neutrally recognizable way—if the truths do not glow in a way that sets them apart from ideally coherent falsehoods—then there’s no guarantee that philosophical reasoning (or “intelligence”) will lead you to the truth. All it can do is lead you towards greater coherence.
That’s still worth pursuing, because the truth sure isn’t going to be somewhere incoherent. But it seems likely that from most possible starting points (e.g. if chosen arbitrarily), the truth would be forever inaccessible.
I think I just disagree about what reasoning is. I think that reasoning does not just make our existing beliefs more coherent, but allows us to grasp new deep truths. For example, I think that an anti-realist who didn’t originally have the intuition that FTI is irrational could come to grasp it by reflection, and that one can, over time, discover that some things are just not worth pursuing and others are.