Distinguish substantive vs procedural rationality. Procedural rationality = following neutrally describable processes like "considering competing evidence", "avoiding incoherence", etc. -- roughly corresponding to intelligence. Substantive rationality = responding correctly to normative reasons -- roughly corresponding to wisdom.
The Parfitian view is that the two come apart (i.e., supporting the orthogonality thesis). Future Tuesday Indifference may be procedurally rational, or compatible with perfect intelligence, and yet still objectively crazy (unwise). More generally, there's nothing in non-naturalist moral realism that implies that intelligent agents per se are likely to converge on the normative truth. (I discuss this more in Knowing What Matters. I think we can reasonably take ourselves to be on the right track, but that's because of our substantive starting points, not the mere fact of our general intelligence.)
Re substantive vs procedural rationality, procedural rationality just seems roughly like instrumental rationality. For the reasons I explain, I'd expect AI to be rational in general, not just instrumentally so. Do you think ignorance of the modal facts would be possible for an arbitrarily smart agent? I'd think the moral facts would be like the modal facts in that they'd figure them out. I think that when we are smart, the things we figure out are more likely to be true. The reason I believe modal rationalism, for example, is that there is some sense in which I feel I've grasped it, which wouldn't be possible if I were much less smart.
Depends whether procedural rationality suffices for modal knowledge (e.g. if false modal views are ultimately incoherent; false moral views certainly don't seem incoherent).
Smartness might be necessary for substantive insights, but doesn't seem sufficient. There are plenty of smart philosophers with substantively misguided views, after all.
A metaphor: think of belief space as a giant spider web, with no single center, but instead a large number of "central" clusters, each representing a maximally internally coherent and defensible set of beliefs. We start off somewhere in this web. Reasoning leads us along a strand, typically in the direction of greater coherence, i.e., towards a cluster. But if the clusters are not differentiated in any neutrally-recognizable way (the truths do not glow in a way that sets them apart from ideally coherent falsehoods), then there's no guarantee that philosophical reasoning (or "intelligence") will lead you to the truth. All it can do is lead you towards greater coherence.
That's still worth pursuing, because the truth sure isn't going to be somewhere incoherent. But it seems likely that from most possible starting points (e.g. if chosen arbitrarily), the truth would be forever inaccessible.
I think I just disagree about what reasoning is. I think that reasoning does not just make our existing beliefs more coherent, but allows us to grasp new deep truths. For example, I think that an anti-realist who didn't originally have the intuition that Future Tuesday Indifference is irrational could grasp it by reflection, and that one can, over time, discover that some things are just not worth pursuing and others are.