Trying to prove the matter of metaphysical free will in either direction via neuroscience would likely be practically impossible, or at least very expensive and drawn out. True, the introduction of artificial general intelligence (AGI) could perhaps answer the question satisfactorily. But at that point, such matters as minimising sentient suffering may well be within short reach too. While I may agree that obtaining more data on human behavioural causation is a worthwhile end for other reasons, so far I cannot see the rationality of the proposed or implied need for this data to enable or facilitate “freely willed optimisation”.
The arguments presented seem to suggest that there must be an agent, perhaps one bearing moral responsibility, to own the choice for, or the results of, optimisation.
Was there an argument given as to why optimisation requires, or would even benefit from, personal ownership?
Do complex systems not have natural tendencies toward certain states?
Might the personal narrative be an after-effect, or add-on, to otherwise natural tendencies of the system?
From the perspective of embedded agency, the agent exists as a conceptually convenient cut-out from its broader system. This cut-out enables heuristically modelling, and selecting between, epistemic possibilities, even if there is only one actual possibility. Imposing personal moral responsibility upon that agent is one way of modelling the environment. It has downsides, however, such as placing the burden of correction on the agent. This may work for trivial behavioural matters, but it breaks down when the causal factors involved are difficult for the agent to reach or modify.
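To make the cut-out picture concrete, here is a minimal sketch in Python (the names and dynamics are my own hypothetical choices, not anything argued above): a fully deterministic world in which the agent, lacking full knowledge, nonetheless models and selects between several epistemic possibilities.

```python
# A minimal sketch (names and dynamics hypothetical) of an embedded agent
# as a conceptual cut-out: the world is fully deterministic, so only one
# outcome ever actually occurs, yet the agent still models and selects
# between the epistemic possibilities it cannot rule out.

def world_step(state: int, action: str) -> int:
    """Deterministic dynamics: the single actual possibility."""
    return state + (1 if action == "advance" else -1)

def epistemic_outcomes(state: int, action: str) -> list[int]:
    """The agent's model: imagined outcomes, not actual ones."""
    delta = 1 if action == "advance" else -1
    return [state + delta, state, state + 2 * delta]

def choose(state: int, utility) -> str:
    """Select the action with the best average over epistemic possibilities."""
    def score(action: str) -> float:
        outcomes = epistemic_outcomes(state, action)
        return sum(utility(s) for s in outcomes) / len(outcomes)
    return max(("advance", "retreat"), key=score)

state = 0
action = choose(state, utility=lambda s: -abs(s - 10))  # prefer states near 10
state = world_step(state, action)  # deterministically, only this happens
print(action, state)  # -> advance 1
```

The selection step operates entirely over the agent's epistemic possibilities; nothing in it requires that more than one outcome was ever physically possible.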
Generally speaking, optimisation occurs naturally and automatically upon updating the world model or self-model to an overall more accurate and efficacious state. That is, when an agent encounters information that brings causal insight about important matters, the agent’s behaviour optimises automatically as a result. Such information need not be owned by anyone. I, for example, need not be said to own these words. They, after all, are the result of an unfathomably long and complex web of events. Per the butterfly effect, if anything had been otherwise in the distant past, everything personally meaningful, including one’s genes and conditioning, simply would not be as it is. Indeed, without the assumption of an incorporeal agent, technically the existing agent ceases to be at every instant. The seeming subjective “person” that goes on could easily be explained as the comparison of a mental object of memory with its ever-modified self. Naturally the “same” essence appears to persist when the modified memory is compared with itself, perhaps separated merely by iterations of the perceptual-memory loop.
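As a toy illustration of that first claim (the options and numbers are hypothetical), behaviour below is simply whatever the current model rates highest, so updating the model re-optimises behaviour automatically, with no further act of ownership involved:

```python
# A toy illustration (options and numbers hypothetical): behaviour is just
# argmax over the agent's current world model, so a model update re-optimises
# behaviour automatically, with no separate act of "ownership" anywhere.

def best_action(model: dict[str, float]) -> str:
    """Behaviour falls out of the model: pick the option judged most efficacious."""
    return max(model, key=model.get)

model = {"rest": 0.4, "exercise": 0.3}  # initial estimates of efficacy
print(best_action(model))  # -> rest

# New causal insight arrives, e.g. evidence that exercise aids recovery.
model["exercise"] = 0.8  # the update is the entire intervention

print(best_action(model))  # -> exercise; behaviour changed by itself
```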
On the topic of felt “energy” (as would drive the sensation of self-determination), one might note that ego alone is sufficient to provide such “energy”, or impetus. This makes perfect sense if we recall that ego drive is the social-symbolic aspect of self-preservation, i.e. fear. Hence ego drive, as amplified in narcissism for example, can indeed heighten motivation. But this effect comes at a huge moral cost, in that fear shortens one’s causal inference chains, making one both short-sighted and self-interested. Thus, if we intentionally or inadvertently increase ego drive, we may well create more suffering than we relieve. So pushing ideas of metaphysical free will without proper evidence and specificity could easily have net negative societal consequences.
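One hedged way to picture the shortening of causal inference chains is to model fear as a lowered discount on future consequences; the values below are purely illustrative, not a claim about actual psychology:

```python
# A hedged sketch of "shortened causal inference chains": fear is modelled
# as a lower discount factor (all values purely illustrative), so distant
# consequences barely register and immediate payoffs dominate.

def discounted_value(rewards: list[float], gamma: float) -> float:
    """Sum of per-step rewards, discounted by gamma for each step of delay."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

options = {
    "selfish":   [5.0, -1.0, -1.0, -1.0, -1.0],  # immediate gain, ongoing harm
    "prosocial": [0.0,  2.0,  2.0,  2.0,  2.0],  # delayed but larger benefit
}

for label, gamma in (("calm", 0.9), ("fearful", 0.3)):
    pick = max(options, key=lambda k: discounted_value(options[k], gamma))
    print(label, "->", pick)
# calm -> prosocial; fearful -> selfish: short-sighted and self-interested.
```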