# Common fallacies in human reasoning

This post notes a few common fallacies in our reasoning. I have caught myself making some of them, and sometimes even realising mid-mistake that I am making them! The post is slightly biased towards college-experience examples (because I’m a student), but the ideas should apply generally.
I reproduce the content below.
The human mind is a wonderful example of emergent behaviour. A vast number of neurons fire together in coordination to give rise to conscious optimisers that are able to make independent decisions. And yet, the human mind is a morally and rationally flawed apparatus. In day-to-day life one constantly encounters situations that pose a moral crisis: a dead-end in the decision-making process, if one is assumed to be guided by a set of principles.
In this post, I’ll explore a few logical fallacies which, if noticed and corrected for, can lead to a more enriching living experience. The fallacies I note here can be examined through a sociological, economic, or psychological lens, and each lens would likely lead to different conclusions.
## Disordered causality
1. Man plants seeds in his backyard.
2. After some time, saplings sprout in his backyard.
2 is caused by 1. This is a pretty simple one-directional causal relation. But identifying causal relationships in real life is messy and complicated. The identification is important because it allows us to attribute importance to actions correctly and hence optimise our behaviour. In the above example, what if the man, for whatever reason, also performs a shamanic dance whenever he plants seeds in his backyard?
1. Man performs shamanic dance in his backyard.
2. After some time, saplings sprout in his backyard.
This is, of course, absurd to us _because we have out-of-context knowledge_ (namely, the well-established science of how plants grow). But as you can imagine, dismissing a spurious relationship as absurd, or even entertaining the existence of an alternative cause, requires external knowledge that is not generally available to most humans.
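As a toy illustration of how a hidden common cause can make an irrelevant action look causal (this sketch is my own addition, with made-up probabilities), consider a simulation in which the man always dances when he plants, and only planting affects sprouting:

```python
import random

random.seed(0)

def one_day():
    plants = random.random() < 0.3              # the man decides to plant seeds
    dances = plants                             # he always dances when he plants
    sprouts = plants and random.random() < 0.9  # only planting affects sprouting
    return dances, sprouts

days = [one_day() for _ in range(10_000)]
p_sprout_given_dance = sum(s for d, s in days if d) / sum(d for d, _ in days)
p_sprout_given_no_dance = sum(s for d, s in days if not d) / sum(not d for d, _ in days)

print(p_sprout_given_dance)     # ~0.9: dancing strongly "predicts" sprouting
print(p_sprout_given_no_dance)  # 0.0: yet the dance does nothing; planting confounds both
```

Without the out-of-context knowledge that only the seeds matter, the observed correlation alone gives no way to separate the dance from the planting.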
A very common example is the false sense of entitlement that some IITians derive from _being IITians_. Their reasoning rests on the institute’s reputation, which in turn rests on a long history of talented and skilled engineers who have proved themselves on their own merit around the globe. That reputation has nothing to do with one’s own merit. IIT is a filter that brings the smartest and most sincere students together. It never promised to be a ticket to heaven.
## Heuristic inference
In the face of complex computation, our brain makes “reasonable” approximations to find answers and make decisions quickly. These heuristic inferences must have served us well, because evolution has ingrained them in us. But they are not always right, especially in an environment changing at a rate mismatched with the evolution of the mind.
Age and seniority are considered marks of wisdom. Older people have lived longer and have more experience; therefore, they must be wiser. This may not always be true, and it can lead to devastating results if left unchallenged. What we forget to take into account is the _quality_ of those experiences: in the modern context it can differ wildly, and people at different stages of life can end up with disproportionately varying quality of experience and, by extension, wisdom<sup>1</sup>.
Alternatively, which is better: spending 25 years of one’s life honing one particular skill and becoming 99.9% good at it, or spending 8 years becoming 80% good at 5 different things? Opportunity costs force us to rethink our basic model of evaluating humans and human values.
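As a toy sketch (my own addition, with invented numbers and an assumed value model), the answer flips depending on how sharply the value of a skill rewards mastery:

```python
def skill_value(proficiency, exponent):
    # toy assumption: a skill's value scales as proficiency ** exponent
    return proficiency ** exponent

for exponent in (1, 4, 50):                       # how strongly value rewards mastery
    specialist = skill_value(0.999, exponent)     # one skill, 25 years
    generalist = 5 * skill_value(0.80, exponent)  # five skills, 8 years
    print(exponent, round(specialist, 4), round(generalist, 4))

# exponent 1 or 4: the generalist wins outright, in roughly a third of the time
# exponent 50 (winner-take-most fields): only near-perfect mastery is worth anything
```

The point is not which answer is right, but that the comparison depends entirely on the assumed value model and on the 17 years of opportunity cost.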
One possible solution to the seniority-wisdom disparity problem is to move towards a different indicator. I place more emphasis on the personal projects a person has done than on their resume (which I think fails to serve its purpose anyway<sup>2</sup>).
## Popular value systems
It is extremely easy to fool oneself into thinking that one needs the things others need, or is capable of doing the things others do. Every person behaves according to their own internal value system, either consciously or subconsciously (the “set of principles” I mentioned before). The problem arises when value systems tend to converge onto “popular” ones.
What do I even mean? Let’s say your friend winds down by partying on weekends. That doesn’t necessarily imply that partying on weekends would help you relieve your stress; their value system and yours are not identical. In the absence of sufficient awareness, one’s mind latches onto the first value system it finds and tries to [“rationalise” it][0]. This microscopic behaviour leads to the macroscopic phenomenon of gradual convergence of value systems for the majority of people, which in turn leads to the development of human institutions and social structures that reward people with such value systems and penalise everyone else.
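A minimal sketch of that micro-to-macro dynamic (my own toy simulation, with arbitrary numbers): even a small per-round chance of unaware people latching onto the most visible value system is enough to make one system dominate.

```python
import random
from collections import Counter

random.seed(1)

# 1000 people, each starting with one of 20 distinct value systems
systems = [random.randrange(20) for _ in range(1000)]

for _ in range(30):                      # thirty "rounds" of social exposure
    most_popular = Counter(systems).most_common(1)[0][0]
    for i in range(len(systems)):
        if random.random() < 0.05:       # occasionally, an unaware person
            systems[i] = most_popular    # adopts the most visible value system

print(Counter(systems).most_common(3))   # one "popular" system is now held by a large majority
```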
This is bad news because we want varied value systems and, more importantly, an incentive for people to experiment with new value systems without being heavily penalised. An appallingly nasty example is the rigid, concretised set of career paths available to young people in India.
## End-goal misalignment
Sufficiently rational agents tend to optimise the signal that measures a metric rather than the metric itself. One might learn for learning’s sake, but once one notices that learning _for exams_ is enough to get good marks, one’s worldview will shift from “study for life” to “study for the exam”. This is probably well known to people much smarter than me. The real problem arises from our ignorance of, and refusal to acknowledge, this as a socio-economic problem. I have heard professors and teachers sermonise about the merits of a good education, one that values learning over evaluation, and ask us to “try”. What most people fail to assimilate is that deviating from an optimal strategy is a losing game in the long run, and no rational agent would take that path. Thus, we have engineered a social system whose optimal strategy is undesirable in the long term.
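A minimal sketch of this (my own addition, with invented payoffs): suppose marks, the signal, reward both deep study and exam-specific cramming, while understanding, the actual metric, rewards only deep study.

```python
# Toy model: an agent splits 100 hours between deep study and exam-specific cramming.

def marks(deep_hours, cram_hours):
    return 0.6 * deep_hours + 1.0 * cram_hours   # cramming earns more marks per hour

def understanding(deep_hours, cram_hours):
    return 1.0 * deep_hours                      # cramming adds nothing to real learning

best_deep = max(range(101), key=lambda d: marks(d, 100 - d))
print(best_deep)                                  # 0: a marks-optimiser crams every hour
print(understanding(best_deep, 100 - best_deep))  # 0.0: and ends up understanding nothing
```

The agent behaves perfectly rationally given the signal it is rewarded on; the failure lies in how the signal was designed.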
End-goal misalignment is closely tied to the [principal-agent problem][1], which remains unsolved. I’ll talk about this in detail in a future post.
## Temporal goal discounting
Let us assume that you are a rational agent trying to live your life according to your own value system. You may have some goals that you wish to achieve in the future. You may then proceed to define smaller subgoals (so-called [“instrumental goals”][2]) that you think will help you achieve your goal.
However, more often than not, you will make the mistake of temporally discounting the skills, and indeed all the experiences, that you will gather along the way to your goal. This may not seem like a big issue, but if you are constantly updating your worldview based on new information, you may develop better subgoals for your goal, or perhaps realise that your goal is actually an instrumental goal for something else entirely.
This is bad news because it deeply skews a rational agent’s agenda for achieving any goal. Moreover, it presents a real difficulty in training a sufficiently general and intelligent system that updates an internal worldview based on new information: as soon as the worldview changes, the system must recalculate its trajectory towards its goal, and so on. In principle, we must recalibrate our trajectories every moment (which, of course, is absurd). Thus, we are stuck with a new worldview and a stale set of trajectories towards our goals.
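As a toy illustration of the discounting part (my own sketch, with an assumed discount rate), simple exponential discounting makes skills that will only pay off years from now look nearly worthless at planning time:

```python
def present_value(value, years_away, discount_rate=0.3):
    # present value under simple exponential discounting
    return value / (1 + discount_rate) ** years_away

# A skill worth 100 units once mastered, acquired at different points of a 10-year plan
for year in (1, 3, 5, 10):
    print(year, round(present_value(100, year), 1))
# 1 -> 76.9 | 3 -> 45.5 | 5 -> 26.9 | 10 -> 7.3 : distant gains barely register in today's plan
```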
## Corrective measures
What can we do to combat these fallacies (and possibly many more)? Here I present a few higher-level approaches that I find most appealing. Be warned that they deviate heavily from the “optimal” or rational path and are scientifically untested, though backed by plenty of empirical observation.
A recurrent shortcoming that seems to have fuelled or augmented the above fallacies is a constrained worldview: the internal mental picture of how the world works. Getting better at making decisions therefore requires a deliberate attempt to internalise new ideas, to stay constantly vigilant about the state of one’s internal world model, and to keep in mind the _unknown_ unknowns out there. A concrete step that has been empirically shown to work is reading. It is a good habit.
I’ll stop here and will hopefully update this post with more fallacies as I encounter them in my life.
---
### Footnotes
1: Why? This could be the topic of another post, but I broadly attribute this change to the democratisation of information.
2: Resumes have become avenues for people to selectively present the best versions of themselves, which incentivises large populations to converge on the notion of a “best” resume over time, thereby defeating the purpose of revealing one’s true identity.
[0]: https://www.lesswrong.com/posts/SFZoEBpLo9frSJGkc/rationalization
[1]: https://www.lesswrong.com/tag/principal-agent-problems
[2]: https://arbital.com/p/terminal_vs_instrumental/