Thank you for the thoughtful follow-up. I fully agree that laws and formal rules work only to the extent that people actually believe in them. If a regulation lacks internal assent, it quickly turns into costly policing or quiet non-compliance. So the external layer must rest on genuine, internalised conviction.
Regarding the prospect of a new “behavior-first” ideology: I don’t dismiss the idea at all, but I think such an ideology would need to meet three demanding criteria to avoid repeating the over-promising grand narratives of the past:
Maximally inclusive and evidence-based – It should speak a language that diverse groups can recognise as their own, while remaining anchored in empirically verifiable claims (no promise of a metaphysical paradise).
Backed by socio-technical trust mechanisms – Cryptographically auditable processes, open metrics, and transparent feedback loops, so that participants can see that principles are applied uniformly and can verify claims for themselves (a minimal sketch of such an auditable process follows this list).
A truthful, pragmatic beacon rather than a utopian slogan – A positive horizon that is achievable in increments, with clear milestones and a built-in capacity for course correction. In other words, a lighthouse—bright, but firmly bolted to the rocks—rather than a mirage.
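To make the second criterion concrete, here is a minimal sketch, assuming nothing more than a shared, append-only log: each entry commits to the hash of the previous one, so any participant holding a copy can recompute the chain and confirm that past decisions have not been silently rewritten. All names here (AuditLog, grant_review, criteria_version) are hypothetical placeholders, not a reference to any existing system.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class Entry:
    """One audited decision: what was decided, under which rule, plus the hash of the previous entry."""
    payload: dict
    prev_hash: str

    def digest(self) -> str:
        # Serialize deterministically so every participant computes the same hash.
        blob = json.dumps({"payload": self.payload, "prev_hash": self.prev_hash}, sort_keys=True)
        return hashlib.sha256(blob.encode("utf-8")).hexdigest()


class AuditLog:
    """Append-only, hash-chained log: altering any past entry breaks every later hash."""

    def __init__(self):
        self.entries: list[Entry] = []

    def append(self, payload: dict) -> str:
        prev = self.entries[-1].digest() if self.entries else "genesis"
        entry = Entry(payload=payload, prev_hash=prev)
        self.entries.append(entry)
        return entry.digest()

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self.entries:
            if entry.prev_hash != prev:
                return False
            prev = entry.digest()
        return True


log = AuditLog()
log.append({"rule": "grant_review", "applicant": "A", "decision": "approved", "criteria_version": 3})
log.append({"rule": "grant_review", "applicant": "B", "decision": "rejected", "criteria_version": 3})
print(log.verify())  # True; changing any past field makes this False
```

Open metrics and feedback loops would then sit on top of such a log: the rules are published, and anyone can re-run the verification for themselves.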
You mentioned the possibility of viable formulas that have never been tried. I would be very interested to hear your ideas: what practical steps or pilot designs do you think could meet the inclusivity, transparency, and truthfulness tests outlined above?
Thank you very much for your interest in my proposal.
My idea of an “ideology of behavior” seems to me to be the logical conclusion of the civilizing process that led certain moralistic religions (the so-called “compassionate religions”) to prioritize conceptions of moral motivation with behavioral implications (benevolent and altruistic behavior), conceptions that always involved the internalization of emotions associated with prosocial symbolism: the individual soul, charity, grace, compassion… This is the Christian terminology, but the compassionate religious cultures of the East have their own terms.
The goal is always moral evolution: using certain symbolic stimuli associated with non-aggressive, empathetic, benevolent, and altruistic behavioral motivations. “Producing saints.”
I can’t think of a better way to produce effective altruism.
Historically, the emergence of cohesive subcultural minorities has always had great power to influence lifestyle changes from a moral perspective.
Monasticism was an invention of Buddhism, although it later gained great importance in the West. The puritanical subcultures of Reformed Christianity also played a role.
In my opinion, the creation of a morally influential minority that promotes an extremely prosocial lifestyle and, for the first time, develops on principles of rationality could have a profound impact on the conditions of today’s society. Many 19th-century thinkers already asked themselves: if astrology evolved into astronomy, and alchemy into chemistry, why couldn’t the religions of the past have a functional and coherent equivalent in an enlightened world?
The idea of an “influential minority,” by the way, is not foreign to “Effective Altruism.” It appears, for example, in Schubert and Caviola’s book “Effective Altruism and the Human Mind”:
Instead of trying to reach out indiscriminately to the population at large, outreach efforts could be specifically targeted at those who are more open to effective altruism. Who are these people who find effective altruism appealing? What psychological traits make people more positively inclined toward effective altruism? (p. 120)
I think we can be ambitious and set a bigger goal: if we can locate individuals with a greater propensity to perform altruistic acts, we can also locate individuals with a propensity to improve their behavior to the limits of extreme prosociality. These would be the “believers” in the behavioral ideology: individuals rationally motivated to correct their behavior in order to achieve a clear goal (extreme prosociality, “saintliness”). Didn’t the people in “Alcoholics Anonymous” do something similar with regard to changing behavior almost a hundred years ago? And they certainly didn’t need professional psychologists to do so. They relied on clear motivation, clarity of objectives… and a lucid process of development through trial and error.
I mentioned another example from the past: the Tolstoyan movement. It failed because it was poorly conceived and poorly organized, but it demonstrated that it was possible to create a non-political social movement based on principles of extreme prosocial behavior and not necessarily linked to any belief in the supernatural.
What are the motivations for altruistic action? What are the mechanisms for internalizing prosocial behavioral values? What psychological incentives and rewards do those who undertake a process of change and renunciation based on an altruistic ideal receive? How are ideologies created, cultivated, and made to flourish?
In our time, we have historical evidence of all kinds. We already know many things, and although science can advise us, this should be a matter of individual motivation and shared wisdom.
As an initial formula, I would suggest a “monastic” organization for the rational pursuit of an altruistic lifestyle. An altruistic lifestyle implies controlling aggression; cultivating rationality, empathy, and scientific curiosity; and, above all, developing benevolence in behavior. In my view, such a development would not necessarily be less attractive to many young people today than monasticism was in the Middle Ages.
And, above all, keep in mind: unlike political ideologies or mass religions, a monastic structure only seeks to attract a minority. One person in a thousand? We would then be talking about eight million people with 100% active altruistic behavior!
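The arithmetic behind the “eight million” figure, assuming a world population of roughly eight billion people:

$$8 \times 10^{9} \times \tfrac{1}{1000} = 8 \times 10^{6} \ \text{people}.$$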
Thank you for elaborating — your vision of creating a rational ‘moral elite’ is truly fascinating! You’re absolutely right about the core issue: today’s hierarchy, centered on financial achievement and consumption, stifles moral development. Your proposed alternative — a system where status derives from prosocial behavior (‘saintliness without dogma’) — strikes at the heart of the problem.
However, I see two practical challenges:
Systemic dependency: Such a transformation requires overhauling economic incentives and institutions, not just adopting new norms. As your own examples show (Tolstoyans, AA), local communities can create pockets of alternative ethics, but scaling this to a societal level clashes with systems built on competing principles (e.g., market competition). This doesn’t invalidate the idea — it simply means implementation must be evolutionary, not revolutionary.
Fragmentation risk: Replacing one hierarchy (financial) with another (moral) could spark new conflicts, especially with religious communities for whom ‘saintliness’ is central. For global impact, any framework must be inclusive — complementing existing paths (religious/secular) rather than rejecting them.
This is where EA’s evolutionary approach — and your own work — shines:
We operate by gradually ‘embedding’ high-moral norms (δ↑, w↑) into the basic layer (ρ↑) through evidence, institutions, and cultural narratives (a toy illustration of this notation follows below).
Your ideas about intentionally shaping prosocial norms through communities aren’t an alternative but a powerful complement! They’re tools to accelerate shifting norms (e.g., long-term AI ethics or planetary stewardship) from ‘high’ to ‘basic’.
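One purely illustrative way to read this notation (my own toy sketch, not the framework’s actual definition): treat a norm’s readiness to move from the ‘high’ to the ‘basic’ layer as increasing in its time horizon $\delta$, its moral scope $w$, and its demonstrated implementability $\rho$, for example

$$\text{readiness}(n) \;\propto\; \delta_n \cdot w_n \cdot \rho_n, \qquad \delta_n, w_n, \rho_n \in (0, 1],$$

so that raising $\delta$ and $w$ without also raising $\rho$ leaves an ideal aspirational rather than foundational.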
A timely synthesis: I’m currently drafting a post applying Time × Scope to AI alignment. It explores how a technologically mediated moral hierarchy (not sermons or propaganda) could act as a sociotechnical solution by:
Rewarding verified contributions to the common good (e.g., AI safety research, disaster resilience) via transparent metrics (a minimal sketch of such a metric follows this list).
Creating status pathways based on moral impact — not wealth.
Evolving existing systems: No economic upheaval or religious conflict; integrates with markets/institutions.
Inclusivity: Offers a neutral ‘language of moral contribution’ accessible to all worldviews.
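As a minimal sketch of what such a transparent metric could look like (hypothetical categories, weights, and names, not a worked-out proposal): the scoring rule is public and deterministic, so anyone can recompute a contributor’s standing from the verified records.

```python
from dataclasses import dataclass

# Publicly documented weights: the scoring rule itself is part of the open metric,
# so anyone can recompute a contributor's standing. Categories and weights here
# are hypothetical placeholders, not a proposed canon.
CATEGORY_WEIGHTS = {
    "ai_safety_research": 3.0,
    "disaster_resilience": 2.0,
    "community_mentoring": 1.0,
}


@dataclass
class Contribution:
    contributor: str
    category: str
    verified: bool  # e.g., confirmed by independent reviewers or an audit log
    effort_hours: float


def impact_score(contributions: list[Contribution], contributor: str) -> float:
    """Deterministic, recomputable score: only verified work counts, weighted by category."""
    return sum(
        CATEGORY_WEIGHTS.get(c.category, 0.0) * c.effort_hours
        for c in contributions
        if c.verified and c.contributor == contributor
    )


records = [
    Contribution("alice", "ai_safety_research", verified=True, effort_hours=10),
    Contribution("alice", "community_mentoring", verified=False, effort_hours=40),  # unverified: ignored
    Contribution("bob", "disaster_resilience", verified=True, effort_hours=5),
]
print(impact_score(records, "alice"))  # 30.0
print(impact_score(records, "bob"))    # 10.0
```

The point of the sketch is only that status is derived from a rule anyone can audit, not from wealth or from an opaque authority.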
Your insights are invaluable here! If you’d like to deepen this discussion:
Let’s connect via DM to explore your models for motivation/community design.
I’d welcome your input on my AI alignment framework (especially how to ‘operationalize’ moral growth).
Your focus on inner transformation is key to ensuring technology augments human morality — it’s worth building together.
Perhaps the ‘lighthouse’ we need isn’t a utopian ideology, but a practical, scalable approach — anchored in evidence, open to all, and built step by step. Would love your thoughts!
Thank you very much for your attention to my proposal. I know that new ideas are difficult to understand (especially if you’re not very good at explaining them), and particularly when they involve something as unusual as promoting new ideological movements (let’s say, “utopian” ones).
I just want to make a few brief clarifications:
your vision of creating a rational ‘moral elite’
Moral evolution initiatives in the sense of pacifism, altruism, and benevolence stemming from monastic structures do not seek to create elites, as they are situated outside the conventional world. They can also be referred to as community initiatives of “witness” (for example, the Anabaptist communities or the Quakers). However, associations such as Freemasonry, Opus Dei, and even initiatives associated with EA, such as “80,000 Hours,” are initiatives to create elites. What they all have in common is that they are, in one way or another, influential minorities (all social change is logically set in motion by minorities).
Your ideas about intentionally shaping prosocial norms through communities
I don’t propose “norms,” but rather styles of behavior based on internalized ethical values. A non-coercive prosociality.
All activities based on altruism can be complementary, although dilemmas about priorities always arise.
I understand the importance given to “long-term” issues and the alarm created by issues related to AI. Unfortunately, not all of us are sufficiently prepared to grasp the magnitude of such threats to the common good.
In my opinion, the essential factor in the progress of civilization is moral progress, and moral progress occurs through social psychological mechanisms that are often more accessible to the understanding of people motivated by empathy and altruism, and that fall more within the realm of “wisdom.”
Thank you for this thoughtful exchange—it’s helped clarify important nuances. I genuinely admire your commitment to ethical transformation. You’re right: the future will need not just technological solutions, but new forms of human solidarity rooted in wisdom and compassion.
While our methodologies differ, your ideas inspire deeper thinking about holistic approaches. To keep this thread focused, I suggest we continue this conversation via private messages—particularly if you’d like to explore:
Integrating your vision of organic prosociality into existing systems,
Or designing pilot projects to test these concepts.
For other readers: This discussion vividly illustrates how the Time × Scope framework operates in practice—‘high-moral’ ideals (long-term δ, wide-scope w) must demonstrate implementability (↑ρ) before becoming foundational norms. I’d love to hear: What examples of such moral transitions do you see emerging today?