Executive summary: The author argues that people can shift society toward a stable “cooperative equilibrium” by publicly rewarding altruistic actions, even if it requires initial sacrifice, because others will adapt and reinforce the norm over time.
Key points:
The author contrasts “Selfishland,” where individually rational selfish behavior leads to worse collective outcomes, with “Altruisticland,” where people reward altruism and achieve higher cumulative utility.
In Altruisticland, people financially reward actions that benefit others, creating incentives to act altruistically when benefits exceed personal costs.
The current world is between these extremes, with some incentives (markets, laws) but persistent under-rewarding of public goods, knowledge creation, and risk mitigation.
The main barrier is equilibrium: if others act selfishly, individuals lack incentive to act altruistically, creating a stable but suboptimal state.
The author claims more advanced game theory (e.g., reputation dynamics, Bayesian learning) implies equilibria can shift if enough people change strategies and others update in response.
Early adopters must bear an “altruistic sacrifice,” but the author argues this can pay off if the cooperative equilibrium is reached and sustained.
The expected value of switching increases if there is a non-trivial chance of very long lifespans (e.g., via longevity escape velocity, LEV), since long-term benefits dominate short-term costs.
To reduce risk, individuals can gradually increase altruism (e.g., slightly above average), limiting downside if others do not follow.
Imperfect observability and attribution can be mitigated with partial knowledge, decentralized funding mechanisms, and potentially future tools like prediction markets.
The system should remain decentralized to avoid power concentration, and individuals are encouraged to publicly reward good work, repeat this behavior, and promote the norm to build trust that altruism is rewarded.
Epistemic status: This is a speculative, normative proposal relying on assumptions about behavioral adaptation, future technology, and long-term incentives; key uncertainties include whether coordination dynamics will shift as described and whether sufficient adoption can occur.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
This is not an accurate summary.
It omits the most important mechanism that I described in my post: rewarding the act of rewarding itself. That mechanism is the core of the proposal, and the summary leaves it out entirely.
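To make that mechanism concrete, here is a minimal toy simulation. It is an illustrative sketch, not code from the post, and every parameter in it (the reward cost and value, the compensation rate, the initial share of rewarders) is an invented number:

```python
# Toy model (illustration only, made-up numbers): agents who reward others
# are themselves compensated by the other rewarders, so once enough
# rewarders exist, rewarding stops being a net sacrifice.
import random

random.seed(0)

N = 100                # population size
ROUNDS = 50
REWARD_COST = 1.0      # what a rewarder pays to reward someone
REWARD_VALUE = 3.0     # what the rewarded person receives
COMP_RATE = 0.02       # compensation each other rewarder pays per reward given
FRAC_REWARDERS = 0.2   # initial share of agents who follow the rewarding norm

is_rewarder = [random.random() < FRAC_REWARDERS for _ in range(N)]
payoff = [0.0] * N
n_rewarders = sum(is_rewarder)

for _ in range(ROUNDS):
    for i in range(N):
        if not is_rewarder[i]:
            continue
        # The rewarder pays a cost so a random agent benefits
        # (possibly themselves; fine for a toy model)...
        j = random.randrange(N)
        payoff[i] -= REWARD_COST
        payoff[j] += REWARD_VALUE
        # ...and the act of rewarding is itself rewarded by the other
        # rewarders. Once COMP_RATE * REWARD_VALUE * (n_rewarders - 1)
        # exceeds REWARD_COST, following the norm is individually profitable.
        payoff[i] += COMP_RATE * REWARD_VALUE * (n_rewarders - 1)

def mean(xs):
    return sum(xs) / len(xs)

print("mean payoff, rewarders:    ", mean([p for p, r in zip(payoff, is_rewarder) if r]))
print("mean payoff, non-rewarders:", mean([p for p, r in zip(payoff, is_rewarder) if not r]))
```

Under these invented parameters, the rewarders end with a higher mean payoff than the non-rewarders, which is the sense in which rewarding the rewarders makes the norm self-sustaining; setting COMP_RATE to zero in the same code turns rewarding back into a pure sacrifice.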
“Epistemic status: This is a speculative, normative proposal relying on assumptions about behavioral adaptation, future technology, and long-term incentives; key uncertainties include whether coordination dynamics will shift as described and whether sufficient adoption can occur.”
This is not speculative; it’s based on a theoretical justification that makes logical sense. If it doesn’t make sense to you, please point out why.
It’s definitely not normative, in the sense that it doesn’t tell people what to do; rather, it offers a solution.
Key uncertainties, in my opinion:
Whether people understand the reasoning behind the solution.
Whether people have sufficient observability of others’ actions for this to work.
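For reference, the "equilibrium barrier" described in the summary's key points can be stated as a stag-hunt coordination game with two Nash equilibria. This is a sketch with payoff numbers invented purely for illustration, not figures from the post:

```python
# Stag-hunt sketch of the Selfishland/Altruisticland contrast.
# All payoff numbers are invented for illustration only.
# PAYOFF[(mine, theirs)] = my payoff given my strategy and what others play.
PAYOFF = {
    ("altruistic", "altruistic"): 4,  # cooperative equilibrium: best for all
    ("altruistic", "selfish"):    0,  # the early adopter's "altruistic sacrifice"
    ("selfish",    "altruistic"): 3,
    ("selfish",    "selfish"):    2,  # selfish equilibrium: stable but suboptimal
}

def best_response(theirs):
    """The strategy that maximizes my payoff, given what others play."""
    return max(("altruistic", "selfish"), key=lambda mine: PAYOFF[(mine, theirs)])

# Both uniform profiles are self-reinforcing (Nash equilibria):
assert best_response("altruistic") == "altruistic"
assert best_response("selfish") == "selfish"
print("best response to altruistic others:", best_response("altruistic"))
print("best response to selfish others:   ", best_response("selfish"))
```

Because the best response to selfishness is selfishness, no individual escapes the bad equilibrium alone; the post's claim is that a visible mass of strategy-switchers changes what everyone else's best response is.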