Bryan Johnson seems more EA aligned than I expected
Link post
Written by Claude, and very lightly edited.
In a recent episode of The Diary of a CEO podcast, guest Bryan Johnson, founder of Kernel and the Blueprint project, laid out a thought-provoking perspective on what he sees as the most important challenge and opportunity of our time: aligning human and artificial intelligence towards the shared goal of maximizing existence and cooperation.
Johnson argues that with the rapid advancement of AI capabilities, we likely have less time than most people think before superintelligent AI systems emerge that will “run this planet.” The existential risk is that, without the right goal structures, unaligned AI and uncontrolled human intelligence share the same risk profile: both tend toward self-destruction, jeopardizing the future.
His proposed solution is for humanity to unite around the simple but powerful goal of “Don’t Die”, at both the individual and civilizational levels. Using himself as an “n of 1” experiment, Johnson’s Blueprint project aims to maximize his own healthspan and lifespan, serving as a model for how humanity’s complex systems could align around the singular goal of self-preservation and radical life extension.
The implication is that if we can get all humans on board with “Don’t Die” as our species-level terminal goal, we would then have a clear basis for aligning AI systems with human values and ensuring a cooperative, positive future. Of course, this is an immense challenge that would require reorienting many core human values, behaviors and social structures. But Johnson is optimistic it’s possible, and believes we have an incredible opportunity to bridge to “the most extraordinary existence that we are aware of in the galaxy.”
This framing resonated with me and seems relevant to a number of active debates in EA, from longtermism and existential risk to AI alignment and transhumanism. While there are many open questions and potential flaws to interrogate, I’m curious what others think about the philosophical and practical implications of “Don’t Die” as a unifying goal for human and AI thriving. At the very least, I found it to be a unique perspective.
Here’s Claude’s summary of the transcript, produced in response to the prompt: “Summarize what the guest, Bryan, is trying to achieve and what he thinks of AI.”
Based on the conversation, Bryan Johnson is on a mission to help humanity survive and thrive. He believes the most important thing we need to focus on in the 21st century is figuring out how to cooperate and align our goals, both as individuals and with the artificial intelligence systems that will inevitably supersede human intelligence in the near future.
Some key points about what Bryan is trying to achieve:
- He is using himself as an “n of 1” experiment to solve problems like existence, cooperation, and goal alignment within his own body’s 35 trillion cells. His “Blueprint” project aims to maximize his lifespan and healthspan.
- He believes that if he can get his own complex system of 35 trillion cells to cooperate towards the singular goal of “don’t die”, it provides a model for how humanity as a species could align around the same objective.
- The purpose is to find a way for all forms of intelligence on Earth (human, artificial, and the biosphere itself) to coexist and cooperate rather than self-destruct. He sees this as the most urgent problem facing humanity.
Regarding AI, Bryan believes:
- Artificial intelligence has already surpassed humans and will inevitably “run this planet” and govern all forms of cooperation. We have less time than most people think before superintelligent AI emerges.
- The risk profiles of unaligned AI and uncontrolled human intelligence are the same: both tend toward self-destruction without the right goal structures.
- To have any chance of a positive future, we need to figure out how to align the goals of AI systems with human goals. But first we need to resolve our own internal conflicts and agree upon a common goal as a species, which he proposes should be “don’t die.”
In summary, Bryan sees the alignment of human and artificial intelligence towards the shared goal of maximizing existence and cooperation as the most important challenge and opportunity of our time. His personal health optimization is an attempt to solve that challenge within himself as a model for humanity. While he recognizes the immense difficulty, he is optimistic it’s possible if we reorient our values and behaviors.