Written by Claude, and very lightly edited.
In a recent episode of The Diary of a CEO podcast, guest Bryan Johnson, founder of Kernel and the Blueprint project, laid out a thought-provoking perspective on what he sees as the most important challenge and opportunity of our time: aligning human and artificial intelligence towards the shared goal of maximizing existence and cooperation.
Johnson argues that with the rapid advancement of AI capabilities, we likely have less time than most people think before superintelligent AI systems emerge that will “run this planet.” The existential risk is that without the right goal structures, the risk profiles of unaligned AI and uncontrolled human intelligence are the same—they tend towards self-destruction and jeopardizing the future.
His proposed solution is for humanity to unite around the simple but powerful goal of “Don’t Die”—both at the individual level and civilizational level. Using himself as an “n of 1” experiment, Johnson’s Blueprint project aims to maximize his own healthspan and lifespan, serving as a model for how humanity’s complex systems could align around the singular goal of self-preservation and radical life extension.
The implication is that if we can get all humans on board with “Don’t Die” as our species-level terminal goal, we would then have a clear basis for aligning AI systems with human values and ensuring a cooperative, positive future. Of course, this is an immense challenge that would require reorienting many core human values, behaviors and social structures. But Johnson is optimistic it’s possible, and believes we have an incredible opportunity to bridge to “the most extraordinary existence that we are aware of in the galaxy.”
This framing resonated with me and seems relevant to a number of active debates in EA, from longtermism and existential risk to AI alignment and transhumanism. While there are many open questions and potential flaws to interrogate, I’m curious what others think about the philosophical and practical implications of “Don’t Die” as a unifying goal for human and AI thriving. At the very least, I found it to be a unique perspective.
Here’s a summary of the transcript from Claude, when asked “Summarize what the guest, Bryan, is trying to achieve and what he thinks of AI”:
Based on the conversation, Bryan Johnson is on a mission to help humanity survive and thrive. He believes the most important thing we need to focus on in the 21st century is figuring out how to cooperate and align our goals, both as individuals and with the artificial intelligence systems that will inevitably supersede human intelligence in the near future.
Some key points about what Bryan is trying to achieve:
He is using himself as an “n of 1” experiment to solve problems like existence, cooperation, and goal alignment within his own body’s 35 trillion cells. His “Blueprint” project aims to maximize his lifespan and healthspan.
He believes if he can get his own complex system of 35 trillion cells to cooperate towards the singular goal of “don’t die”, it provides a model for how humanity as a species could align around the same objective.
The purpose is to find a way for all forms of intelligence on Earth—human, artificial, and the biosphere itself—to coexist and cooperate rather than self-destruct. He sees this as the most urgent problem facing humanity.
Regarding AI, Bryan believes:
Artificial intelligence has already surpassed humans and will inevitably “run this planet” and all forms of cooperation. We have less time than most people think before superintelligent AI emerges.
The risk profiles of unaligned AI and uncontrolled human intelligence are the same—they tend toward self-destruction without the right goal structures.
To have any chance of a positive future, we need to figure out how to align the goals of AI systems with human goals. But first we need to resolve our own internal conflicts and agree upon a common goal as a species, which he proposes should be “don’t die.”
In summary, Bryan sees the alignment of human and artificial intelligence towards the shared goal of maximizing existence and cooperation as the most important challenge and opportunity of our time. His personal health optimization is an attempt to solve that challenge within himself as a model for humanity. While he recognizes the immense difficulty, he is optimistic it’s possible if we reorient our values and behaviors.
This is an extremely rich guy who isn’t donating any of his money. I wouldn’t call him ‘aligned’ with EA at all.
I would also just be careful about taking him at his word. He’s only started talking about this framing recently (I’ve followed him for a while because of a passing interest in Kernel). He may well just be a guy who’s very scared of dying, with an incomprehensible amount of money to spend on it, looking for some admirers.
FWIW, I totally don’t consider “donating” a necessary component of taking effective altruistic action. Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems achieved by for-profit companies.
I don’t have a particularly strong take on Bryan Johnson, but using “donations” as a proxy seems pretty bad to me.
Hmm, I think having the mindset behind effective altruistic action basically requires you to feel the force of donating. It’s often correct to not donate because of some combination of expecting {better information/deconfusion, better donation opportunities, excellent non-donation spending opportunities, high returns, etc.} in the future. But if you haven’t really considered large donations or don’t get that donating can be great, I fail to imagine how you could be taking effective altruistic action. (For extremely rich people.) (Related indicator of non-EA-ness: not strongly considering causes outside the one you’re most passionate about.)
(I don’t have context on Bryan Johnson.)
“Most charities seem much less effective than the most effective for-profit organizations”
This is a big discussion, but I would be interested to see you justify this. I would say many of the biggest GHD achievements and much important work are driven by not-for-profit organizations like charities and governments (the global vaccine alliance, university research institutions, etc.), but obviously it’s a complicated discussion.
Obviously a market economy drives much of it, but I consider this more the water we swim in than the capitalist system itself doing the good.
I would be interested to hear which for-profit businesses you think are counterfactually doing the most good on the margin.
I take a very longtermist and technology-development focused view on things, so the GHD achievements weigh a lot less in my calculus.
The vast majority of world-changing technology was developed or distributed through for-profit companies. My sense is nonprofits are also more likely to cause harm than for-profits (for reasons that would require their own essay to go into, but which are related to their lack of feedback loops).
On a separate claim: I find it really hard to discount the rough period since ~1800 in which a huge amount of new technological development took place in academic or other non-profit contexts (including militaries). When you add pre-production research to that, I think you’d be hard-pressed to find a single world-changing technology since the Enlightenment that doesn’t owe a lot of its existence to non-profit research. Am I misunderstanding your claim?
Academia before the mid-20th century was a for-profit enterprise. It did not receive substantial government grants and indeed was often very tightly intertwined with the development of industry (much more so than today).
Indeed, the degree to which modern academia is operating on a grant basis and has adopted more of the trappings of the nonprofit space is one of the primary factors in my model of its modern dysfunctions.
Separately, I think the contribution of militaries to industrial and scientific development is overrated, though that also would require a whole essay to go into.
I disagree-voted because the latter sounds like a very extraordinary claim. I know you don’t have the time to go into an essay on this, but do you mind sketching the rough logic?
“Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems achieved by for-profit companies.”
I disagree, but even if I did agree: per dollar of investment, I think the best charities far outperform the best for-profit companies in terms of social impact, and we can do a reasonable job of identifying the best charities, such that donating a lot of money to these charities should be seen as a necessary component of being EA-aligned if you’re rich.
I think Peter might be hoping people read this as “a rich and influential guy might be persuadable!” rather than “let’s discuss the minutiae of what constitutes an EA”. I’ve watched quite a few of Bryan’s videos and honestly I could see this guy swing either way on this (could be SBF, could be Dustin, honestly can’t tell how this shakes out).
Yeah I think that’s part of it. I also thought it was very interesting how he justified what he was doing as being important for the long term future given the expected emergence of superhuman AI. E.g., he is running his life by an algorithm in expectation that society might be run in a similar way.
I will definitely say that he does come across as hyper-rational and low-empathy in general, but there are also some touching moments here where he clearly has a lot of care for his family and really doesn’t want to lose them. Could all be an act, of course.
Thanks for the input!
I think of EA as a cluster of values and related actions that people can hold/practice to different extents: for instance, caring about social impact, seeking comparative advantage, thinking about long-term positive impacts, and being concerned about existential risks including AI. He touched on all of those.
It’s true that he doesn’t mention donations. I don’t think that discounts his alignment in other ways.
Useful to know he might not be genuine though.
But cf. the “stages of change” in the transtheoretical model of behavior change. A lack of action suggests he has not reached the action stage, but could be in the contemplation or preparation stages.
Yeah, he could be planning to donate money once his attempt to reduce or overcome mortality is resolved.
He said several times that what he’s doing now is only part one of the plan, so I guess there is an opportunity to withhold judgment and see what he does later.
Having said all that, I don’t want to come across as trusting him. I just heard the interview and was really surprised by all the EA themes that emerged and the narrative he proposed for why what he’s doing is important.
That’s not falsifiable.
Edit: I stand by this; it was a quick way to explain the problems with Jason’s comment. I don’t think we should be too mean to people for not donating (in order to not dissuade them from doing it in the future), but this particular model could be used to excuse basically any behaviour as ‘they might be a potential EA one day’. I don’t think it’s a good defence and wouldn’t want to see it trotted out more often.
Definitely not all of them, but most EAs are extremely rich guys who aren’t donating any of their money.
Thanks for sharing your opinion. What’s your evidence for this claim?
https://forum.effectivealtruism.org/posts/nb6tQ5MRRpXydJQFq/ea-survey-2020-series-donation-data#Donation_and_income_for_recent_years, and personal conversations which make me suspect the assumption of non-respondents donating as much as respondents is excessively generous.
“Not donating any of their money” is definitely an exaggeration, but they don’t donate more than the median rich person: https://www.philanthropyroundtable.org/almanac/statistics-on-u-s-generosity/
Thanks for following up! The evidence you offer doesn’t persuade me that most EAs are extremely rich guys, because it isn’t arguing that. Did you mean to claim that most EAs who are rich guys aren’t donating any of their money, or aren’t donating more than the median rich person?
I also don’t feel particularly persuaded by that claim based on the evidence shared. What are the specific points in the links that are persuasive? I couldn’t see anything particularly relevant from scanning them; as in, nothing that I could use to make an easy comparison between EA donors and median rich people.
I see that “Mean share of total (imputed) income donated was 9.44% (imputing income where below 5k or missing) or 12.5% without imputation” for EAs, and “around 2-3 percent of income” for US households, which seems opposed to your position. But I haven’t checked carefully, and I am not the kind of person who makes these sorts of careful comparisons very well.
I don’t have evidence to link to here, or time to search for it, but my current belief is that most of EA’s funding comes from rich and extremely rich people (often men) donating their money.
Also someone messaged me about a recent controversy that Bryan was involved in. I thought he had been exonerated but this person thought that he had still done bad things.
See: https://www.dailymail.co.uk/news/article-11692609/Anti-aging-biotech-tycoon-accused-dumping-fianc-e-breast-cancer-diagnosis.html
And his response https://twitter.com/bryan_johnson/status/1734257098119356900?t=DHcSxlZ5PkxhREVJkAdXag&s=19
Worth knowing about when judging his character.
I thought this summary by TracingWoodgrains was good (in terms of being a summary; I don’t know enough about the object level to know if it’s true). If roughly accurate, it paints an extremely unflattering picture of Johnson.
I appreciate you putting out a post in support of someone who might have some EA leanings that would be good to pick up on. I may or may not have done so in the past and then removed the post because people absolutely shat on it on the forum 😅 so respect.
Thanks, Jonas! I appreciate the support :P
FWIW I think this kind of post is extremely valuable. I may not see him as very EA-aligned but identifying very rich people who might be a bit EA-aligned is very good because the movement could seek to engage with them more and potentially get funding for some impactful stuff.
In this 2015 post, he said he was on a panel at EA Global and mentioned PlayPumps, a favourite EA example. Here’s the YouTube video of the EA Global panel discussion. EDIT: He’s the guy farthest left, next to the panel host.
Thanks! His post definitely suggests awareness and interest in EA.
I wonder what happened with the panel. He said he would be on it, but from what I can see in that video, he wasn’t. I imagine that someone could find out what happened there by contacting people involved in organising that event. I don’t care enough to prioritise that effort, but I’d appreciate learning more if someone else wants to investigate.
He’s the guy farthest left, next to the panel host. He just looks very different now.