Agree with Geoffrey that it is very hard to understand this post without examples of what is meant by Elon’s “calibration”. What do you mean in your very last sentence: “what, if any, are the reasons specific to Musk as a personality causing him to be so inconsistent in the ways effective altruists should care about most”? Please give some examples—are you implying that buying Twitter in the hopes of making conversation freer and more rational is not a good EA cause area? Or implying that maybe it is a good EA cause area, but Musk is a terrible person to run said project? Or implying that Musk’s other projects, like SpaceX and Tesla, are a waste of effort from an EA perspective? (I would remind you that Elon’s goal has not just been to work on the most important possible cause areas with the money he has, but to found profitable companies that make progress on important-ish causes, such that he can get more money to roll into more important causes in the future. Evidently one can make lots of money in electric car manufacturing that one can’t make in bednet distribution or lobbying for better pandemic-preparedness policy.) Maybe you agree with my parenthetical, but you think that Twitter will not be a moneymaking proposition for Elon, or you think that he should give up on trying to get richer and richer and switch now to working on the most important EA causes.
About Twitter, I would note that Elon has been in charge for just a few days—I don’t think it’s clear yet whether Elon had an “uncalibrated” sense of his capabilities and will ruin Twitter through incompetence, or whether he will succeed at improving it. Maybe after a few months or a few years, the answer of whether Musk’s ownership has been good or bad for Twitter will be clearer.
More generally, I would think that many attempts to launch billion-dollar companies are subject to “high variance”—that is just an unfortunate fact of life when you are trying to do ambitious things. Many of Elon’s companies have been close to bankruptcy at one point or another, but so far they have made it through. Conversely, nobody doubts that Sam Bankman-Fried is a very smart guy, but FTX (although it may have been very close to succeeding and becoming even bigger than it was) is currently being forced to sell itself to Binance for pennies on the dollar.
Personally, I take pride in the EA community’s enthusiasm for “hits-based giving”, and its willingness to take low-probability, high-consequence events seriously. Unfortunately, taking action in this complex world requires making decisions under high uncertainty (including uncertainty about one’s own capabilities and strengths/weaknesses). For instance, I aspire to someday found an EA-aligned charitable organization, even though my only previous job experience has been as an aerospace engineer. It’s possible that I am deluded about my personal charity-running capacities, and it’s possible that I’m furthermore deluded such that I’ll never be able to recognize the ways in which I’m deluded about those capacities. But I think in this situation, it is often reasonable to go ahead and found the charity anyway—otherwise fear and uncertainty will preclude any ambitious action! As Nathan Young says about SBF and the implosion of FTX—“It is unclear if ex-ante this was a bad call from them. There is lots we don’t know.”
None of Musk’s projects are by themselves bad ideas. None of them are obviously a waste of effort either. I agree the impacts of his businesses are mostly greater than the impact of his philanthropy, while the opposite is presumably the case for most philanthropists in EA.
I agree his takeover of Twitter so far doesn’t strongly indicate whether Twitter will be ruined. He has made it much harder for himself to achieve his goals with Twitter, though, through the many mistakes he made during the last year in the course of buying it.
The problem is that he is someone whose impact is not confined strictly to either business or philanthropy. A hits-based approach aimed at low-probability, high-consequence events will sometimes carry a low risk of highly negative consequences. The kind of risk tolerance associated with a hits-based approach doesn’t work when misses could be catastrophic:
His attempts in the last month to intervene in the war in Ukraine and disputes over Taiwan’s sovereignty seem to speak for themselves as at least a yellow flag. That’s enough of a concern even ignoring whatever impacts he has on domestic politics in the United States.
Whether OpenAI as an organization will be a net positive for AI alignment is contested, and effective altruism’s involvement in the organization’s founding is regarded by some as one of the worst mistakes in the history of AI safety/alignment. Elon Musk played a crucial role in OpenAI’s founding, has acknowledged he made mistakes with OpenAI, and has since distanced himself from the organization. In general, the overall impact he has had on AI alignment is ambiguous. Other than world leaders, he remains one of a small number of individuals with the most capability to shape public responses to advancing AI, though it’s not clear whether or how much he could be relied on to have a positive impact on AI safety/alignment in the future.
These are only a couple of examples of the potential impact and risks of his decisions, which are unlike anything any individual in EA has done before. An actor in his position should feel a greater degree of fear and uncertainty, which should at least inspire more caution. My assumption is that he isn’t cautious enough. I asked my initial question in the hope that the causes of his recklessness can be identified, to aid in formulating adequate protocols for responding to potentially catastrophic errors he commits in the future.