Why is this relevant to the EA forum?
There are writing issues, and I’m not sure the net value of the post is positive.
But your view seems ungenerous; ideas in paragraphs like these seem relevant:
This isn’t a snide jab at Will MacAskill. He in fact recognized this problem before most others, and has made the wise choice of not being CEO of CEA for a decade now, even though he could have kept the job indefinitely if he’d wanted to.
This is a general problem in EA: many academics have had to repeatedly learn that they have little to no comparative advantage, if not an outright comparative disadvantage, in people and operations management.
Some of the individuals about whom there is the greatest concern that they may end up in a personality cult, information silo, or echo chamber, like Holden, are putting in significant effort to avoid becoming out of touch with reality and to minimize any negative, outsized impact of their own biases.
Yet it’s not apparent whether Musk makes any similar efforts. So what reasons, if any, specific to Musk as a personality cause him to be so inconsistent in the ways effective altruists should care about most?
I understood the heart of the post to be in the first sentence: “what should be of greater importance to effective altruists anyway is how the impacts of all [Musk’s] various decisions are, for lack of better terms, high-variance, bordering on volatile.” While Evan doesn’t provide examples of which decisions he’s talking about, I think his point is a valid one: Musk is someone who is exceptionally powerful, increasingly interested in how he can use his power to shape the world, and seemingly operating without the kinds of epistemic guardrails that EA leaders try to operate with. This seems like an important development, if for no other reason than that Musk’s and EA’s paths seem more likely to collide than diverge as time goes on.
What you said seems valid. Unfortunately, though, it seems low-EV to talk at length about this subject. The new EA communications and senior staff may already be paying attention to these issues, and for a number of reasons that seems best in this situation. If that’s not adequate, it seems reasonable to push them on it or ask them about it.
I’m thinking of asking people like that what they’re doing, but I also intend to request feedback from them and others in EA on how to communicate related ideas better. I asked this question to check whether there are major factors I might be missing, as a prelude to a post laying out my own views. That post would be high-stakes enough that I’d put in the effort to write it well, effort I didn’t put into this question post. I might title it something like “Effective Altruism Should Proactively Help Allied/Aligned Philanthropists Optimize Their Marginal Impact.”
Other than at the Centre for Effective Altruism, who are the new/senior communications staff it’d be good to contact?
Strongly upvoted. You’ve put my main concern better than I knew how to put it myself.
As I write in my answer above, I think high-variance and volatile decisions are kinda just the name of the game when you are trying to make billions of dollars and change industries in a very competitive world.
Agreed that Musk is “operating without the kinds of epistemic guardrails that EA leaders try to operate with”, and that it would be better if Musk were wiser. But it would always be better if people were wiser, stronger versions of themselves! The problem is that people can’t always change their personalities very much, and furthermore it’s not always clear (from the inside) which direction of personality change would be an improvement. The problem of “choosing how epistemically modest I should be” is itself a deep and unsettled question.
(Devil’s advocate perspective: maybe it’s not Musk that’s being too wild and volatile, but EAs who are being too timid and unambitious—trying to please everyone, fly under the radar, stay apolitical, etc! I don’t actually believe this 100%, but maybe 25%: Musk is more volatile than would be ideal, but EA is also more timid than would be ideal. So I don’t think we can easily say exactly how much more epistemically guard-railed Musk should ideally be, even if we in the EA movement had any influence over him, and even if he had the capability to change his personality that much.)
I agree that Musk should have more epistemic guardrails, but also that EA should be more ambitious and less timid, while being more tactful. Trying to always please everyone, stay apolitical, and fly under the radar can constitute extreme risk aversion, which is a risk in itself.
Musk has for years said that one of the major motivations for most of his endeavours is ensuring that civilization is preserved.
From EA convincing Elon Musk to take existential threats from transformative AI seriously almost a decade ago, to his recent endorsement of longtermism and William MacAskill’s What We Owe the Future on Twitter for millions to see, the public will perceive a strong association between him and EA.
He also continues to influence the public response to potential existential threats like unaligned AI and the climate crisis, among others. Even if Musk has more hits than misses, his track record is mixed enough that it’s worth trying to notice any real patterns across his mistakes so that their negative impact can be mitigated. Given Musk’s enduring respect for EA, the community may be better placed than most to inspire him to make better-calibrated decisions about having a positive social impact in the future.
Thanks for the response. I still don’t think the post made clear what its objective was, and I don’t think this forum is really the venue for this kind of discussion.
I meant the initial question literally and was seeking an answer. I listed some general kinds of answers and clarified that I’m looking for less obvious potential factors that may be shaping Musk’s approach. I acknowledge I could have written that better, and that the tone makes it ambiguous whether I was trying to slag him off under the guise of asking a sincere question.