I actually find it quite easy to believe that Musk’s initiatives are worth more than the whole EA movement—though I’m agnostic on the point. Those ideas exist in a very different space from effective altruism, and if you fail to acknowledge deep philosophical differences and outside view reasons for scepticism about overcommitting to one worldview you risk polarising the communities and destroying future value trades between them. For example:
Where EA starts (roughly) from the assumption that you have a small number of people philosophically committed to maximising expected welfare, Musk’s companies start with a vision that a much larger group of people find emotionally inspiring, and that a small subset of them find extremely inspiring. Compare the 35–50-hour work weeks typical of EA orgs’ staff with the 80–90 hours common among Tesla/SpaceX employees: the latter seem far more driven, and I doubt that telling them to go and work on AI policy would a) work or b) inspire them to anywhere near comparable productivity if it did.
Musk’s orgs are driven by a belief that they can one day make a profit from what they do, and that if they can’t, they shouldn’t succeed.
Most EA orgs have no such market mechanism, even in the long term. And EA research has perverse incentives that we rarely seem to recognise: researchers gain prestige for raising ‘interesting questions’ that affect anyone’s behaviour minimally, if at all (eg moral uncertainty, cluelessness, infinite ethics, doomsday arguments), and they’re rewarded with money and job security for failing to answer them, ending every essay with ‘more research needed’.
In particular, they’re incentivised to produce writings that encourage major donors to fund them. One plausible way of doing this, for example, is to foster an ingroup mentality, encouraging the people who take them seriously to think of themselves as custodians of a privileged way of thinking (cf the early rationality movement’s dismissal of outsiders as ‘NPCs’). I don’t know of any meta-level argument that this should lead to a more reliable understanding of the world than, say, the wisdom of crowds.
As Halffull discussed in detail in another comment, Musk’s initiatives are immensely complicated, and a priori reasoning about them might be essentially worthless. We could spend lifetimes considering them and still not have meaningfully greater confidence in their outcomes, and we’d have marked ourselves as irrelevant in the eyes of people driven to work on them. Or we could work with the people who’re motivated by the development of such technologies and encourage what Halffull calls a ‘culture of continuous oversight/thinking about your impact’, which those companies seem to have, at least compared to other for-profits.
Empirically, the EA movement has a history of ignoring or rejecting certain causes as not worthy of consideration, then coming to view them as significant after all. See GWWC’s original climate change research, which basically dismissed the cause, vs Founders Pledge’s more recent research, which takes it seriously as one of their top causes; Open Philanthropy Project’s explicit acknowledgement of their increasing concern with ‘minor’ global catastrophic risks; or just see all the causes OPP have supported with their recent grants (how many EAs would have taken you seriously 10 years ago if you’d thought about donating to US criminal justice reform?). I would say we have a much better track record of unearthing important causes that were being neglected than of providing good reasons to neglect causes.
I really didn’t mean for this post to be saying much about effective altruism, and especially, I wasn’t using it to argue that “effective altruism is better than Elon Musk.”
All that said, as to my own opinions, I think Elon Musk is clearly tremendously important. It’s quite possible, based on rough numbers like market value, that he singlehandedly is still a lot more valuable in expectation than effective altruists in total. His net worth is close to $300 billion, while among all EAs, we’re at maybe $60 billion. So even if he spent his money a quarter as effectively, he could still do more good.
However, that doesn’t mean that he doesn’t have at least some things to learn from the effective altruism community.
On planning, I’m not convinced that Elon Musk is necessarily a master strategist playing some 8-dimensional chess on these topics. He clearly has a lot of talents helping him. I’d expect him to be great at some things, and imagine it will take a while before anyone (including him) could really tell which specific things led to his success.
In my experience with really top-performing people, often they just don’t think all that much about many of these issues. They have a lot of things to think about. What can look like “a genius move of multi-year planning” from the outside often looks to me like “a pretty good guess made quickly”.
No-one’s saying he’s a master strategist. Quite the opposite—his approach is to try stuff out and see what happens. It’s the EA movement that strongly favours reasoning everything out in advance.
What I’m contesting is the claim that he has ‘at least some things to learn from the effective altruism community’, which is far from obvious, and IMO needs a heavy dose of humility. To be clear, I’m not saying no-one in the community should do a shallow (or even deep) dive into his impact—I’m saying that we shouldn’t treat him or his employees like they’re irrational for not having done so to our satisfaction with our methods, as the OP implies.
Firstly, on the specific issue of whether bunkers are a better safeguard against catastrophe: that seems extremely short-termist. Within maybe 30–70 years, if SpaceX’s predictions are even faintly right, a colony on Mars could be self-sustaining, which seems much more resilient than bunkers, and likely to have huge economic benefits for humanity as a whole. Also, if bunkers are so much easier to set up, all anyone has to do is found an inspiring for-profit bunker-development company and set them up! If no-one has seriously done so at scale, that indicates to me that socially/economically they’re a much harder proposition, and that this might outweigh the engineering differences.
Secondly, there’s the question of what the upside of such research is—as I said, it’s far from clear to me that any amount of a priori research will be more valuable than trying stuff and seeing what happens.
Thirdly, I think it’s insulting to suppose these guys haven’t thought about their impact a lot simply because they don’t use QALY-adjacent language. Musk talks thoughtfully about his reasons all the time! If he doesn’t try to quantify the expectation, rather than assuming that’s because he’s never thought to do so, I would assume it’s because he thinks such a priori quantification is very low value (see the previous paragraph), and I would acknowledge that such a view is reasonable. I would assume something similar is true for very many of his employees too, partly because they’re legion compared to EAs, partly because the filtering for their intelligence has much tighter feedback mechanisms than that for EA researchers.
If any EAs doing such research don’t recognise the validity of these sorts of concerns, I can imagine it being useless or even harmful.
It seems like we have some pretty different intuitions here. Thanks for sharing!
I was thinking of many of my claims as representing low bars. To me, “at least some things to learn from a community” isn’t saying all that much. I’m sure he, and us, and many others, have at least some things that would be valuable to learn from many communities.
“Thirdly I think it’s insulting to suppose these guys haven’t thought about their impact a lot simply because they don’t use QALY-adjacent language” → A lot of the people I knew in the field (including the person I mentioned) pretty clearly hadn’t thought about the impact a whole lot. It’s not just that they weren’t using QALYs; it’s that they weren’t really comparing it to similar things. That’s not unusual: most people in most fields don’t seem to be trying hard to optimize their impact globally, in my experience.
I really don’t mean to be insulting to them, I’m just describing my impression. These people have lots of other great qualities.
One thing that would clearly prove me wrong would be some lengthy documents outlining the net benefit, compared to things like bunkers, in the long-term. And, it would be nice if it were clear that lots of SpaceX people paid attention to these documents.
“A lot of the people I knew, in the field (including the person I mentioned), pretty clearly hadn’t thought about the impact a whole lot. It’s not just that they weren’t using QALYs, it’s just that they weren’t really comparing it to similar things.”
Re this particular example, after you had the conversation did the person agree with you that they clearly hadn’t thought about it? If not, can you account for their disagreement other than claiming that they were basically irrational?
I seem to have quite strongly differing intuitions from most people active in central EA roles, and quite similar ones (at least about the limitations of EA-style research) to many people I’ve spoken to who believe the motte of EA but are sceptical of the bailey (ie of actual EA orgs and methodology). I worry that EA has very strong echo chamber effects, reflected in eg the OP, in Linch’s comment below and Hauke’s about Bill Gates, in various other comments in this thread suggesting ‘almost no-one’ thinks about these questions with clarity, and in countless other such casual dismissals I’ve heard from EAs of smart people taking positions not couched in sufficiently EA terms.
FWIW I also don’t think claiming someone has lots of other great qualities is inconsistent with being insulting to them.
I don’t disagree that it’s plausible we can bring something. I just think that assuming we can do so is extremely arrogant (not by you in particular, but as a generalised attitude among EAs). We need to respect the views of intelligent people who think this stuff is important, even if they can’t or don’t explain why in the terms we would typically use. For PR reasons alone, this stuff is important—I can only point to anecdotes, but so many intelligent people I’ve spoken to find EAs collectively insufferable because of this sort of attitude, and so end up not engaging with ideas that might otherwise have appealed to them. Maybe someone could run a Mechanical Turk study on how such messaging affects reception of theoretically unrelated EA ideas.
Also, we still, as a community, seem confused over what ‘neglectedness’ does in the ITN framework: whether it’s a heuristic or a multiplier, and if the latter, how to separate it from tractability and how to account for the size of the problem in question (bigger, less absolutely neglected problems might still benefit more from marginal resources than smaller problems on which we’ve made more progress with fewer resources, yet I haven’t seen a definition of the framework that accounts for this). Yet anecdotally I still hear ‘it’s not very neglected’ used to casually dismiss concerns on everything from climate change through nuclear war to… well, interplanetary colonisation. Until we get a more consistent and coherent framework, if I as a longtime EA supporter am sceptical of one of the supposed core components of EA philosophy, I don’t see how I’m supposed to convince mission-driven not-very-utilitarians to listen to its analyses.
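For concreteness, the one formal definition I’m aware of (80,000 Hours’ version, as I understand it) does treat neglectedness as a literal multiplier, by choosing the units so the intermediate terms cancel:

```latex
% 80,000 Hours-style ITN decomposition (my understanding; the product
% telescopes, so each factor is a genuine multiplier rather than a heuristic):
\frac{\text{good done}}{\text{extra person}}
  = \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
  \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra person}}}_{\text{neglectedness}}
```

On this formulation, problem size enters through the importance term, and neglectedness only tracks how far one extra person shifts total resources; the confusion I describe above is about whether tractability estimates quietly absorb both of the other factors in practice.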