A personal comment (apologies that this is neither feedback nor criticism): I switched from a career plan that was pointing me towards neglected tropical disease genomics and related topics to x-risk/gcr after reading Martin Rees’ Our Final Century (fun fact: I showed up at FHI without any real idea who Nick Bostrom was or why a philosopher was relevant to x-risk research).
6 years later, there’s still a nagging voice in the back of my mind that worries about issues related to what you describe. (Admittedly, the voice worries more that we don’t end up doing the right work—if we do the right work but it’s not the x-risk that emerges [though it was a sensible and plausible bet], or we do the right thing but someone else plays the critical role in averting it due to timing and other factors, I can live with that, even if it technically means a wasted career and a waste of funds). I’m hoping that the portfolio of research activities I’m involved in setting up here in Cambridge is broad enough to give us a good shot, directly or indirectly, of making some difference in the long run. But it’s not totally clear to me I’ll ever know for certain (i.e. a clean demonstration of a catastrophe that was clearly averted because of work I was involved with at CSER/CFI/FHI seems unlikely). I try to placate that voice by donating to global disease charities (e.g. SCI) despite working in x-risk.
So basically, I’m saying I empathise with these feelings. While it perhaps conflicts with some aspects of really dedicated cause prioritisation, I think a donor ecosystem in which some people take the long, high-payoff bets—and don’t mind a higher probability that their funds won’t directly end up saving lives in the long run—while others are more conservative and support more direct and measurable do-the-most-gooding, makes for a good overall EA ‘portfolio’ (and one in which the different constituents help to keep each other both open-minded and focused).
While I can’t comment on whether this is selfish or narcissistic, if the end result is an ecosystem with a level of diversity in the causes it supports, that seems to be good given the level of uncertainty we have to have about the long-run importance of many of these things—provided, of course, we have high confidence that the causes within this diversity remain orders of magnitude more important than other causes we are choosing not to support (i.e. the majority of causes in the world).