It is, at least, a very interesting use of language where you can build your whole career around existential risk mitigation (SpaceX) and climate change adaptation (Tesla, SolarCity), help to found OpenAI, help to fund the Future of Life Institute, publicly recommend Nick Bostrom’s work—and yet apparently you don’t qualify as a longtermist.
Fair enough, maybe Musk does count.
How about Thiel? He gave the keynote speech at proto-EA-Global back in 2013, and funded MIRI from its early days (not sure when he stopped, but the early support was definitely important). Clearly he’s deeply familiar with the academic AI risk arguments and has backed them up with money. He even had an early affiliation with organized EA! Again, I’m not really sure how he doesn’t qualify as a longtermist.
I think in his case he has since denounced EA and AI safety, no?
Ah, so you only count as a longtermist if you think the principles are important and you also agree with the practical approach of a small, narrow clique of people? That seems like an overly restrictive definition to me.
I was trying to use a definition that matched “are these people our problem on this forum”, since it seemed the most contextually relevant.
For both of these people, you can be associated with lots of EA/longtermist/x-risk-reduction activity and still not identify as a longtermist (i.e. you don’t entirely buy the argument at the start of this post). Lots of the things you’ve listed here look good from several other perspectives, not just longtermism.
I’m pretty sure both of them would endorse the claim at the start of this post, and I would bet tons of money that Elon in particular would. His career looks really bizarre otherwise.