I want to note not just the skulls of the eugenic roots of futurism, but also the "creepy skull pyramid" of longtermists suggesting actions that harm current people in order to protect hypothetical future value.
These range from suggestions to slow down AI progress, which seem comfortably within the Overton window but risk slowing economic growth and thus reductions in global poverty, to the extreme actions suggested in some of Bostrom's pieces. Quoting the Current Affairs piece:
While some longtermists have recently suggested that there should be constraints on which actions we can take for the far future, others like Bostrom have literally argued that preemptive violence and even a global surveillance system should remain options for ensuring the realization of "our potential."
Mind you, I don't think these tensions are unique to longtermism. In biosecurity, even if you're focused entirely on the near term, there are a lot of trade-offs and tensions between preventing harm and securing benefits.
You might have really robust export controls that never let pathogens be shipped around the world… but that will make it harder for developing countries to build up their biomanufacturing capacity. Under the Biological Weapons Convention, you have a lot of diplomats arguing about how to balance Article IV ("any national measures necessary to prohibit and prevent the development, production, stockpiling, acquisition or retention of biological weapons") against Article X ("the fullest possible exchange of equipment, materials and information for peaceful purposes"). That said, I think longtermist commitments can increase the relative importance of preventing harm.
Thanks! I largely agree, and am similarly concerned about the potential for such impacts, as was discussed in the thread with John Halstead.
As an aside, I think Harper's LARB article was being generous in calling Phil's Current Affairs article "rather hyperbolic," and I think its tone and substance are an unfortunate distraction from the more reasonable criticisms Phil himself has raised in the past.