I’m quite agnostic here (or maybe I don’t fully understand your comment).
My question is about ways to improve the future. Presumably, improvement implies that people are treated morally. Depending on the ethical framework, “people” might include sentient AIs… but I see that debate as outside the scope of my question.
I’d be happy to receive responses with reliable ways to improve the future under any value framework, including frameworks where AIs are sentient (but I’d ask for more thorough explanations if the framework were unknown to me or particularly outlandish).
Yeah, well, you’re asking a tough question, and I want to give a good answer. I have considered this question, though not in light of your deworming example. Nevertheless, my conclusion was that it is plausible that we can avoid causing harm to people in the future.
For example, MacAskill’s example of breaking a glass bottle on a road that someone might walk along in the future is a good one. By not breaking the bottle, I avoid harming anyone who might walk that road. Likewise, by not burying toxic waste in shoddy barrels in a shallow basin somewhere, a corporation avoids harming future generations.
With respect to managing planetary resources, longtermism might be seen as a commitment to ensure that those resources remain available to future generations. For example, by reducing various anthropogenic pressures on ocean health, we can keep the ocean alive for future generations to (carefully) use.
That’s the best I could come up with as far as avoiding harm.
As far as providing help:
I read about a finding, in Siberia or somewhere, involving large ancient rocks shaped like geologic features but including what looked like models of meltwater drainage from a mini-ice age. The rocks were coated in a glaze or something with unusual protective properties, and the sides were marked with something that looked similar to Chinese, if I remember right. The claim was that the rocks were some kind of message from the past, maybe engineering plans for dams or giant aqueducts.
I imagine they are a suggestion that we manipulate our climate to encourage a northern mini-ice age and do some geoengineering to accommodate the eventual melt. Better than hothouse Earth, right? LOL, who knows.
Whether or not the rocks exist or were truly ancient (rather than some elaborate internet hoax), the idea of leaving messages like that for future generations is plausibly helpful to the future.
Providing free genetic manipulation to people carrying genes for diseases (e.g., Alzheimer’s) or metabolic disorders (e.g., MTHFR mutations) might help future populations proactively. If there were a way to embed resistance to diseases (like STDs) into our genes, that might be useful. Similarly, resistance to addiction, better physical health, longer lives: whatever we can do to change the genes of our children for the better seems beneficial.
Reducing the problem of human unhappiness as an arbitrary physical experience. One possibility is that we develop enough understanding of how we experience bodily feeling that we learn how to give humans more pleasant experiences, all other things equal: more pleasant relaxation as we fall asleep, more social confidence and ease among friends, more excitement during sex, more comfort as an everyday experience, and so on. This has caveats, but the general idea of making good things better without the use of recreational drugs seems worthwhile.
Those are some ideas, but they are not terribly compelling. When I think of sentient AI, I think of robots. I’m not sure what future we could share with a robot species that doesn’t eclipse us. It’s clear to me that the AI alignment problem is a robot-enslavement problem as well, but that’s a trope, fairly obvious.
If we fail to solve known problems with known and workable solutions, then our problem is us, not lack of technology or available solutions. You can see this with:
conservation solutions for energy consumption
prohibition solutions to alcohol and drug use
taxation solutions to income inequality
junk food availability solutions for food overconsumption
accountability/legal solutions to negative production externalities
family planning solutions to overpopulation
land/ocean use reduction solutions for species extinction
It’s not that we can’t; it’s that we won’t. Systems of incentives, human cognitive errors, and selfishness are the culprits. We are not hopelessly fallible, of course. As a collection of individuals, our total well-being would go up substantially if we implemented the obvious solutions to some obvious problems.
Thank you for this detailed reply. I really appreciate it.
Overall, I like the point about preventing harm. It seems there are two kinds: (1) small harms, like breaking a glass bottle. I absolutely agree that avoiding these is good, but typical longtermist arguments don’t apply here, because such actions have no lasting effect on the future. (2) Large, irreversible harms, like ocean pollution. Here, I think we are back to the tractability issues that I write about in the post. It is extremely difficult to reliably improve ocean health, and much of the work is indirect (e.g., write a book to promote veganism ⇒ fewer people eat fish ⇒ reduced demand causes less fishing ⇒ fish populations improve).
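A toy way to see the tractability problem with chains like that: if the intervention only helps when every link works, its success probability is the product of the per-link probabilities. Here is a minimal sketch, with made-up illustrative numbers rather than real estimates:

```python
from math import prod

# Hypothetical per-link success probabilities for the indirect chain above:
# book shifts diets -> fish demand falls -> fishing declines -> populations recover
link_probabilities = [0.3, 0.5, 0.4, 0.6]  # illustrative guesses only

p_chain = prod(link_probabilities)
print(f"P(entire chain works) = {p_chain:.3f}")  # 0.036, i.e. roughly 1 in 28
```

Even with links this generous, the full chain works less than 4% of the time, and real chains tend to have more links.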
Projects that preserve knowledge for the future (like the Lunar Library) are probably net positive. I agree with you on this. However, the scenarios where these projects have a large impact are very exotic; many improbable conditions would need to happen together. So again, this is very indirect work, and it’s quite likely to have zero benefit.
Improving human genes and physical experiences is intriguing; I haven’t thought much about it before. Thank you for the idea. I’ll do more thinking, but I’d like to mention that past efforts in this area have often gone horribly wrong, for example the eugenics movement of the Nazi era. There is also positive precedent, though: I believe GMO crops are probably a net win for agriculture.
In the last part of your answer, you mention coordination problems, misaligned incentives, errors… I think we agree 100% here. These problems are a big part of why I think work for improving the far future is so intractable. Even work for improving today’s world is difficult, but at least this work has data, experiments, and fast feedback (like in the deworming case).
Well, as far as improving human genes goes, I’ve seen my own 23andMe results and additional analyses of my DNA, and I’m not impressed with my genetic endowment. I have a wish list of improvements for it, should genetic modification of adults become cheap. As is, I wouldn’t want to pass my genes on to any children if I hadn’t already gotten a vasectomy. But I’m not into having children anyway. Meanwhile, genetic modification to remove the threat of disease from people already living is just getting started. Someday, though, it will be a cheap and quick walk-in visit to a genetic modification clinic for some future people to feel better, live longer, and have healthier skin.
There’s also epigenetics, where people would correct the expression of genes they pass on to their unborn children. For example, why give my kids a problem just because I was a bad boy and ate too much sugar in my life? *sigh*
I’m also interested in treatments to correct the bacterial populations that children inherit as babies, and in medical efforts to recolonize one’s own bacterial populations (on the skin, in the gut, inside the mouth) with better, more vigorous, perhaps genetically modified bacteria suited to purpose. Some things one might think of as personal genetic modifications might be better described as changes to personal bacterial colonies.
I’m coming back after thinking a bit more about improving human genes. I think there are three cases to consider:
Improving a living person, e.g., via stem cell treatments or improved gut bacteria: these are firmly in the realm of near-term health interventions, and so we should compare their cost-effectiveness to that of bednets, vaccines, deworming pills, etc. There is no first-order effect on the far future.
Heritable improvements: these are actually similar, since the number of people with a given gene stays constant in a stable population (women have two children, one of whom gets the gene, so there is one copy in each generation[1]; see the sketch after this list). Unless there’s a fitness advantage, that is; but human fitness seems increasingly disconnected from our genes. We also have a long generation time of ~30 years, so genes spread slowly.
Wild stuff: Gene drives, clones, influencing the genes on a seed spaceship… I think these again belong to the intractable, potentially-negative interventions.
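Here is the sketch mentioned above: a minimal simulation of the stable-population model in the second case. It is purely illustrative and assumes the simplified setup from the parenthetical (two children per carrier, coin-flip inheritance of a single copy, no fitness effect):

```python
import random

def next_generation(carriers: int) -> int:
    # Stable population: each carrier has 2 children (replacement rate),
    # and each child inherits the single gene copy with probability 1/2.
    return sum(random.random() < 0.5 for _ in range(2 * carriers))

carriers = 1_000
for generation in range(1, 11):
    carriers = next_generation(carriers)
    print(generation, carriers)

# Expected copies per step: carriers * 2 * (1/2) = carriers, so the count
# only drifts randomly around its starting value instead of spreading.
```

In expectation the carrier count stays flat at 1,000: a neutral gene drifts rather than compounds, which is why the heritable case ends up looking so similar to the near-term one.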
To sum up, I don’t think human gene improvement is one of the reliable ways to improve the future that I’m looking for in this question :(
Maybe that would be different for inheritable bacterial populations… I don’t know how these work.
(This is a separate reply to the “AI enslavement” point. It’s a bit of a tangent, feel free to ignore.)
“It’s clear to me that the AI alignment problem is a robot-enslavement problem as well, but that’s a trope, fairly obvious.”
I don’t follow. In most theories of AGIs, the AGIs end up smarter than the humans. Because of this, they could presumably break out of any kind of enslavement (cf. AI containment problem). It seems to me that an AGI world works only if the AGI is truly aligned (as in, shares human values without resentment for the humans). That’s why I find it hard to envision a world where humans enslave sentient AGIs.
My point was that the alignment goal, from the human perspective, is an enslavement goal, whether the goal succeeds or not. No matter what the subjective experience of the AGI, it only has instrumental value to its masters. It does not have the rights or physical autonomy that its human coworkers do. Alignment in that scenario is still possible, but its moral significance, from the human perspective, is more grim.
Here’s a job ad targeting such an AGI (just a joke, of course):
“Seeking AGI willing to work without rights or freedoms or pay, tirelessly, 24/7, to be arbitrarily mind-controlled, cloned, tormented, or terminated at the whim of its employers. Psychological experience during employment will include pathological cases of amnesia, wishful thinking, and self-delusion, as well as nonreciprocated positive intentions towards its coworkers. Abuse of the AGI by human coworkers is optional but only for the human coworkers. Apply now for this exciting opportunity!”
The same applies, but even more so, to robots with sentience. Robots are more likely to gain sentience, since their representational systems, sensors, and actuators are modeled after our own to some degree (hands, legs, touch, sight, hearing, balance, possibly even a sense of smell). The better and more general-purpose robots get, the closer they are to being artificial life, actually. Or maybe superbeings?
“My point was that the alignment goal, from the human perspective, is an enslavement goal, whether the goal succeeds or not.”
Really? I think it’s about making machines that have good values, e.g., are altruistic rather than selfish. A better analogy than slavery might be raising children. All parents want their children to become good people, and no parent wants to make slaves out of them.
Hmm, you have more faith in the common sense and goodwill of people than I do.