EA forum users are making false claims about me. Do not believe everything you read here.
Offline.
I think there are two methods that people use: you could deduce ethical rules from some set of truths, or you could believe whichever theory is most probable given the evidence. I think that intuitions are the only form of evidence possible. Something seeming true is prima facie justification for that ethical truth. We accept intuition in the form of perception, memory knowledge, mathematical knowledge, etc. I don’t find it much of a leap to accept it in the case of moral truths. Torturing an infant seems wrong, and that is evidence that it is wrong. I seem to remember that my name on here is Parrhesia, and that seeming is at least some reason to think my name on here is Parrhesia.
Hello, I’m Parrhesia. I am new to the EA forum. I am very interested in bioethics, particularly the ethics of genetic enhancement. I’ve written several articles defending the practice of preimplantation genetic testing for polygenic disorders (PGT-P). I think this is a very important issue which is under-discussed.
Thank you for commenting, Richard! Good to see you. Yes, that is a good point. I think that a move from “we can say nothing” to this attitude would be a step in the right direction. I do agree that accepting PB doesn’t mean you have to accept the total view.
Thanks for your thoughts, Amber. In this article, I just used the term “eugenics” to mention my other article, “Harmless Eugenics.” In that article, I do not advocate using the term “eugenics” to describe genetic enhancement in ordinary discussion; rather, I seek a way to deal with the accusation of being a eugenicist or advocating for eugenics. I thought it might be an effective move to ask people to find the harm involved and then provide some potential responses. From the article (bolding for emphasis):
...there are very few instances of people using the expression “harmless eugenics.” This is how I believe defenders of genetic enhancement should refer to the practice of embryo selection. When discussing the issue typically, they should use a term such as “genetic enhancement.” However, they will inevitably face the accusation of being a eugenicist. When this occurs, I believe the move will be to retort with something like, “the only type of eugenics I advocate for is harmless eugenics.” I think “it’s still eugenics” is a weak response, and so the temptation will be to try to find harm in the practice of embryo selection. This isn’t easy.
I agree with you that it may be better for optics to not just say “I like eugenics” or something along those lines. But if you do advocate for this voluntary procedure in which a woman is exercising her reproductive autonomy, you will be accused of eugenics. The issue I wanted to address was how to deal with that. I find saying “this is not eugenics” inadequate. So, I think a good move is to make your interlocutor find the harm involved. I address a bunch of possible responses.
It may not be incoherent to be risk averse, but there are instances in which it is not expected-utility maximizing. If you have a 51% chance of doubling the world’s utility, the expected utility is greater if you take the bet. That’s true no matter how many times the bet was previously offered. I don’t quite understand why the CLT (central limit theorem) is relevant.
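The arithmetic behind the claim can be made explicit. A minimal sketch, under my own illustrative assumptions (current world utility normalized to 1, and losing the bet leaving utility 0):

```python
# Expected-utility arithmetic for the 51% world-doubling bet.
# Assumptions (illustrative, not from the comment): current total
# utility is normalized to 1.0; losing the bet leaves utility 0.

p_win = 0.51
u_keep = 1.0              # utility if you decline the bet
u_win, u_lose = 2.0, 0.0  # utility if you win / lose the bet

eu_take = p_win * u_win + (1 - p_win) * u_lose
print(eu_take)            # 1.02, which exceeds u_keep = 1.0

# Prior offers don't change the comparison: for any current
# utility u > 0, we have 0.51 * (2 * u) = 1.02 * u > u,
# so each round the bet maximizes expected utility.
```

The last comment is the point about repetition: since the inequality holds for every positive utility level, it holds at every round, however many times the bet was offered before.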
Yes, that’s probably right. Accelerating scientific and technological advances in genetics would be another important move: most notably, in vitro gametogenesis and larger sample sizes for genome-wide association studies, especially of cognitive ability, as well as advocating for government subsidy of IVF and polygenic testing. Genetic enhancement will be a game changer. We’ll create humans that are significantly healthier, happier, and more intelligent. Widespread cognitive enhancement would largely eliminate many social issues.
The negatives of sperm donation might be large relative to its potential gains; that seems reasonable. I think people interpret sperm donation as self-importance. Subsidy of IVF plus polygenic screening might not come off that way, though it could still be called eugenics. The returns to that would be much larger, though.
I recommend Making Sense of Heritability by Neven Sesardić.
I’ve thought about questions like this to some extent. For my moral philosophy, I think it would be morally better to recreate a once-existing person.
I do not think that personal thoughts would be sufficient to reverse-engineer a person’s brain. Preserving DNA would probably be much better, but still insufficient. Really good brain scans might do the trick. Really, really not sure.
It may just happen that you get reincarnated even without trying. If time is infinite and you have a theory that allows the recreation of people, you would expect to be born again. When I die, I might just wake up in a new body in a new world. (https://philpapers.org/archive/HUEEIE.pdf)
It seems like if simulated people can exist and live blissful lives, and there will be trillions and trillions of them, then I should expect to be living as a simulation. The fact that my life is good but not entirely blissful is perhaps evidence against utilitarianism or against the possibility of simulating minds. This depends on your view on observer selection effects (https://www.lesswrong.com/tag/observation-selection-effect).
Anyway, very interesting thoughts. This stuff is cool but hard to think about.
A clone wouldn’t have the same consciousness, so that’s a bad deal. But for whatever reason, people have a sense of personal identity across time. I am fully willing to make intertemporal trade-offs. It seems more just to make up for past injustices.
Whether you could in theory create a replica of a person with the same consciousness isn’t necessarily clear. If you’re entirely a physicalist and believe in the computational theory of mind, what reason is there not to believe you could recreate a person’s consciousness? Just exactly replicate all their brain processes.
Thanks. I wrote about it here: https://parrhesia.substack.com/p/utilitarianism-casts-doubt-on-the. I don’t really hold to these ideas very strongly. Just something to consider.
A power-seeking malicious AI could use lab workers to create dangerous viruses. It could blackmail the lab workers or threaten them in some way. This is a weak point for AI safety in my view. AI could figure out how to create diseases or dangerous compounds.
Great thoughts
There is a distinction between IQ gaps existing, “intelligence” gaps existing, and intelligence gaps existing that are attributable to genetic differences between populations. This is also distinct from “race X is inferior to race Y.”
Different races have different average scores on IQ tests, which it appears you acknowledge. Intelligence tests are created by assembling a wide range of cognitively demanding test items. Their scores happen to align well with what people generally mean when they say “intelligent,” although perhaps not perfectly.
Believing in the existence of gaps attributable to genetic differences is not a dumb prior. It would be astounding if all people at all times in all places, no matter how you divided them, happened to have the exact same average in this polygenic trait. This is especially so considering that cognitive ability influences behavior with regard to immigration, fertility, assortative mating, etc., which would create deviations from perfect equality.
Believing in differences does not mean that we should stop treating others with dignity and respect. Despite believing the above, I treat others well because my treatment is not contingent on the statistical average member of some group one is a part of. I think people can share my belief and still be dedicated toward doing good in the future.
I would suspect a sizeable portion of EA would agree.
Embryo selection for cognitive ability would have plenty of positive downstream consequences. If in vitro gametogenesis enables selection from large batches, there could be large gains from selection. If smart fraction theory is true, then widespread cognitive genetic enhancement, even among a small portion of the population, may have disproportionately large downstream positive consequences. Not discussing cognitive ability might be detrimental considering the benefits are so large. This is one cause area that I think is drastically underconsidered, due in part to stigma.
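The “gains from larger batches” point can be illustrated with a toy simulation. This is my own sketch, under strong simplifying assumptions (embryo polygenic scores modeled as independent standard-normal draws, and the highest-scoring embryo always implanted); it is not a model of real PGT-P accuracy:

```python
# Toy simulation (illustrative assumptions, not from the comment):
# expected gain, in standard deviations of a polygenic score, from
# implanting the highest-scoring embryo out of a batch of size n.
import random
import statistics

def expected_max_gain(n, trials=20000, rng=random.Random(0)):
    """Monte Carlo estimate of E[max of n standard-normal draws]."""
    return statistics.fmean(
        max(rng.gauss(0, 1) for _ in range(n)) for _ in range(trials)
    )

for n in (2, 10, 100):
    print(n, round(expected_max_gain(n), 2))

# The expected maximum grows roughly like sqrt(2 * ln n), so larger
# batches (e.g. via in vitro gametogenesis) yield diminishing but
# still growing returns to selection.
```

The design choice to use the max of standard normals is just the simplest order-statistics model; real selection gains would be scaled down by the predictor’s imperfect accuracy.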
Wouldn’t an acceptable approach to race science be to demonstrate that races are actually all the same across every trait we care about and the racists are wrong? Why not fight bad science with good science?
In response to the drama over Bostrom’s apology for an old email: the original email has been condemned from all sides.
I do not condemn Bostrom.
And if the content is so offensive as to be upsetting and harmful to the movement, it must also be harmful to continue posting and discussing its contents.
I think he discussed eugenics because he was preemptively addressing potential attacks that he suspected were coming.
“Eugenics” as typically considered is very different from human genetic enhancement of the type in which parents voluntarily select embryos to implant during IVF, as discussed in Bostrom and Shulman’s article “Embryo Selection for Cognitive Enhancement: Curiosity or Game Changer?”
Eugenics of the twentieth century was bad because it harmed people. Who is harmed when a mother voluntarily decides to select an embryo that is genetically predisposed to be healthier, happier or more intelligent? To prevent a mother from having the right to genetically enhance (engage in “eugenics”) would be coercion in reproduction.
Seeing as embryo selection can extend a child’s healthspan and mental well-being by selecting against schizophrenia, depression, etc., I think it is extraordinarily moral to support this practice even if it could correctly be called “eugenics.” I will stand on the side of less premature death and suffering even if it means I can be grouped with other, bad “eugenicists” for rather tenuous reasons.
Would you support discussions and research into ancestral population differences?
The only meta-ethical justification we should care about is our ethical theory being true. We should only care about an ethical theory being aesthetically pleasing, “fit for the modern age”, easily explainable, future-proofed, or having other qualities to the extent that those qualities correlate with truth. I see the future-proofing goal as misguided. To me, it feels as though you may have selected this meta-ethical principle to justify your ethical theory, rather than having this meta-ethical theory first and using it to find an ethical theory which coheres with it.
I could be a Christian and use the meta-ethical justification “I want an ethical theory uncorrupted by 21st century societal norms!” But like the utilitarian, this would seem selected in a biased way to reach my conclusion. I could have a number of variables like aesthetically pleasing, easily communicable, looked upon favorably by future humans and so forth, but the only variable I’m maximizing on is truth.
Your goal is to select an ethical theory that will be looked upon favorably by future humans. You want this because you believe in moral progress. You believe in moral progress because you look down on past humans as less moral than more recent humans. You look down on past humans as less moral because they don’t fit your ethical theory. This is circular; your method for selecting an ethical theory uses an ethical theory to determine it is a good method.
The irony is that these things can be presented as insane and horrible without justification. There is no need to say why lynching and burning humans at picnics is bad. Karnofsky does not even try to apply a utility analysis to dissuading crimes via lynch mobs, or discuss the effectiveness of waterboarding, or the consequences of the female vote. He doesn’t need to, because these things are intuitively immoral. Ironically, it goes without saying because of intuition.
Once again, we can flip the argument. I could take someone from 1400 and tell him that homosexuality is legalized and openly practiced. In some places, teenage boys are encouraged to openly express their homosexuality by wearing flag pins. A great deal of homosexuals actually have sex with many men. Every adult, and unfortunately many minors, has access to a massive video library of sexual acts which elicit feelings of disgust in even the most open-minded. If this man from 1400 saw the future as a bleak and immoral place which we should avoid becoming, how would you convince him he was wrong? Why are your intuitions right and his intuitions wrong? What objective measure are you using? If he formulated a meta-ethical principle that “we should not become like the future,” what would be wrong with that?
My take is that intuitions are imperfect, but they are what we have. I think that the people who hanged homosexuals probably had an intuitive sense that it was immoral, but religious fervor was overwhelming. There were evil and wicked people in the past, but there were also people who saw these things as immoral. I’m sure many saw burning and lynching humans as repugnant. Intuitions are the only tool we have for determining right from wrong. The fact that people were wrong in the past is not a good reason to say that we can’t use intuition whatsoever.
Very intelligent people of a past era used the scientific method, deduction, and inductive inference to reach conclusions that were terribly wrong. These people were often motivated by their ideological desires or influenced by their peers and culture. People thought the earth was at the center of the solar system, and they had elaborate theories. I don’t think Karnofsky is arguing we should throw out intuitions entirely, but for those who don’t believe in intuitions: we can’t throw out intuitions, just as we can’t throw out the scientific method, deduction, and induction, merely because people of a past era were wrong.
How do we know the people of the future won’t be non-systematizing, non-utilitarian, and not care about AI or animals quite as much? I think in order to believe they will, we must believe in moral progress. In order to believe moral progress results in these beliefs, we must believe that our moral theory is actually the correct one.
I just think that you can flip these things around so easily and apply them to things that aren’t utilitarianism and sentientism. I think that Roman Catholicism would be a good example of a future-proofed ethical system. They laid out a system of rules and took it where it leads. Even if it seems unintuitive to modern Catholics to oppose homosexuality, or if in the past it felt okay to commit infanticide or abortion, we should just follow the deep truths of the doctrine. I don’t think we can just say “well, Catholicism is wrong.” I think the Catholic ethical code is wrong, but I think it meets your systematizing heuristic.
Once again, I’ll just flip this and say that ethics should be God-centered, where you say it should be based as much as possible on the needs and wants of others. Why is the God-centered principle false and your principle true? Intuition? How do we know the future will have other-centered ethics?
I’m confused. How are you getting these principles? Why are you not following precisely the system you just argued for?