Your first post on the Forum was, in my mind, rather dismissive of objections to the infamous Bostrom listserv, and suggested we instead criticize whoever brought this information to light (even though there is zero reason to believe they are a member of this community or an adjacent community). That’s not a good way to start signaling good faith.
You may disagree with my argument, but it was made in good faith. I’m not trolling or lying in that article. The reason I wrote it was that I felt I could contribute a perspective which the majority of EA was overlooking. The same goes for the case for genetic enhancement: it is not discussed very much, so I felt I could make a unique contribution. In other areas, like animal welfare, I did not feel I had a particularly important insight. If someone’s first post was about veganism and later posts were also about veganism, that would not be a good reason to think the person is arguing in bad faith.
I think the reason you might think what I am doing is bad faith is that you attribute nefarious intentions to people interested in genetic enhancement. Perhaps the base rate of bad faith is higher among people talking about “eugenics,” but it is much simpler to just consider the content of the message they are sending at the moment. Besides, if someone writes a 10K-word, well-argued (in my opinion) article for topic X that attempts to be grounded in reality and is extensively cited, it seems weird to call it “bad faith” when it is neither trollish nor highly deceptive.
Much of your prior engagement in comments on the Forum has related to race, genetics, eugenics, and intelligence, although it has started to broaden as of late. That’s not a good way to show that you are not seeking to “inject a discussion about race, genetics, eugenics, and intelligence in EA circles” either.
When I see that EAs are making wrong statements about something I know about, I feel I am in a position to correct them. These are mostly responses to EAs who are already discussing these topics. Moreover, if a discussion of intelligence, genes, genetic enhancement (or even race) could improve human welfare, then it is worth having. My work is not merely an effort to “inject” these topics needlessly into EA.
Single-focus posters are not going to get the same presumption of good faith on topics like this that a more balanced poster might. Maybe you are a balanced EA in other areas, but I can only go by what you have posted here, in your substack, and (presumably) elsewhere as Ives Parr. I understand why you might prefer a pseudonym, but some of us have a consistent pseudonym under which we post on a variety of topics. So I’m not going to count the pseudonym against you, but I’m going to base my starting point on “Ives Parr” as known to me without assuming more well-rounded contributions elsewhere.
If I was a single-issue poster on veganism, would you assume I am arguing in bad faith? If you want to have a prior of suspicion based on my being somewhat single-issue, I suppose you can. But you should form a posterior belief based on the actual content of the posts. I’ll further add here that I have been thinking about EA generally and have considered myself an EA for a long time:
“Should Effective Altruists make Risky Investments?” (Dec 9, 2021)
“What We Owe The Future” book review (Sep 28, 2022)
Defending EA against a critique by Bryan Caplan (Aug 4, 2023)
I could provide further evidence of my participation in the EA community, but you have to understand my hesitation, as people are suggesting I’m basically a Nazi and poring over my past work, something I consider immoral and malicious in this context.
But ultimately, I don’t think this matters too much, because you can just read the content directly. Arguing like this is kind of silly. It involves a type of reputation destruction based on past comments that is intellectually unvirtuous. And once we have the content of the post, the history no longer seems relevant. We should update mostly on whether the post itself appears to be written in good faith.
I must commend you for actually engaging with the content. Thank you.
A Surprising Conclusion
As far as the environmental/iodine issues, let me set forth a metaphor to explain one problem in a less ideologically charged context. Let’s suppose I was writing an article on improving life expectancy in developing countries. Someone with a passing knowledge of public health in developing countries, and of the principles of EA, might expect that the proposed solution would be bednets or other anti-infectious-disease technologies. Some might assign a decent probability to better funding for primary care, a pitch for anti-alcohol campaigns, or sodium reduction work. Almost no one would have standing up quaternary-care cancer facilities in developing countries, using yet-to-be-developed drugs, on their radar. If someone wrote a long post suggesting that was the way, I would suspect they might have recently lost a loved one to cancer or might have some other external reason for reaching that conclusion.
I reject this analogy and substitute my own, which I think is more fitting. If someone was discussing alleviating the impact of malaria with bed nets, and someone came along with a special interest in gene drives and suggested that they could have a huge impact, perhaps a much larger impact than bed nets, then this would seem a reasonable point of discussion that is not necessarily motivated by some ulterior motive. I used this analogy in the article as well. Whether or not gene drives are better is an empirical question. If someone made an extended argument for why they think it could be high impact, then it is questionable to think it’s bad faith, especially if there are no trollish, rude, or highly deceptive comments.
I think that’s a fair analogy of your recommendation here—you’re proposing technology that doesn’t exist and wouldn’t be affordable to the majority of people in the most developed countries in the world if it did. The fact that your chosen conclusion is an at least somewhat speculative, very expensive technology should have struck you as pretty anomalous and thrown up some caution flags. Yours could be the first EA cause area that would justify massive per-person individual expenditures of this sort, but the base rate of that being true seems rather low. And in light of your prior comments, it is a bit suspicious that your chosen intervention is one that is rather adjacent to the confluence of “race, genetics, eugenics, and intelligence in EA circles.”
Some of the technology currently exists. We can perform polygenic embryo screening, and gene editing is in its early stages but not yet safe. We have also achieved IVG in mice, and there are start-ups working on it currently. That breakthrough would bring very large returns in terms of health, intelligence, and happiness. Metaculus estimated that IVG was ~10 years away.
My argument is not for “massive per-person individual expenditures of this sort.” This is wrong. I gave 8 policy proposals, and giving a bunch of money to people to use this technology was not on the list. I was mostly advocating accelerating the research and allowing voluntary adoption. If EA accelerates the breakthroughs, people will use the technology voluntarily.
A Really Concerning Miss in Your Post
Turning to your post itself, the coverage of possible environmental interventions in developing countries in the text (in the latter portions of Part III) strikes me as rather skimpy. You acknowledge that environmental and nutritional factors could play a role, but despite spending 100+ hours on the post, and despite food fortification being at least a second-tier candidate intervention in EA global health for a long time, you don’t seem to have caught the massive effect of cheap iodine supplementation in the original article. None of the citations for the four paragraphs after “The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear” seem to be about environmental or nutritional effects or interventions in developing countries.
While I can’t tell if you didn’t know about iodine or merely chose not to cite any study about nutritional or environmental intervention in developing countries, either way Bob’s reference to a 13-point drop in IQ from iodine deficiency should have significantly updated you that your original analysis had either overlooked or seriously undersold the possibility for these interventions. Indeed, much relevant information was in a Wikipedia article you linked on the Flynn effect, which notes possible explanations such as stimulating environment, nutrition, infectious diseases, and removal of lead from gasoline [also a moderately well-known EA initiative]. Given that you are someone who has obviously studied intelligence a great deal, I am pretty confident you would know all of this, so it seems implausible that this was a miss in research.
On a single Google search (“effects of malnutrition in children on iq”), one of the top articles was a study in JAMA Pediatrics describing a 15.3-point drop in IQ from malnutrition that was stable over an eight-year period. This was in Mauritius in the 1970s, which had much lower GDP per capita at the time than now but I believe was still better off in adjusted terms than many places are in 2024. The percentage deemed malnourished was about 22%, so this was not a study about statistically extreme malnutrition. And none of the four measures were described as reflecting iodine deficiency. That was the first result I pulled, as it was in a JAMA journal. A Wikipedia article on “Impact of Health on Intelligence” was also on the front page, which would have clued you into a variety of relevant findings.
We should be giving people iodine where they are deficient and preventing starvation. Bob raised this objection and I addressed it in the comments. It is worth mentioning. I did say in the original article that environmental conditions can depress IQ, especially at the extremes. The part about heritability that I mentioned undermines the impactfulness to some extent, because the environmentality of IQ is low and the sources of variation are not particularly clear. But heritability is not well estimated between developing and developed nations, so I expressed some hesitancy about reaching a strong conclusion there.
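To make the heritability point concrete, here is a simplified sketch, assuming a purely additive within-population variance decomposition (the specific numbers are illustrative assumptions, not estimates from my article):

$$
\sigma^2_{\text{IQ}} = \sigma^2_G + \sigma^2_E, \qquad h^2 = \frac{\sigma^2_G}{\sigma^2_{\text{IQ}}}, \qquad \sigma_E = \sqrt{1 - h^2}\,\sigma_{\text{IQ}}.
$$

If, within a wealthy population, $h^2 \approx 0.7$ and $\sigma_{\text{IQ}} = 15$, then $\sigma_E \approx \sqrt{0.3}\times 15 \approx 8$ points, which roughly bounds what interventions operating within the already-observed range of environments can achieve there. The caveat, as noted, is that this bound says nothing about environments outside that range (severe malnutrition, iodine deficiency, parasite load), which is exactly why I hedged about developing nations.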
There is already a lot of work on preventing starvation and malnutrition, so the aim was to find something neglected, tractable, and important. The benefit of accelerating enhancement is that people can voluntarily use it without the need for spending money in each case. Moreover, the gains from enhancement would be very, very large for certain forms of the technology, and we can embrace both types of intervention where environmental interventions are effective. Here is what I said in the original article:
The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear. If interventions are largely ineffective, this is evidence that they may be ineffective in the developing world. However, there is a plausible case to be made for certain threshold effects or influences unique to the conditions of poor nations. In some countries, children suffer from extreme levels of malnutrition and exposure to parasites. Extremely few children in the developed world face such obstacles. An intervention that prevents extreme malnutrition might appear ineffective in the United States but shows gains in Yemen or South Sudan. When nutrient deprivation is so great that it disrupts proper brain formation, it is likely to depress not only IQ scores but also cognitive ability. Similarly, when groups are wholly unexposed to logical reasoning, they are likely to score lower on IQ tests. Such issues are not wholly uncommon, and interventions would play an important role in such instances. Furthermore, for populations unexposed to academic tests, IQ scores will likely underestimate ability.
The extent to which we can expect environmental interventions to work as a means of improving NIQ largely depends on the extent to which we think environmental differences are driving international differences. If we suspect that NIQ differences are driven entirely by environmental differences, then improvements in nutrition and education may equalize scores. If genetic differences are playing a causal role, equalizing environments will not equalize NIQ scores. A reasonable prior assumption is non-trivial levels of influence from both. Various lines of evidence point to the prospect of zero genetic influence globally being exceptionally unlikely. For example, interventions are largely ineffective in the USA, with an average IQ of approximately 97-99, and the US still lags behind Singapore with an NIQ of approximately 106-107 (Becker, 2019). While some dismiss the genetic influence of genes on NIQ as “not interesting,” it is extremely relevant to the near future of humanity, especially considering that countries with lower NIQ typically have higher fertility (Francis, 2022).
Even if one embraces the 100% environmental explanation for national differences in IQ, one can still consider the possibility of environmental interventions being less cost-effective or more limited in magnitude relative to what could be called “genetic interventions.” Furthermore, since there are little to no means of permanently boosting IQ in more developed countries, there may be stagnation once a country reaches beyond a certain threshold of average nutrition and education.
Looking toward genetic interventions may be more fruitful, even if we accept that environmental interventions are important to some extent. IQ gains without diminishing marginal returns are implausible, given that adults in academic institutions or pursuing academic interests do not continue to add IQ points cumulatively until they achieve superintelligence. Some forms of genetic enhancement would not suffer from this problem of diminishing returns, and could in fact create superintelligent humans. Also importantly, if a genetic intervention could be administered at birth and reduce the need for additional years of schooling, it could save a tremendous amount of a student’s time.
This is a really bad miss in my mind, and is really hard for me to square with the post being written by a curious investigator who is following the data and arguments where they lead toward the stated goal of effectively ending poverty through improving intelligence. If readily available data suggest a significant increase in intelligence from extremely cheap to fairly cheap, well-studied environmental interventions like vitamin/mineral supplementation, lead exposure prevention, etc., then I would expect an author on this Forum pitching a much more speculative, controversial, and expensive proposal to openly acknowledge and cite that. As far as I can see, there is not even a nod toward achieving the low-hanging environmental/nutritional fruit in your conclusion and recommendations. This certainly gives the impression that you were pre-committed to “genetic enhancement” rather than engaged in a search for effective, achievable solutions to increase intelligence in developing countries and end poverty. Although I do not expect posts to be perfectly balanced, I don’t think the dismissal of environmental interventions here supports a conclusion of good-faith participation in the Forum.
I’ve addressed this above, and in the original article I compared environmental with genetic interventions, providing some evidence for thinking that the potential gains from the former are limited in a way that genetic enhancement is not. Many of the causes that depress IQ are widely understood as problems and are already addressed by global health initiatives.
I can understand if someone disagrees, but does this really seem like a bad faith argument? It seems like this accusation is considered more intuitively plausible because what I am arguing elicits feelings of moral disgust.
Conclusion
That is not intended as an exhaustive list of reasons I find your posts to be concerning and below the standards I would expect for good-faith participation in the Forum. The heavy reliance on certain sources and authors described in the original post above is not exactly a plus, for instance. The sheer practical implausibility of offering widespread, very expensive medical services in impoverished countries—both from a financial and a cultural standpoint—makes the post come across as a thought experiment (again: one that focuses on certain topics that certain groups would like to discuss for various reasons despite tenuous connections to EA).
The technology will be adopted voluntarily, without EA funds, once it exists. I am not advocating for spending on individuals.
EAs seem generally fine with speculation and “thought experiments” if they have a plausible aim of improving human flourishing, which my argument does. That should be the central focus of critiques.
Also, this is the EA Forum, not a criminal trial. We tend to think probabilistically here, which is why I said things like it being “difficult to believe that any suggestion . . . is both informed and offered in good faith” (emphasis added). The flipside of that is that posters are not entitled to a trial prior to Forum users choosing to dismiss their posts as not reflecting good-faith participation in the Forum, nor are they entitled to have their entire 42-minute article read before people downvote those posts (cf. your concern about an average read time of five minutes).
I understand it’s not a criminal trial. But expecting someone to read an article before downvoting it or attacking strawman arguments seems quite reasonable as a standard for the Forum. The EA Forum post we are commenting on suggests that I am supporting Nazi ideology (which I am not!). How can someone recognize this without actually reading?
This incentivizes these sorts of critiques and creates a culture of fear around discussing important but taboo ideas. If an idea were to arise that was actually important, it might end up neglected if people don’t give it a fair chance.
Thank you for grappling with the actual content of the article. I’ll state that your characterization of me as arguing in bad faith feels quite unfair. It seems strange that I would go through all this effort to respond if I were just trolling or trying to mess with EA Forum users.