EA forum users are making false claims about me. Do not believe everything you read here.
Offline.
Your first post on the Forum was, in my mind, rather dismissive of objections to the infamous Bostrom listserv, and suggested we instead criticize whoever brought this information to light (even though there is zero reason to believe they are a member of this community or an adjacent community). That’s not a good way to start signaling good faith.
You may disagree with my argument, but it was made in good faith. I'm not trolling or lying in that article. I wrote it because I felt I could contribute a perspective that the majority of EA was overlooking. The same goes for the case for genetic enhancement: it is not discussed very much, so I felt I could make a unique contribution. In other areas, like animal welfare, I did not feel I had a particularly important insight. If someone's first post was about veganism and later posts were also about veganism, that would not be a good reason to think the person is arguing in bad faith.
I think the reason you might suspect bad faith is that you attribute nefarious intentions to people interested in genetic enhancement. Perhaps the base rate of bad faith is higher among people talking about "eugenics," but it is much simpler to just consider the content of the message being sent at the moment. Besides, if someone writes a 10K-word, well-argued (in my opinion) article for topic X that attempts to be grounded in reality and is extensively cited, it seems weird to call it "bad faith" when it is neither trollish nor highly deceptive.
Much of your prior engagement in comments on the Forum has related to race, genetics, eugenics, and intelligence, although it has started to broaden as of late. That’s not a good way to show that you are not seeking to “inject a discussion about race, genetics, eugenics, and intelligence in EA circles” either.
When I see EAs making wrong statements about something I know about, I feel I am in a position to correct them. These are mostly responses to EAs who are already discussing these topics. Moreover, if a discussion of intelligence, genes, genetic enhancement (or even race) could improve human welfare, then it is worth having. My work is not merely an effort to "inject" these topics needlessly into EA.
Single-focus posters are not going to get the same presumption of good faith on topics like this that a more balanced poster might. Maybe you are a balanced EA in other areas, but I can only go by what you have posted here, in your substack, and (presumably) elsewhere as Ives Parr. I understand why you might prefer a pseudonym, but some of us have a consistent pseudonym under which we post on a variety of topics. So I’m not going to count the pseudonym against you, but I’m going to base my starting point on “Ives Parr” as known to me without assuming more well-rounded contributions elsewhere.
If I were a single-issue poster on veganism, would you assume I was acting in bad faith? If you want a prior of suspicion based on my being somewhat single-issue, I suppose you can have one. But you should form a posterior belief based on the actual content of the posts. I'll further add here that I have been thinking about EA generally and have considered myself an EA for a long time:
“Should Effective Altruists make Risky Investments?” (Dec 9, 2021)
“What We Owe The Future” book review (Sep 28, 2022)
Defending EA against a critique by Bryan Caplan (Aug 4, 2023)
I could offer further evidence of my participation in the EA community, but you have to understand my hesitation when people are suggesting I'm basically a Nazi and parsing over my past work—something I consider immoral and malicious in this context.
But ultimately, I don't think this matters too much, because you can just read the content. Arguing like this is kind of silly; it involves a type of reputation destruction based on past comments that is intellectually unvirtuous. Once we have the content of the post, past comments no longer seem relevant. We should simply update heavily on whether the post itself seems to be in good faith.
I must commend you for actually engaging with the content. Thank you.
A Surprising Conclusion
As far as the environmental/iodine issues, let me set out a metaphor to explain one problem in a less ideologically charged context. Let's suppose I was writing an article on improving life expectancy in developing countries. Someone with a passing knowledge of public health in developing countries, and of the principles of EA, might expect that the proposed solution would be bednets or other anti-infectious-disease technologies. Some might assign a decent probability to better funding for primary care, a pitch for anti-alcohol campaigns, or sodium-reduction work. Almost no one would have standing up quaternary-care cancer facilities in developing countries, using yet-to-be-developed drugs, on their radar. If someone wrote a long post suggesting that was the way, I would suspect they might have recently lost a loved one to cancer or might have some other external reason for reaching that conclusion.
I reject this analogy and substitute my own, which I think is more fitting. If someone was discussing alleviating the impact of malaria with bed nets, and someone came along with a special interest in gene drives and suggested it could have a huge impact—perhaps a much larger impact than bed nets—then that would seem a reasonable point of discussion, not necessarily motivated by some ulterior motive. I used this analogy in the article as well. Whether or not gene drives are better is an empirical question. If someone made an extended argument for why they think it could be high impact, it is questionable to call it bad faith—especially if there are no trollish, rude, or highly deceptive comments.
I think that’s a fair analogy of your recommendation here—you’re proposing technology that doesn’t exist and wouldn’t be affordable to the majority of people in the most developed countries in the world if it did. The fact that your chosen conclusion is an at least somewhat speculative, very expensive technology should have struck you as pretty anomalous and thrown up some caution flags. Yours could be the first EA cause area that would justify massive per-person individual expenditures of this sort, but the base rate of that being true seems rather low. And in light of your prior comments, it is a bit suspicious that your chosen intervention is one that is rather adjacent to the confluence of “race, genetics, eugenics, and intelligence in EA circles.”
Some of the technology already exists. We can perform polygenic embryo screening, and gene editing is in its early stages but not yet safe. We have also achieved IVG in mice, and there are startups currently working on it. That breakthrough would bring very large returns in terms of health, intelligence, and happiness. Metaculus estimated that IVG was ~10 years away.
My argument is not for "massive per-person individual expenditures of this sort." This is wrong. I gave 8 policy proposals, and giving people a bunch of money to use this technology was not on the list. I was mostly advocating for accelerating the research and allowing voluntary adoption. If EA accelerates the breakthroughs, people will use the technology voluntarily.
A Really Concerning Miss in Your Post
Turning to your post itself, the coverage of possible environmental interventions in developing countries in the text (in the latter portions of Part III) strikes me as rather skimpy. You acknowledge that environmental and nutritional factors could play a role, but despite spending 100+ hours on the post, and despite food fortification being at least a second-tier candidate intervention in EA global health for a long time, you don’t seem to have caught the massive effect of cheap iodine supplementation in the original article. None of the citations for the four paragraphs after “The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear” seem to be about environmental or nutritional effects or interventions in developing countries.
While I can’t tell if you didn’t know about iodine or merely chose not to cite any study about nutritional or environmental intervention in developing countries, either way Bob’s reference to a 13-point drop in IQ from iodine deficiency should have significantly updated you that your original analysis had either overlooked or seriously undersold the possibility for these interventions. Indeed, much relevant information was in a Wikipedia article you linked on the Flynn effect, which notes possible explanations such as stimulating environment, nutrition, infectious diseases, and removal of lead from gasoline [also a moderately well-known EA initiative]. Given that you are someone who has obviously studied intelligence a great deal, I am pretty confident you would know all of this, so it seems implausible that this was a miss in research.
On a single Google search ("effects of malnutrition in children on iq"), one of the top articles was a study in JAMA Pediatrics describing a 15.3-point drop in IQ from malnutrition that was stable over an eight-year period. This was in Mauritius in the 1970s, which had much lower GDP per capita then than now but, I believe, was still better off in adjusted terms than many places are in 2024. The percentage deemed malnourished was about 22%, so this was not a study about statistically extreme malnutrition. And none of the four measures were described as reflecting iodine deficiency. That was the first result I pulled, as it was in a JAMA journal. A Wikipedia article on "Impact of Health on Intelligence" was also on the front page, which would have clued you into a variety of relevant findings.
We should be giving people iodine where they are deficient and preventing starvation. Bob raised this objection and I addressed it in the comments. It is worth mentioning. I did say in the original article that environmental conditions can depress IQ, especially at the extremes. The point about heritability that I mentioned undermines the impactfulness to some extent, because the environmentality of IQ is low and the sources of variation are not particularly clear. But heritability is not well estimated between developing and developed nations, so I expressed some hesitancy about reaching a strong conclusion there.
There is already a lot of work on preventing starvation and malnutrition, so the aim was to find something neglected, tractable, and important. The benefit of accelerating enhancement is that people can voluntarily use it without the need for spending money in each case. Moreover, the gains from enhancement would be very large for certain forms of technology, and we can embrace both types of intervention where the environmental interventions are effective. Here is what I said in the original article:
The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear. If interventions are largely ineffective, this is evidence that they may be ineffective in the developing world. However, there is a plausible case to be made for certain threshold effects or influences unique to the conditions of poor nations. In some countries, children suffer from extreme levels of malnutrition and exposure to parasites. Extremely few children in the developed world face such obstacles. An intervention that prevents extreme malnutrition might appear ineffective in the United States but shows gains in Yemen or South Sudan. When nutrient deprivation is so great that it disrupts proper brain formation, it is likely to depress not only IQ scores but also cognitive ability. Similarly, when groups are wholly unexposed to logical reasoning, they are likely to score lower on IQ tests. Such issues are not wholly uncommon, and interventions would play an important role in such instances. Furthermore, for populations unexposed to academic tests, IQ scores will likely underestimate ability.
The extent to which we can expect environmental interventions to work as a means of improving NIQ largely depends on the extent to which we think environmental differences are driving international differences. If we suspect that NIQ differences are driven entirely by environmental differences, then improvements in nutrition and education may equalize scores. If genetic differences are playing a causal role, equalizing environments will not equalize NIQ scores. A reasonable prior assumption is non-trivial levels of influence from both. Various lines of evidence point to the prospect of zero genetic influence globally being exceptionally unlikely. For example, interventions are largely ineffective in the USA, with an average IQ of approximately 97-99, and the US still lags behind Singapore with an NIQ of approximately 106-107 (Becker, 2019). While some dismiss the genetic influence of genes on NIQ as “not interesting,” it is extremely relevant to the near future of humanity, especially considering that countries with lower NIQ typically have higher fertility (Francis, 2022).
Even if one embraces the 100% environmental explanation for national differences in IQ, one can still consider the possibility of environmental interventions being less cost-effective or more limited in magnitude relative to what could be called “genetic interventions.” Furthermore, since there are little to no means of permanently boosting IQ in more developed countries, there may be stagnation once a country reaches beyond a certain threshold of average nutrition and education.
Looking toward genetic interventions may be more fruitful, even if we accept that environmental interventions are important to some extent. IQ gains without diminishing marginal returns are implausible, given that adults in academic institutions or pursuing academic interests do not continue to add IQ points cumulatively until they achieve superintelligence. Some forms of genetic enhancement would not suffer from this problem of diminishing returns, and could in fact create superintelligent humans. Also importantly, if a genetic intervention could be administered at birth and reduce the need for additional years of schooling, it could save a tremendous amount of a student’s time.
This is a really bad miss in my mind, and it is really hard for me to square with the post being written by a curious investigator who is following the data and arguments where they lead toward the stated goal of effectively ending poverty through improving intelligence. If readily available data suggest a significant increase in intelligence from extremely to fairly cheap, well-studied environmental interventions like vitamin/mineral supplementation, lead exposure prevention, etc., then I would expect an author on this Forum pitching a much more speculative, controversial, and expensive proposal to openly acknowledge and cite that. As far as I can see, there is not even a nod toward achieving the low-hanging environmental/nutritional fruit in your conclusion and recommendations. This certainly gives the impression that you were pre-committed to "genetic enhancement" rather than a search for effective, achievable solutions to increase intelligence in developing countries and end poverty. Although I do not expect posts to be perfectly balanced, I don't think the dismissal of environmental interventions here supports a conclusion of good-faith participation in the Forum.
I’ve addressed this above and in the original article I compared environmental with genetic, providing some evidence to think that the potential gains are limited in a way that genetic enhancement is not. Much of the effort to prevent the causes that depress IQ are widely understood as problems and addressed by global health initiatives.
I can understand if someone disagrees, but does this really seem like a bad faith argument? It seems like this accusation is considered more intuitively plausible because what I am arguing elicits feelings of moral disgust.
Conclusion
That is not intended as an exhaustive list of reasons I find your posts to be concerning and below the standards I would expect for good-faith participation in the Forum. The heavy reliance on certain sources and authors described in the original post above is not exactly a plus, for instance. The sheer practical implausibility of offering widespread, very expensive medical services in impoverished countries—both from a financial and a cultural standpoint—makes the post come across as a thought experiment (again: one that focuses on certain topics that certain groups would like to discuss for various reasons despite tenuous connections to EA).
The technology will be adopted voluntarily without EA funds if the tech is there. I am not advocating for spending on individuals.
EAs seem generally fine with speculation and "thought experiments" if they have a plausible aim of improving human flourishing, which my argument does. That should be the central focus of critiques.
Also, this is the EA Forum, not a criminal trial. We tend to think probabilistically here, which is why I said things like it being “difficult to believe that any suggestion . . . is both informed and offered in good faith” (emphasis added). The flipside of that is that posters are not entitled to a trial prior to Forum users choosing to dismiss their posts as not reflecting good-faith participation in the Forum, nor are they entitled to have their entire 42-minute article read before people downvote those posts (cf. your concern about an average read time of five minutes).
I understand it's not a criminal trial. But expecting someone to read an article before downvoting it, or before attacking strawman arguments, seems quite reasonable as a standard for the Forum. This EA Forum post we are commenting on suggests that I am supporting Nazi ideology (which I am not!). How can someone recognize this without actually reading?
This incentivizes these sorts of critiques and creates a culture of fear around discussing important but taboo ideas. If a genuinely important idea were to arise, it could end up neglected because people don't give it a fair chance.
Thank you for grappling with the actual content of the article. I'll state that your characterization of me as acting in bad faith feels quite unfair. It seems strange that I would go through all this effort to respond if I were just trolling or trying to mess with EA Forum users.
I don’t enjoy responses that are 3x as long as the message I wrote.
I don’t know how to respond to this.
So I think another issue is that you don't really make clear what your policy proposal is.
It's clearly laid out in a list of 8 points in the conclusion. I am not advocating for anything awful, illegal, or coercive. I don't want to trial it there—I want the developing world to have access to this technology so couples can voluntarily use it.
Are you deliberately framing your ideas in a way that they can be misinterpreted?
No. I think people are either not reading it or being deliberately dishonest and I don’t think it’s because of the title.
That’s fine. I was adding more clarity.
I think the title is accurate and the content of my article is clear that I am not suggesting violating anyone’s consent or the law. Did you read the article? I don’t see how you draw these conclusions from the title alone or how the title is misleading. I gave policy recommendations which mostly involved funding research.
I am not advocating for pushing changes on anyone. I am advocating for the voluntary use of this technology and accelerating research. See more in my response on that comment.
You say that you are writing under a pseudonym because you believe “What I write should speak for itself and be judged on its own merits and accuracy.” But you are not affording me the same decency.
Is this sort of attack really a good way to evaluate whether my argument is correct and/or moral? Could the best approach really be collecting offensive quotes from other writers or articles? No, definitely not.
This person is creating a discussion of race and eugenics and trying to make me look very bad by highlighting extremely offensive but unrelated content. Quotations from cited authors or people who run a journal are quite irrelevant to my argument, which is aligned with EA values. These sorts of attacks distort your intuitions and make you feel moral disgust, but are largely irrelevant to my core argument. The author took a quote from an argument where I was trying to emphasize how much of a rights violation restrictions on immigration are, and presented it in a misleading way; see Nathan Young's comment. Right after that quote, I reveal that I am against closed borders and birth restrictions (with the extreme exception of something like brother-sister marriage).
It seems the efforts to throw mud at me are what is actually inflammatory. The original post is not inflammatory in tone, nor does it dive into race. It is the attackers of the post who are bringing up the upsetting content to tarnish my reputation. There is a similar attack pattern against EA, which aims to associate it with crypto-fraud. Many people in EA recognize these attacks as unfair because the core mission of EA is virtuous. If you are actually worried about optics, then aggressively broadcasting to everyone that EA is hosting "white supremacists" and posting offensive (and unrelated) quotes does not seem to be helping.
I feel this is a wildly unfair attack. And it seems like people don't want me to defend myself, my reputation, or my article. They just want me to go away for optics reasons, but that lets censors win and incentivizes this sort of behavior of digging up quotes and smearing people.
In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.
The arguments are generally good. What can I do to defend against mere assertion but ask that people read the article and think for themselves?
If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake.
I am not misinformed. I worked hard on my article. Many people are not even reading what was written or engaging seriously with it except to claim that citations are racist.
It is sad to see EAs advocate for censorship.
I was not trying to implement a strange voluntary option. I was giving a hypothetical to make it apparent how egregious immigration restrictions are in terms of their harm. The argument compared how extreme a violation birth restrictions are with restrictions on immigration, which can have extreme downsides. As I say in the article, I am against closed borders and restrictions on birth except in extreme circumstances (a brother-sister-marriage-type situation). The reductio was supposed to push the reader toward supporting open borders. However, I think my willingness to make a socially undesirable comparison between two rights was used against me.
I wrote that article years ago and it’s hardly relevant to whether or not my other article is true and moral. This sort of reasoning and argumentation style should be rejected outright. I think this person is just trying to throw mud on my reputation so people try to censor me. Quite unvirtuous in my view.
Also why is my original post bad?
I am acting in good faith, but it seems that you are incredulous. I have been interested in EA and attending meetups for years. The content of the article explains the argument from an EA perspective. What can I do to prove that I am acting in good faith? What aspect of the article suggests that I am bad faith? Did you read the article before accusing me of bad faith?
Why would I invest what is probably like 100 hours into writing my article and defending it if it was just a simple bad faith attempt to “inject a discussion about race, genetics, eugenics, and intelligence in EA circles.”
I discuss environmental interventions and compare them with the benefits of genetic enhancement technology in the article. I discussed the potential for Iodine in the comments, but the relative benefits are constrained in a way that they are not with enhancement.
I can empathise with people who guessed they’d disagree and so downvoted without reading closely.
It seems unfair to me that people are downvoting me without reading my article. What function does the downvote serve except to suppress ideas if those using it are not even reading the article? This seems out of line with EA virtues.
At one point (no longer, it appears), my article was not even searchable with the EA Forum search function, and the analytics suggested that the average person who viewed it was reading about 10% (4-5 minutes of a 40-minute read) of it. Perhaps they are reading it but not "closely"; I cannot be certain. Maybe the figure is inflated by people who are responding to comments or just looking again.
But I have responded in a respectful manner to constructive comments. If someone has constructive thoughts, they can share them in the comments. I think that would contribute more to the EA community and improve people’s ability to think clearly about the issue than merely downvoting.
I have also asked how to better advocate for my cause, and still received many downvotes. What can I do and what should I do to avoid being downvoted by people who are (most likely) not even reading my article?
In general I think large changes shouldn’t happen without consent.
In this article, I am not advocating for violating consent. Why do you think otherwise? I said:
The era of “new eugenics” characterized by the use of reprogenetic technology will be morally incomparable to the atrocities of the past because this form will not only be harmless but actually be consensual and improve human welfare. An irony of the “eugenics” objection to some forms of reprogenetic technology is that the new eugenics facilitates better-informed consensual reproductive decisions, while those who want to ban such technology are advocating coercion in reproduction. [...]
New eugenics, characterized by reprogenetic technology and voluntary choice, …
In my policy proposal, I am not advocating for forcing this on people. I do say:
In governments with restrictions on selection on the basis of cognitive traits, advocates for genetic enhancement should lobby for reproductive autonomy.
None of the policy proposals involve forcing this on anyone. I want to make the technology available for voluntary use, and I think that should be EA's aim.
Seems a pretty bad idea to push onto poor nations when rich nations don’t allow this. Note how this is different from vaccinations and cash transfers which are both legal and desired by those receiving them.
The technologies that I mention are emerging technologies, and most of them have yet to be created. I want EA to accelerate the advances so people can voluntarily use the technology. I am not advocating violating consent.
The use of PGT is not entirely legal for cosmetic/IQ in all countries (for example prohibited in Germany), but it is legal in the US and some others. IVF is legal almost all over the world. Besides, restricting people from making voluntary reproductive choices is actually coercion, not lifting legal restrictions. Letting women have reproductive autonomy is not forcing this on poor nations.
If westerners want to genetically enhance their kids they can, and if we give money to those in poverty and they decide to use it for genetic enhancement (unlikely), fair enough. But trialling things that we in the west find deeply controversial in poorer nations seems probably awful, whether it’s on an individual or national level.
See my recommendations in the conclusion. My argument is that people will voluntarily adopt it if the technology is available. No direct spending on subsidizing or “trialling” on poorer nations is necessary.
The case seems more interesting to remove the ban on voluntary genetic enhancement in the West and see how that goes. I’d read estimates of impact there.
Polygenic screening is available in the US and currently practiced. Research is being done to improve gene editing. Startups and researchers are working on IVG. For example, research is underway on iterated meiotic selection and sperm selection. I don't believe any of that is illegal, at least not now. I think legalizing gene editing of embryos is probably not a good idea right now because the tech isn't ready yet. But most of my focus was on accelerating the technology. I want it to be safe.
I did provide a discussion of the expected impact. One of the publicly available companies, Genomic Prediction, investigated the impact as measured in DALYs. I mentioned this in the article:
There are worthwhile benefits to PGT-P. Widen et al. (2022) constructed a health index that led to a gain of roughly 4 DALYs among individuals of European ancestry. The benefits of selection among siblings of European ancestry would be between 3 and 4 DALYs.
I also discussed the expected return from selection for IQ among currently available batches in section IV. And I discussed why IQ was important and what the expected impact could look like.
It seems that I am failing in communicating my message through my article, so please help me to be better. What more can I do to be persuasive or better present my message in a way that aligns with EA virtues? I genuinely believe this is an important cause area and I want to help humanity.
Thanks Pat. That is something good to consider.
Great thoughts. I will need to think more deeply about how to make this possible cost wise. We need a large sample to find the genes, but the brain imaging might make this challenging.
I recently wrote a post on the EA Forum about turning animal suffering into animal bliss using genetic enhancement. Titotal raised a thoughtful concern: “How do you check that your intervention is working? For example, suppose your original raccoons screech when you poke them, but the genetically engineered racoons don’t. Is that because they are experiencing less pain, or have they merely evolved not to screech?”
This is a very good point. I was recently considering how we could be sure not to just change the expressions of suffering, and I believe I have determined a means of doing so. In psychology, it is common to use factor analysis to study latent variables—the variables that we cannot measure directly. It seems extremely reasonable to think that animal pain is real; the trouble is measuring it. We could try to get at pain by collecting a huge array of behaviors and measures associated with pain (heart rate, cortisol levels, facial expressions, vocalizations, etc.) and finding a latent factor of suffering that accounts for some of these behaviors.
To determine whether an intervention successfully changes the latent factor of suffering for the better, we could test for measurement invariance, which is an important step in making a relevant comparison between two groups. This basically tests whether the factor loadings remain the same between groups. A genuine improvement would mean a reduction across all of the traits associated with suffering. This would seem relevant for environmental interventions as well.
As an illustration: imagine that I measure the welfare of a raccoon by the amount of screeching it does. A bad intervention would be taping the raccoon's mouth shut. This would reduce screeching, but there is no good reason to think it would alleviate suffering. However, imagine I gave the raccoon a drug and it acted less stressed, screeched less, had lower cortisol, and started acting much more friendly. That would be much better evidence of a true reduction in suffering.
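To make the latent-factor idea concrete, here is a minimal sketch in Python using simulated data; the indicator names, loadings, and noise levels are all hypothetical assumptions for illustration, not values from any real animal-welfare study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Latent "suffering" score for each animal (not directly observable).
suffering = rng.normal(size=n)

# Four observable indicators (e.g. heart rate, cortisol, vocalization,
# facial score), each loading on the latent factor plus independent noise.
# The loadings are made-up values for this sketch.
loadings = np.array([0.8, 0.7, 0.9, 0.6])
observed = suffering[:, None] * loadings + rng.normal(size=(n, 4)) * 0.5

# Extract a single factor, using the first principal component as a
# simple stand-in for a one-factor model.
centered = observed - observed.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
factor_scores = centered @ vt[0]

# The recovered factor tracks the true latent variable closely.
r = abs(np.corrcoef(factor_scores, suffering)[0, 1])
print(f"correlation with true latent factor: {r:.2f}")
```

Measurement invariance would then amount to checking that an intervention leaves this loading pattern intact while shifting the latent mean, rather than suppressing a single indicator (the taped-mouth case).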
There is much more to be defended in my thesis, but this felt like a thought worth sharing.
From a utilitarian perspective, it would seem there are substantial benefits to accurate measures of welfare.
I was listening to Adam Mastroianni discuss the history of trying to measure happiness and life satisfaction, and it was interesting to learn how stable the measures have been across the decades. Could it really be that increases in material wealth do not produce large increases in happiness and satisfaction for humans? If so, efforts to increase GDP and improve the standard of living beyond the basics may be misdirected.
Furthermore, it seems like it would be extremely helpful in terms of policy creation to have an objective unit like a util.
We could compare human and animal welfare directly, and genetically engineer animals to increase their utils.
While such efforts might not be super successful, it would still seem very important merely to improve objective measures of wellbeing by, say, 10%.
This is probably a good exercise. I do want to point out a common way of getting existential risks wrong: if someone had been right about doomsday, we would not be here to discuss it. That is a huge survivorship bias. Even catastrophic events that merely lessen the number of people will be systematically underestimated. This phenomenon is the anthropic shadow, which is relevant to an analysis like this.
I think [April Fools] added to the title might be a good addition since the tag is hard to see.
In conversations about x-risk, one common mistake is to suggest that we have yet to invent something that kills all people, so the historical record is not on the side of “doomers.” The mistake is survivorship bias, and Ćirković, Sandberg, and Bostrom (2010) call this the Anthropic Shadow. Using base-rate frequencies to estimate the probability of events that reduce the number of people (observers) will result in bias.
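A toy Monte Carlo makes the bias concrete (my own illustration with made-up numbers, not from the paper). Suppose each period carries a true probability p of a catastrophe that halves the population. Timelines with more catastrophes contain fewer observers, so a randomly chosen observer looks back on a history with fewer catastrophes than the true rate implies:

```python
# Sketch of the anthropic-shadow bias under assumed parameters (p, T, halving).
import numpy as np

rng = np.random.default_rng(1)
T, p, timelines = 20, 0.2, 100_000            # periods, true per-period risk, simulated timelines
k = rng.binomial(T, p, timelines)             # catastrophes in each timeline
pop = 0.5 ** k                                # each catastrophe halves the final population
true_rate = k.mean() / T                      # frequency across all timelines, close to p
observer_rate = np.average(k / T, weights=pop)  # frequency a randomly sampled observer sees

print(true_rate, observer_rate)               # observer_rate comes out well below true_rate
```

Weighting by population is what encodes “more observers exist in luckier timelines”: the historical record available to us is drawn from the survivor-heavy side of the distribution, so naive base rates understate the risk.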
If there are multiple possible timelines and AI p(doom) is very high (and soon), then we would expect a greater frequency of events that delay the creation of AGI (geopolitical issues, regulation, internal conflicts at AI companies, other disasters, etc.). It might be interesting to see whether superforecasters consistently underpredict events that would delay AGI, though figuring out how to interpret that information would be quite challenging unless the effect were blatantly obvious.
I guess more likely is that I’m born in a universe with more people and everything goes fine anyway. This is quite speculative and roughly laid out, but something I’ve been thinking about for a while.
I see. Well, that changes my perspective. Originally, I assumed that you did not give away everything except what is necessary to live. Given that you are already giving maximally, donating a kidney or liver lobe goes beyond that, so it makes more sense why you are asking the question. I don’t think analyzing QALYs is strange in general.
You are quite the EA! Congrats
Such an altruistic sacrifice is quite admirable, but why not just commit to donating more money to effective charities (global development/animal welfare)? That is much more efficient and effective, unless you place a low cost on undergoing a kidney removal. A cost-effectiveness analysis seems somewhat strange in this context.
I still maintain that I am not parroting Nazi arguments or citing Nazi sources. I think that’s an inaccurate thing to say, and it is quite an accusation. MQ is not a Nazi journal, and from what I read in the article, not even that guy is a Nazi if we are being technical. The logic here seems to be that he is basically a Nazi, so MQ is basically a Nazi journal, so I’m basically parroting Nazi arguments and citing Nazi sources. This is like calling EA a “crypto-scam-funded organization.” I especially take issue with the claim that I’m parroting Nazi arguments. Do you think that is a fair assessment after reading my article?
I think the purpose of saying such a thing is to throw mud over the whole article because some citations come from a journal connected to a racist. But this is a weak way to argue: the fact that the person who ran the journal that published a cited article says bad things and associates with bad people is very far from the central point of the argument. Critiques should strike at the heart of the argument instead of introducing moral disgust about some non-central aspect.
If you had a good critique of the empirical or moral claims, you should put that forward. The moral argument is wholly EA: we should improve the world’s welfare through charitable action (nothing wrong here!). So this is just a claim that some of the citations are questionable in their empirical quality. Fine, throw out all the MQ citations; I could rewrite my article without them, and I am considering doing so. I still maintain that (1) national measures of cognitive ability are associated with good outcomes, and (2) we can use genetic enhancement to boost them.
My overall argument still holds IMO, so this feels like nitpicking used to distort people’s intuitions about my article by introducing a lot of moral disgust and then trying to get me banned. That seems like the opposite of what EAs should do.
Why do you care, and how do you know whether my name is real or not? Were you searching the web in an effort to doxx me?