Hi! I strongly endorse pronatalism, and I will readily admit to wanting to reduce x-risk in order to keep my family safe.
Daniel Kirmani
What is “Effective Altruism” effective with respect to?
I Converted Book I of The Sequences Into A Zoomer-Readable Format
I’d be curious to know why people downvoted this.
Strengthening the association between “rationalist” and “furry” decreases the probability that AI research organizations will adopt AI safety proposals put forward by “rationalists”.
The EA consensus is roughly that being blunt about AI risks in the broader public would cause social havoc.
Social havoc isn’t bad by default. It’s possible that a publicity campaign would result in regulations that choke the life out of AI capabilities progress, just like the FDA choked the life out of biomedical innovation.
As Wei Dai mentioned, tribes in the EEA weren’t particularly fond of other tribes. Why should people’s ingroup-compassion scale up, but their outgroup-contempt shouldn’t? Your argument supports both conclusions.
“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent.
This reasoning seems confused. Caring more about certain individuals than others is a totally valid utility function that you can have. You can’t especially care about individual people while simultaneously caring about everyone equally. You just can’t. “Logically consistent” means that you don’t claim to do both of these mutually exclusive things at once.
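To put the point in symbols (my own sketch, not anything from the original post): write total caring as a weighted sum of individual welfares,

$$U = \sum_i w_i \, u_i,$$

where $u_i$ is person $i$’s welfare and $w_i$ is how much you weight it. Giving everyone equal weight is consistent. Giving much larger weights to the handful of people close to you is also consistent. The only incoherent move is to use the second set of weights while insisting that you hold the first.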
I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small.
Your argument proves too much:
My sex drive evolved before condoms existed. Therefore, I should extend it to my new circumstances by reproducing as much as possible.
My subconscious bias against those who don’t look like me evolved before there was a globalized economy with opportunities for positive-sum trade. Therefore, I should generalize to my new circumstances by becoming a neonazi.
My love of sweet foods evolved before mechanized agriculture. Therefore, I should extend my default behavior to my modern circumstances by drinking as much high-fructose corn syrup as I can.
I don’t like this post. It feels like a step down a purity spiral. An Effective Altruist is anyone who wants to increase net utility, not one who has no other goals.
Curing aging also fixes the demographic collapse.
20 Critiques of AI Safety That I Found on Twitter
TSMC, a Taiwanese firm, is currently the global semiconductor linchpin. What would be the implications of Chinese invasion for AGI timelines?
Edit: Kinda-answered here by Wei Dai, and in this very comment thread. My takeaways: Chinese invasion would push AI timelines into the future, but only a little. It would also disadvantage Chinese AI capabilities research relative to that of NATO.
Insects are more likely to be copies of each other and thus have less moral value.
There are two city-states, Heteropolis and Homograd, with equal populations, equal average happiness, equal average lifespan, and equal GDP.
Heteropolis is multi-ethnic, ideologically-diverse, and hosts a flourishing artistic community. Homograd’s inhabitants belong to one ethnic group, and are thoroughly indoctrinated into the state ideology from infancy. Pursuits that aren’t materially productive, such as the arts, are regarded as decadent in Homograd, and are therefore virtually nonexistent.
Two questions for you:
Would it be more ethical to nuke Homograd than to nuke Heteropolis?
Imagine a trolley problem, with varying numbers of Homograders and Heteropolites tied to each track. Find a ratio that renders you indifferent as to which path the trolley takes. What is the moral exchange rate between Homograders and Heteropolites?
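(To make the second question concrete: if the trolley hitting three Homograders feels exactly as bad to you as it hitting two Heteropolites, your implied exchange rate is 1 Heteropolite ≈ 1.5 Homograders. A rate of exactly 1 would mean the cultural differences carry no moral weight for you at all.)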
Erratum: “asymptomatic” → “asymptotic”.
While EA calls itself “effective”, we rarely see its effects, because the biggest effects are supposed to happen in the remote future, in remote countries, and to be statistical.
EA pumps resources from near to far: to distant countries, to a distant future, to other beings. At the same time, the volume of the “far” is always greater than the volume of the “near”, so the pumping will never stop and the good of one’s “neighbours” will never come. And this provokes a muted protest from the general public, which already feels that it has been robbed by taxes and so on.
Generating legible utility is far more costly than generating illegible utility, because people compete to generate legible utility in order to jockey for status. If your goal is to generate utility, to hell with status, then the utility you generate will likely be illegible.
But sometimes helping a neighbour is cheaper than helping a distant person, because we have unique knowledge and opportunities in our inner circle.
If you help your neighbor, he is likely to feel grateful, elevating your status in the local community. Additionally, he would be more likely to help you out if you were ever down on your luck. I’m sure that nobody would ever try to rationalize this ulterior motive under the guise of altruism.
I might’ve slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.
The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the UN’s Comprehensive Test Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage “rogue nations” (Iran) from developing nukes.
That being said, I don’t think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war. Also, it’s not clear that my contribution to the contract actually increased the strength of the deterrent to Iran. However, if (a descendant of) my model ends up being used by NATO, perhaps I helped out by decreasing the chance of a false positive.
Disclaimer: This was before I had ever heard of EA. Still, I’ve always been somewhat EA-minded, so maybe you can attribute this to proto-EA reasoning. When I was working on the project, I remember telling myself that even a very small reduction in the odds of a nuclear war happening meant a lot for the future of mankind.
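For the technically curious, here is a minimal sketch of the kind of pipeline that project involved: extract features from seismic waveforms and train a small neural network to label events. Everything below (the synthetic waveforms, the feature extraction, the network size) is an illustrative assumption of mine, not the actual contract code.

```python
# Illustrative sketch only: a toy earthquake-vs-explosion classifier, standing in
# for the (much more involved) contract work described above. The synthetic data,
# features, and network size are assumptions for demonstration.

import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 40.0  # sampling rate in Hz (assumed)

def synthetic_waveform(is_explosion: bool, n: int = 2048) -> np.ndarray:
    """Crude synthetic seismogram: explosions are modeled as more impulsive and
    higher-frequency than earthquakes. A cartoon of the physics, not a simulation."""
    t = np.arange(n) / FS
    decay = np.exp(-t / (1.0 if is_explosion else 4.0))
    freq = 8.0 if is_explosion else 2.0
    return decay * np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(n)

def band_features(waveform: np.ndarray) -> np.ndarray:
    """Summarize a waveform as its mean log-power in each spectrogram frequency bin."""
    _, _, sxx = spectrogram(waveform, fs=FS, nperseg=256)
    return np.log1p(sxx).mean(axis=1)

# Labeled dataset: 0 = earthquake, 1 = explosion-like event.
X = np.array([band_features(synthetic_waveform(bool(label)))
              for label in (0, 1) for _ in range(200)])
y = np.array([label for label in (0, 1) for _ in range(200)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Real discrimination work typically leans on richer features (P/S amplitude ratios, regional phase spectra, event depth), but the overall shape is the same: featurize waveforms, train a classifier, evaluate on held-out events.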
If you spend a lot of time in deep thought trying to reconcile “I did X, and I want to do Y” with the implicit assumption “I am a virtuous and pure-hearted person”, then you’re going to end up getting way better at generating prosocial excuses via motivated reasoning.
If, instead, you’re willing to consider less-virtuous hypotheses, you might get a better model of your own actions. Such a hypothesis would be “I did X in order to impress my friends, and I chose career path Y in order to make my internal model of my parents proud”.
Realizing such uncomfortable truths bruises the ego, but can also bear fruit. For example: If a lot of EAs’ real reason for working on what they do is to impress others, then this fact can be leveraged to generate more utility. A leaderboard on the forum, ranking users by (some EA organization’s estimate of) their personal impact, could give rise to a whole bunch of QALYs.
Reminder that split-brain experiments indicate that the part of the brain that makes decisions is not the part of the brain that explains decisions. The evolutionary purpose of the brain’s explaining-module is to generate plausible-sounding rationalizations for the brain’s decision-modules’ actions. These explanations also have to adhere to the social norms of the tribe, in order to avoid being shunned and starving.
Humans are literally built to generate prosocial-sounding rationalizations for their behavior. They rationalize things to themselves even when they are not being interrogated, possibly because it’s best to pre-compute and cache rationalizations that one is likely to need later. It has been postulated that this is the reason that people have internal monologues, or indeed, the reason that humans evolved big brains in the first place.
We were built to do motivated reasoning, so it’s not a bad habit that you can simply drop after reading the right blog post. Instead, it’s a fundamental flaw in our thought-processes, and must always be consciously corrected. Anytime you say “I did X because Y” without thinking about it, you are likely dead wrong.
The only way to figure out why you did anything is through empirical investigation of your past behavior (revealed preferences). This is not easy, it risks exposing your less-virtuous motivations, and almost nobody does it, so you will seem weird and untrustworthy if you always respond to “Why did you do X?” with “I don’t know, let me think”. People will instinctively want to trust and befriend the guy who always has a prosocial rationalization on the tip of his tongue. Honesty is hard.
These ones:
No, this is not one of the things that scares me. Also, birth rates decline predictably once a nation is developed, so if this were a significant concern, it would end up hitting China and India just as hard as it is currently hitting the US and Europe.
No. Adoption of Progressive ideology is a memetic phenomenon, with mild to no genetic influence. (Update, 2023-04-03: I don’t endorse this claim, actually. I also don’t endorse the quoted “worry”.)
I guess this intervention would be better than nothing, strictly speaking. The mechanism of action here is “people have kids” → {”people feel like they have a stake in the future”, “people want to protect their descendants”} → “people become more aligned with longtermism”. I don’t think this is a particularly effective intervention.
Yes.
Eh, maybe.