I second Hay’s suggestion of making a more formal argument. The unstructured sections of this post made it unclear which propositions you took to support which.
I’d also note that your definition of “objectivity” at the beginning makes it trivially true that morality is sometimes subjective, since people are surely at least sometimes biased by their emotions when discussing morality.
An alternative definition of “objectivity” that is pretty standard within meta-ethics goes something like this: X is objective if it is not constitutively dependent on the attitudes/reactions of observers. The funniness of a comedian is subjective because it is constituted by how amused the comedian makes people feel. In contrast, the solidity of a table is objective because it does not depend on anyone’s reactions.
I don’t have a problem with this in principle—I think immigration restrictions in the US are unjustly restrictive. But I think there are many problems in practice. For example:
There are legal penalties for immigration marriage fraud, including up to 5 years in prison and a $250,000 fine.
Many EAs advise following the math when trying to improve welfare but caution against breaking any widely held social/moral/legal norms. Lying about the purpose of your marriage would certainly count as one of those norms.
While there might not be substantial monetary costs to marrying someone, there are social costs:
- Without substantial time getting to know the immigrant you’re marrying, you might not be familiar with their personality—they might be abusive or mentally unwell.
- You won’t be able to marry someone else who’s more fitting for you romantically.
- People who know you might disapprove of your dishonesty.
You say that this “plainly pencils out as optimal,” but you don’t provide the penciling. I think a full accounting of this decision would show that it’s probably unwise.
Regarding 2: It can still be worth talking with someone who isn’t willing to change their mind. For example, I know nothing about physics, so I wouldn’t expect a physicist to seriously entertain my speculations about the subject. But I think I could learn a great deal about physics by talking to a physicist, which makes the conversation worthwhile.
Also, in certain contexts, it can sound somewhat rude to ask someone if they are willing to change their mind. It can implicitly suggest that you think the other person is closed-minded, so it doesn’t make sense to ask it explicitly. (Likewise, I think each of the questions in your post is a useful rhetorical tool in the right context, but they don’t all need to be asked explicitly, even in an ideal conversation.) An analogy: If you ask, “Did you smoke anything when you came up with that thought?” it implies that you have a low opinion of the other person’s intelligence.
Regarding 5: It seems false to say other people are never evil. Sometimes people do genuinely hold different values from us. And if they got what they wanted, it would be a significant setback to our own values. For example, some people place no weight on the welfare of humans in other countries or of non-human animals.
I’m not understanding the distinction you’re making between the “experience” and the “response.” In my example, there is a needle poking someone’s arm. Someone can experience that in different ways (including feeling more or less pain depending on one’s mindset). That experience is not distinct from a response, it just is a response.
And again, assuming the experience of pain is inescapable, why does it follow that it is necessarily bad? It can’t just be because the experience is inescapable. My example of paying attention to my fingers snapping was meant to show that merely being inescapable doesn’t make something good or bad.
I agree that many of the goals that people pursue implicitly suggest that they believe pleasure and the avoidance of pain are “value-laden”. However, in the links I included in my previous comment, I suggested there are people who explicitly reject the view that this is all that matters (a view known as hedonism in philosophy, not to be confused with the colloquial definition that prioritizes short-term pleasures). And you’ve asserted that hedonism is true, but I’m not sure what the argument for it has been.
So just to clarify, I see you as making two points:
1. If something causes pain/suffering, then it is necessarily (intrinsically) bad.
2. If something is bad, then it is only because it causes pain/suffering.
I’m looking for arguments for these two points.
> The foundational claim of inescapably value-laden experiences is that we do not get to choose how something feels to us
Well… this isn’t quite right. A stimulus can elicit different experiences in a person depending on their mindset. Someone might experience a vaccine with equanimity or they might freak out about the needle.
But regardless, even if some particular experience is inescapable, I don’t see how it would follow that it’s inherently value-laden. Like, if I snap my fingers in front of someone’s face, maybe they’ll inescapably pay attention to me for a second. It doesn’t follow that the experience of paying attention to me is inherently good or bad.
> I challenge you to think about values we would agree are moral and see if you can derive them from pleasure and suffering
Some people explicitly reject the hedonism that you’re describing. For example, they’d say that experiencing reality, the environment, or beauty are valuable for their own sake, not because of their effect on pleasure and suffering. I don’t think you’ve given a reason to discard these views.
Why think that pain is inherently bad? (Are you using “bad” as synonymous with “dispreferred”?) And why think that pleasure and pain are the only things that are value-laden?
There’s a common criticism made of utilitarianism: Utilitarianism requires that you calculate the probabilities of every outcome for every action, which is impossible to do.
And the standard response to this is that, no, spending your entire life calculating probabilities is unlikely to lead to the greatest happiness, so it’s fine to follow some other procedure for making decisions. I think a similar sort of response applies to some of the points in your post.
For example, are you really going to do the most good if you completely “set aside your emotional preferences for friends and family”? Probably not. You might get a reputation as someone who’s callous, manipulative, or traitorous. Without emotional attachments to friends and family, your mental health might suffer. You might not have people to support you when you’re at your low points. You might not have people willing to cooperate with you to achieve ambitious projects. Etc. In other words, there are many reasons why our emotional attachments make sense even under a utilitarian perspective.
And what if we’re forced to make a decision between the life of our own child and the lives of many others? Does utilitarianism say that our own child’s death is “morally agreeable”? No! The death of our child will be a tragedy, since presumably they could have otherwise lived a long and happy life if not for our decision. The point of utilitarianism is not to minimize this tragedy. Rather, a utilitarian will point out that the death of someone else’s child is just as much a tragedy. And 10 deaths will be 10 times as much a tragedy, even if those people aren’t personally related to you. This seems correct to me.
I do think EA would benefit from appealing more to conservatives. According to the most recent survey, EA is heavily leftist. And I don’t see any good reason for this.
The 80,000 Hours website lists these as the most pressing world problems:
- Risks from AI
- Catastrophic pandemics
- Nuclear weapons
- Great power conflict
- Factory farming
- Global priorities research
- Building EA
- Improving decision making (especially in important institutions)
Apart from factory farming and maybe pandemic preparedness, none of these issues seem especially aligned with the political left. These are issues that everyone can get on board with. No one wants AI to kill everyone. No one wants North Korea to launch a nuclear missile.
So this doesn’t seem to me like a case of failing to appeal to conservative values. It seems more like a failure to appeal to conservatives, period. Anecdotally, a lot of outreach happens through people’s loose social networks. And if people only have leftist friends, then they’re only going to recruit more leftist people into EA.
I think it would be worth actively seeking out more conservative spaces to present EA ideas. I’d expect the College Republicans on many campuses to be open to learning more about policy in AI, nuclear weapons, and great power conflict. And I’d expect many Christian groups to be open to hearing about effective uses for their charitable donations.
I’m familiar with psychology. But the causes and consequences of poverty are beyond my expertise.
In general, I think the case for alleviating poverty doesn’t need to depend on what it does to people’s cognitive abilities. Alleviating poverty is good because poverty sucks. People in poverty have worse medical care, are less safe, have less access to quality food, etc. If someone isn’t moved by these things, then saying it also lowers IQ is kind of missing the point.
Another theme in your post is that those in poverty aren’t to blame, since it was the poverty that caused them to make their bad decisions. I think a stronger case can be made by pointing to the fact that people don’t choose where they’re born. (And this fact doesn’t depend on any dubious psychology studies.) For someone in Malawi making $5/day, it will be hard to think about saving for retirement.
The link I sent also discusses an article that meta-analyzed replications of studies using scarcity priming. The meta-analysis includes a failed replication of a key study from the Mani et al (2013) article you discuss in your post.
The Mani article itself has the hallmarks of questionable research practices. It’s true that each experiment has about 100 participants, but since those participants are split across 4 conditions, that’s the bare minimum by the standard of the time (n = 20–30 per group). The main results also have p-values between .01 and .05, which is an indicator of p-hacking. And yes, the abnormally large effect sizes are relevant. An effect as large as the one claimed by Mani et al (d = .88–.94) should be glaringly obvious. That’s close to the effect size for the association between height and weight (r = .44 → d = .98).
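For anyone who wants to double-check that conversion: with equal group sizes, the standard formula relating a point-biserial correlation to Cohen’s d gives

$$d = \frac{2r}{\sqrt{1-r^2}} = \frac{2(.44)}{\sqrt{1-.44^2}} \approx .98$$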
And more generally at this point, the default view should be that priming studies are not credible. One shouldn’t wait for a direct failed replication of any particular study. There’s enough indirect evidence that that whole approach is beset by bad practices.
> One phenomenon that has arisen through these explorations is that defectors gain a short term, relative advantage, while cooperators benefit from a sustained long term absolute advantage
It seems like you’re drawing a general conclusion about cooperation and defection. But your simulated game has very specific parameters: the payoff matrix, the stipulation that nobody dies, the stipulation that everyone who interacts with a defector recognizes this and remembers it, the stipulation that there are only two types of agents, etc. It doesn’t seem like any general lessons about cooperation/defection are supported by such a hyper-specific setup.
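To make that concrete, here’s a minimal sketch with toy parameters of my own invention (not the actual model from the post): a repeated random-pairing game between cooperators and defectors, where a single stipulation (whether cooperators recognize and shun known defectors) determines which type ends up ahead.

```python
import random

# Toy payoff matrix, for illustration only (not the OP's parameters).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def run(rounds=20_000, n_coop=50, n_defect=50, memory=True):
    types = ["C"] * n_coop + ["D"] * n_defect
    scores = [0.0] * len(types)
    known_defectors = set()
    for _ in range(rounds):
        i, j = random.sample(range(len(types)), 2)
        # The "everyone recognizes and remembers defectors" stipulation:
        # cooperators refuse to play anyone known to have defected.
        if memory and "C" in (types[i], types[j]) and known_defectors & {i, j}:
            continue
        scores[i] += PAYOFF[(types[i], types[j])]
        scores[j] += PAYOFF[(types[j], types[i])]
        known_defectors.update(k for k in (i, j) if types[k] == "D")
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([s for s, t in zip(scores, types) if t == "C"]),
            mean([s for s, t in zip(scores, types) if t == "D"]))

print(run(memory=True))   # cooperators end up ahead on average
print(run(memory=False))  # defectors end up ahead on average
```

With the memory stipulation on, cooperators pull ahead; turn it off and defectors win. Neither run supports a general lesson about cooperation, which is exactly the worry.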
I enjoyed this post and this series overall. However, I would have liked more elaboration on the section about EA’s objectionable epistemic features. Only one of the links in this section refers to EA specifically; the others warn about risks from group deliberation more generally.
And the one link that did specifically address the EA community wasn’t persuasive. It made many unsupported assertions. And I think it’s overconfident about the credibility of the literature on collective intelligence, which IMO has significant problems.
FWIW the study on scarcity priming that you cite on your website has failed to replicate.
It might help to provide a short summary of the main points discussed in your post.
You spend over 1,000 words saying that Sam Harris is correct. But at no point do you provide an argument for thinking he’s correct. You simply assert it. Over and over.
I downvoted this post. I watched the first hour of the video and was very unimpressed by the “argument” in it. It seems to be a mix of implicit conspiracism, irrelevant tangents, and intro philosophy of science.
It does (correctly) point out that the replication crisis revealed many weaknesses in the way science has been conducted, but the discussion is superficial. And whereas most scientists who learn about the replication crisis advocate for greater rigor (e.g. larger sample sizes, more diverse samples, preregistration), the video implies that the real problem is that scientists have been making some unwarranted metaphysical/ontological assumptions. For example, scientists should be more open to the idea that extrasensory perception is real??
I think a better use of time would be reading Stuart Ritchie’s book Science Fictions, which more clearly and cogently discusses the replication crisis and problems in science more generally.
I was quite surprised by bioethicists’ views on paying organ donors. I’d be curious to see what the best argument against it is. I’ve been extremely unimpressed by the arguments I’ve seen so far.
I downvoted this post for a few reasons.
1. The post meanders too much.
For example, you briefly mention free will, quantum systems, Adler, and social mobility. None of these topics are covered in much depth, and they don’t support your central claim regardless. You explicitly say we should consider ourselves personally responsible whether or not we actually have free will. But if it doesn’t matter whether or not we have free will, then don’t bring it up.

Discussing quantum systems was also not helpful. Few people have any familiarity with quantum systems, so it’s like explaining World War II in terms of Kabbalistic interpretations of the Bible. You shouldn’t expect people to say, “Ahh, now I get it!” In general, it’s better to explain things we don’t know in terms of things we do know, not the other way around. And you definitely don’t need to invoke quantum systems to get people to understand that groups of people can exhibit patterns that are hard to predict when looking at individuals.
2. Dubious social science.
For the past decade, the social sciences have been undergoing a crisis of confidence. Many studies are fraudulent or the result of shoddy theory, methods, and analysis. Growth mindset has been a prominent target of these concerns. And though I haven’t read the links to Rotter’s and Bandura’s research, it seems your summary of them doesn’t strongly support your main claim. You say there’s a correlation between believing in yourself and achieving success, therefore we should believe in ourselves. But there’s a confound here: The people who believe in themselves are more likely to have the knowledge, skills, and resources necessary to achieve success. So perhaps those are the things that drive success, not the mere belief in oneself.
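To make the confound concrete, here’s a toy simulation (all numbers invented for illustration): resources cause both self-belief and success, belief has no causal effect on success at all, and yet the two correlate substantially.

```python
import random

random.seed(0)
n = 10_000
# Hypothetical model: resources drive both variables; belief never causes success.
resources = [random.gauss(0, 1) for _ in range(n)]
belief = [r + random.gauss(0, 1) for r in resources]
success = [r + random.gauss(0, 1) for r in resources]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(corr(belief, success))  # ~0.5, despite belief doing no causal work
```

A naive reading of that correlation would be “belief drives success,” which is the same inference the post makes.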
3. Unclear argument for a double standard.
It seems like you take a scattershot approach to arguing against holding others responsible. I see a few possible arguments someone could take away from the post:
a) When holding others responsible we will often fail to consider (systemic) factors that were not in the others’ control.
b) Holding others responsible requires believing everyone will be their best selves, which is unrealistic.
c) Holding others responsible assumes we live in a meritocracy, which is unrealistic.
d) Holding others responsible doesn’t fix society-wide issues.
It’s unclear which of these you think the reader should pay most attention to, and you don’t elaborate on any of them enough to make them compelling.
It’s odd that you say the reviewer provides no support for his assertions. It seems to me like the reviewer presents quite a bit of evidence.
For example, in responding to Bregman’s claim that male control over female sexuality (and gender inequality more generally) began with the rise of agriculture, Buckner (the reviewer) mentions arranged marriages among the !Kung, a hunter-gatherer society. Buckner also references husbands beating their wives for infidelity among the Kaska, a nomadic foraging society. He also references the Ache, a hunter-gatherer society, whose elite men “monopolized many fertile women in the population.” He also references the Mi’kmaq foragers, whose elite men get priority over the women and children for prime food.
In response to Bregman’s claim that sedentism and property ownership are responsible for the origins of warfare, Buckner cites a paper by Wrangham and Glowacki, who summarize the literature: “cases of hunter-gatherers living with different societies of hunter-gatherers as neighbors show that the threat of violence was never far away.”
In response to Bregman’s claim that hunter-gatherers didn’t take ownership over inventions or tunes, Buckner contradicts this by referencing the Yolngu and Northwest Coast fisher-forager societies who do just that.
I don’t think anyone who actually read the review could honestly say, “The reviewer provides no support for his assertions.”
A couple thoughts on this:
1. Perhaps it’s true that elections are mostly sold to the highest bidder in poor, developing countries. (I’m not familiar with the research on this, and I’d be reluctant to simply trust your Wikipedia link.) Should EAs help the “better” candidate buy their way to power? It seems like this risks undermining the legitimacy of those countries’ elections.
2. It’s not clear to me that it’s easy to figure out who the better candidate is. In one’s own country that can often be difficult. Understanding the politics of a foreign country would be even harder. And I’m skeptical that we can just defer to whatever the majority of a country wants because a) it won’t always be clear what the majority wants and b) there are reasons to think the majority will be mistaken due to bias or ignorance.
And I don’t see how the footnote you cite on this point supports your position. It summarizes research about the effects of information dissemination on voters’ choices. It finds that in some cases voters change their decisions after receiving information about social policies or political candidates. In other words, it shows that the citizens were ignorant—they did not know what was happening in politics.
Moreover, the upshot of the research is that sometimes people change their minds when given information about policies or candidates. This doesn’t show that they ended up choosing the better policy or candidate.
I think the standard of evidence needs to be much higher before EAs get involved in foreign countries’ political affairs.