Any updates here? I share Devon’s concern: this news also makes me less likely to want to donate via EA Funds. At worst, the fear would be this: so much transparency is lost that donations go into mysterious black holes rather than funding effective organizations. What steps can be taken to convince donors that that’s not what’s happening?
What is your stance regarding aiming your output at an EA audience vs. a wider audience? (Academic & governmental audiences, etc.?)
It seems that a large portion of output begins on your blog and in EA Forum posts. What other venues do you aim at, if any?
To what extent do you regard tailoring your work to academic journals with “peer-review” as counterfactually worthwhile?
California bans fur! https://www.nytimes.com/2019/10/14/style/fur-ban-california.html
For the cross-referencing, did they ask your permission first? Hopefully so. Otherwise, there can be the awkward situation where one does not actually want to work at the organization to which one has been referred.
Amazing idea! I’ll be thinking and talking more about this, including with the animal-issue lobbying organizations I’ve worked with here in the US and California.
For the animal advocacy space, my anecdata suggest that the talent gap is in large part a product of funding constraints. Most animal charities pay rather poorly, even compared to other nonprofits.
Thanks for your engaging insights!
this sounds like you’re talking about a substantive concept of rationality
Yes indeed!
Substantive concepts of rationality always go under moral non-naturalism, I think.
I’m unclear on why you say this. It certainly depends on how exactly ‘non-naturalism’ is defined.
One contrast between the Gert-inspired view I’ve described and that of some objectivists about reasons or substantive rationality (e.g. Parfit) is that the latter tend to talk about reasons as brute normative facts. Sometimes it seems they have no story to tell about why those facts are what they are. But the view I’ve described does have a story to tell. The story is that we had a certain robust agreement in response toward harms (aversion to harms and puzzlement toward those who lack the aversion). Then, as language developed, we coined terms to refer to the things that tend to elicit these responses.
Is that potentially the subject of the ‘natural’ sciences? It depends: it seems to be the subject not of physical sciences but of psychological and linguistic sciences. So it depends whether psychology and linguistics are ‘natural’ sciences. Does this view hold that facts about substantive rationality are not identical with or reducible to any natural properties? It depends on whether facts about death, pain, injury, and dispositions are reducible to natural properties.
It’s not clear to me that the natural/non-natural distinction applies all that cleanly to the Gert-inspired view I’ve delineated. At least not without considerably clarifying both the natural/non-natural distinction and the Gert-inspired view.
you can be a constructivist in two different ways: Primarily as an intersubjectivist metaethical position, and “secondarily” as a form of non-naturalism.
This seems like a really interesting point, but I’m still a little unclear on it.
Rambling a bit
It’s helpful to me that you’ve pointed out that my Gert-inspired view has an objectivist element at the ‘normative bedrock’ level (some form of realism about harms & rationality) and a constructivist element at the level of choosing first-order moral rules (‘what would impartial, rational people advocate in a public system?’).
A question that I find challenging is, ‘Why should I care about, or act on, what impartial, rational people would advocate in a public system?’ (Why shouldn’t I just care about harms to, say, myself and a few close friends?) Constructivist answers to that question seem inadequate to me. So it seems we are forced to choose between two unsatisfying answers. On the one hand, we might choose a minimally satisfying realism that asserts that it’s a brute fact that we should care about people and apply moral rules to them impartially; it’s a brute fact that we ‘just see’. On the other hand, we might choose a minimally satisfying anti-realism that asserts that caring about or acting on morality is not actually something we should do; the moral rules are what they are and we can choose to follow them if our heart is in it, but there’s not much more to it than hypotheticals.
So you know who’s asking, I happen to consider myself a realist, but closest to the intersubjectivism you’ve delineated above. The idea is that morality is the set of rules that impartial, rational people would advocate as a public system. Rationality is understood, roughly speaking, in terms of the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms—things like pain, death, disability, injury, loss of freedom, loss of pleasure. There’s not much more objective or “facty” about rationality than the fact that basically all vertebrates are disposed to be averse to those things, and it’s rather puzzling for someone not to be. People can be incorrect about whether a thing is harmful, just as they can be incorrect about whether a flower is red. But there’s nothing much more objective or “facty” about whether the flower is red than that ordinary human language users on earth are disposed to see and label it as red.
I don’t know whether or not you’d label that as objectivism about color or about rationality/harm. But I’d classify it as a weak form of realism and objectivism because people can be incorrect, and those who are not reliably disposed to identify cases correctly would be considered blind to color or to harm.
These things I’m saying are influenced by Joshua Gert, who holds very similar views. You may enjoy his work, including his Normative Bedrock (2012) or Brute Rationality (2004). He is in turn influenced by his late father Bernard Gert, whose normative ethical theory Josh’s metaethics work complements.
One thought is that if morality is not real, then we would not have reasons to do altruistic things. However, I often encounter anti-realists making arguments about which causes we should prioritize, and why. The worry is that if morality boils down to mere preference, it is unclear why anyone else should agree with the anti-realist’s preferences.
What do you think are the implications of moral anti-realism for choosing altruistic activities?
Why should we care whether or not moral realism is true?
(I would understand if you were to say this line of questions is more relevant to a later post in your series.)
Just want to second the recommendation that interested readers visit Khorton’s very helpful link. It’s a great article, with a useful decision tree produced by 80,000 Hours & the Global Priorities Project.
The idea behind trying to end factory farming for animals’ sake is that animals who spend their whole lives on factory farms are enduring lives that are not worth living. It is better not to bring creatures into existence who would live net negative lives.
You’re right that extinction is a (very) extreme case. It’s more likely that even with a drastic reduction in factory farming, a small fraction of descendants of farmed species would be preserved—either for farming, or in zoos or similar institutions. After all, they’re easy to domesticate, having been bred over the centuries for precisely those purposes.
Another useful, well-written statement of this argument is in Brian Tomasik’s “Does Vegetarianism Make a Difference?”:
Suppose that a supermarket currently purchases three big cases per week of factory-farmed chickens, with each case containing 25 birds. The store does not purchase fractions of cases, so even if several surplus chickens remain each week, the supermarket will continue to buy three cases. This is what the anti-vegetarian means by “subsisting off of surplus animal products that would otherwise go to waste”: the three cases are purchased anyway, so consuming one or two more chickens simply attenuates the surplus.
What would happen, though, if 25 customers decided to buy tempeh or beans instead of chickens? The purchasing agent who orders weekly cases of chickens would probably buy two cases instead of three. But any given consumer can’t tell how far the store is from that cutoff point between three vs. two cases. The probability that any given chicken is the chicken that causes two cases instead of three to be purchased is 1/25. If you do avoid the chicken at the cutoff point, you prevent a whole case (25 chickens) from being ordered next week. Thus, the expected value of any given chicken is (1/25) * 25 = 1 chicken, just like common sense would suggest.
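For what it’s worth, the threshold logic in that passage is easy to check with a quick simulation. Here is a minimal Python sketch; the uniform demand range is my own simplifying assumption, not part of Tomasik’s argument:

```python
import random

CASE_SIZE = 25      # chickens per case, as in the example above
TRIALS = 100_000

def chickens_ordered(weekly_demand: int) -> int:
    """The store orders whole cases only: just enough to cover demand."""
    cases = -(-weekly_demand // CASE_SIZE)  # ceiling division
    return cases * CASE_SIZE

# A single customer can't tell where demand sits relative to the next case
# cutoff, so model demand as uniform across one case's worth of values
# (here: somewhere in the three-case range).
total_reduction = 0
for _ in range(TRIALS):
    demand = random.randint(51, 75)  # assumed range covering three cases
    # Effect of one customer forgoing one chicken this week:
    total_reduction += chickens_ordered(demand) - chickens_ordered(demand - 1)

print(total_reduction / TRIALS)  # comes out near 1.0, matching (1/25) * 25
```

Most weeks the single forgone chicken changes nothing, but in the roughly 1-in-25 case where demand sits exactly at a cutoff, the order drops by a whole case, which is where the expected value of one chicken comes from.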
Joey, do you think you would adjust this for different circumstances—say, if living in a more expensive region, facing medical hardship, or having to support an elderly family member? For example, assuming you’re renting a room for $440 USD, rents in the Bay Area would be anywhere from 200% to 500% higher (roughly $1,320 to $2,640). If for some reason you wound up here, would you take the price difference into account, or still try to go with the global average?
Your point is well taken. Indeed, the goal is a world where everyone’s interests are given the same weight as the equivalent interests of others, regardless of species.
It is probable that lofty philosophical visions motivate and inspire people, just as you indicate.
I suppose the reason we don’t always lead with that kind of messaging is that it can scare away those who aren’t yet ready to challenge the “meat” industry and who worry about slippery slopes. That includes lawmakers whose constituents include scores of entrepreneurs who sell animal bodies as food.
If BCA were a major animal protection organization such as HSUS or PETA, I would mostly agree with you. But we are an all-volunteer force of around four dedicated members in one of the most progressive cities in the U.S. What we should prioritize is not building awareness but accumulating inspiring legislative victories that will help mobilize those who are already aware of animal issues.
Rather than “run[ning] around and try[ing] to do something about every incidence of suffering [we] see”, we are prioritizing attainable, potentially replicable, key legislative victories.
Incidentally, we’ve begun to think that if we run out of such potential initiatives, we should switch focus to educating local progressive political leaders about farmed animal issues.
Speaking specifically for Fur Free Berkeley, and speculating on behalf of Fur Free West Hollywood, the reasons for focusing on banning fur were that it was:
attainable yet challenging
a meaningful step in an incremental progression toward further, more all-encompassing reforms
a farmed animal issue with which the general public has substantial sympathy
an industry wherein welfare misdeeds are egregious and relatively well-understood
an issue on which both welfare reformers and staunch abolitionists can agree (because it is a form of outright prohibition rather than welfare-oriented reform)
a form of animal farming whose victims people can thoroughly sympathize with, encouraging further sympathy for other varieties of farmed animals, including the massive classes of individuals you mention
Specifically in the case of going for a second ban, there were additional advantages:
The legal language was already formulated
The WeHo law had already been successfully defended in federal court
As for the reasoning process for pursuing a given item, our unofficial criteria tend to relate to attainability (especially whether, in talking with legislators, they feel excited enough about an idea to sponsor it), defensibility (how worried the bill’s backer would be about backlash), and momentum for the broader animal advocacy movement.
We do have further legislation ideas, some of which would make Berkeley the first to accomplish a particular feat. While we’re not ready to announce anything yet, you can stay tuned on what we’re up to by following us on Facebook: https://www.facebook.com/BerkeleyCoalitionforAnimals/
How We Banned Fur in Berkeley
I worry that SI will delineate lots of research questions usefully, but that it will be harder to make needed progress on those questions. Are you worried about this as well, and if so, are there steps to be taken here? One idea is promoting the research projects to graduate students in the social sciences, such as via grants or scholarships.
The vague term “great” gets used a lot in this post. If possible, using more precise concepts for what you’re looking for (what counts as “great” in the sense you’re using the term) could be helpful moving forward. By homing in on the particular kind of skill you’re seeking, you’ll help identify those who have it. And you may help yourselves confirm which specific skills truly are essential to the position you’re seeking to fill.
(Also, I think there are more ways to be a “great” software engineer than being able to write a substantial pull request for a major machine learning library with minimal ramp-up time. So other wording can help you be more precise, as well as kinder to engineers who are great in other ways.)