All opinions are my own unless otherwise stated. Geophysics and math graduate with some web media and IT skills.
Noah Scales
Simple and useful, thanks.
In my understanding, Pascal’s Mugger offers a set of rewards with risks that I estimate myself. Meanwhile, I need a certain amount of money to give to charity in order to accomplish something. Let’s assume that I don’t have money sufficient for that donation, and have no other way to get that money. Ever. I don’t care to spend the money I do have on anything else. Then, thinking altruistically, I’ll keep negotiating with Pascal’s Mugger until we agree on an amount for the mugger to return that, if I receive it, is sufficient to make that charitable donation. All I’ve done is establish what amount to get in return from the Mugger before I hand over my wallet cash. Whether the mugger is my only source of extra money, whether there is any other risk in losing the money I do have, and whether I already have enough money to make some difference if I donate are not in question. Notice that some people might object that my choice is irrational. However, the mugger is my only source of money, I don’t have enough money otherwise to do anything that I care about for others, and I’m not considering the consequences to me of losing the money.
In Yudkowsky’s formulation, the Mugger is threatening to harm a bunch of people, but with very low probability. Ok. I’m supposed to arrive at an amount that I would give to help those people threatened with that improbable risk, right? In the thought experiment, I am altruistic. I decide what the probability of the Mugger’s threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger) because no matter how many people that the mugger threatens, the mugger doesn’t have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren’t people better off if I give that money to charity after all?
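To make that arithmetic concrete, here is a minimal sketch in Python of the expected-value comparison I have in mind; the numbers and the function name are purely illustrative, not anything from Yudkowsky’s formulation:

```python
# Illustrative only: if my credence in the mugger's threat shrinks faster than the
# number of people threatened grows (p < 1/N), then the expected number of people
# saved by paying the mugger stays below one, and a donation that reliably helps
# at least one person wins the comparison.
def expected_people_saved_by_paying(n_threatened: int, credence: float) -> float:
    """Expected people saved if the threat is real with probability `credence`."""
    return credence * n_threatened

n = 3_000_000_000        # people the mugger claims to threaten (made-up number)
p = 1 / (10 * n)         # my credence, chosen to decline as the claim grows
print(expected_people_saved_by_paying(n, p))   # 0.1 expected people, less than 1
```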
You wrote,
“I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn’t even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger.”
The threshold of risk you refer to there is the additional selfish one that I referred to in my last comment, where loss of the money in an altruistic effort deprives me of some personal need that the money could have served, an opportunity cost of wagering for more money with the mugger. That risk could be a high threshold of risk even if the monetary amount is low. Let’s say I owe a bookie 5 dollars, and if I don’t repay, they’ll break my legs. Therefore, even though I could give the mugger 5 dollars and, in my estimation, save some lives, I won’t, because the 5 dollars is all I have and I need it to repay the bookie. That personal need to protect myself from the bookie defines that threshold of risk. Or more likely, it’s my rent money, and without it, I’m turned out onto predatory streets. Or it’s my food money for the week, or my retirement money, or something else that pays for something integral to my well-being. That’s when that personal threshold is meaningful.
Many situations could come along offering astronomical altruistic returns, but if taking risks for those returns will incur high personal costs, then I’m not interested in those returns. This is why someone with a limited income or savings typically shouldn’t make bets. It’s also why Effective Altruism’s betting focus makes no sense for bets of sizes that impact a person’s well-being when the bets are lost. I think it’s also why, in the end, EA’s don’t put their money where their mouths are.
EA’s don’t make large bets, or they don’t make bets that risk their well-being. Their “big risks” are not that big, to them. Or they truly have a betting problem, I suppose. It’s just that EA’s claim that betting money clarifies odds because EA’s start worrying about opportunity costs, but does it? I think the amounts involved don’t clarify anything; they’re not important amounts to the people placing bets. What you end up with is a betting culture, where unimportant bets go on, leading at best to limited impact on Bayesian thinking, and at worst to compulsive betting and major personal losses. By the way, Singer’s utilitarian ideal was never to bankrupt people. Actually, it was to accomplish charity cost-effectively, implicitly including personal costs in that calculus (for example, by scaling the % of income that you give to charitable causes according to your income size). Just an aside.
Hmm. Interesting, but I don’t understand the locality problem. I suspect that you think of consequences as non-local, but instead far-flung, thus involving you in weighing interests with greater significance than you would prefer for decisions. Is that the locality problem to you?
What an interesting and fun post! Your analysis goes many directions and I appreciate your investigation of normative, descriptive, and prescriptive ethics.
The repugnant conclusion worries me. As a thought experiment, it seems to contain an uncharitable interpretation of principles of utilitarianism.
-
You use total and average utility to measure increases in individual utility across an existing, constant population. However, those measures, total and average, are not adequate to capture the intuitions people associate with them. Therefore, they should not be used for deciding changes in utility across a population of changing size, or one containing drastic differences in individual utility. For example, there’s no value in increasing total utility by adding additional people, yet doing so drives total utility up even if individual utility is low (see the numeric sketch below).
-
You pursue egalitarianism to raise everyone’s utility up to the same level. Egalitarianism is not an aspiration to lower some people’s well-being while raising others’. Likewise, egalitarianism is not the pursuit of equality of utility at any utility level. Therefore, egalitarianism does not imply an overriding interest in equalizing everyone’s utility. For example, there’s no value in lowering others’ utility to match those with less.
-
You measure utility accumulated by existent people in the present or the future to know the utility for all individuals in a population, and that utility is only relevant to the time period during which those people exist. Those individuals have to exist in order for the measures to apply. Therefore, utilitarianism can be practiced in contexts of arbitrary changes in population, with a caveat: consequences for others of specific changes to population, someone’s birth or death, are relevant to utilitarian calculations. TIP: the repugnant conclusion thought experiment only allows one kind of population change: increase. You could ask yourself whether the thought experiment says anything about the real world or the requirements of living in it.
-
Utility is defined with respect to purposes (needs, reasons, wants) that establish a reference point of accumulation of utility suitable for some purpose. That reference point is always at a finite level of accumulation. Therefore, to assume that utility should be maximized to an unbounded extent is an error, and speaks to a problem with some arguments for transitivity. NOTE: by definition, if there is no finite amount of accumulated utility past which you have an unnecessary amount for your purposes, then it is not utility for you.
The repugnant conclusion does not condemn utilitarianism to disuse, but points 1-4 seem to me to be the principles to treat charitably in showing that utilitarianism leads to inconsistency. I don’t believe that current formulations of the repugnant conclusion are charitable to those principles and the intuitions behind them.
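To put toy numbers on point 1, here is a minimal sketch; the populations and utility levels are invented purely for illustration:

```python
# Illustrative only: adding many extra people at low individual utility drives
# total utility up while dragging average utility down, which is the mismatch
# with intuition that point 1 describes.
population_a = [10] * 10      # 10 people, each at utility 10
population_b = [1] * 1000     # 1000 people, each at utility 1

for name, pop in [("A", population_a), ("B", population_b)]:
    print(name, "total =", sum(pop), "average =", sum(pop) / len(pop))
# A total = 100  average = 10.0
# B total = 1000 average = 1.0   <- higher total, much lower individual utility
```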
-
About steel-manning vs charitably interpreting
The ConcernedEA’s state:
“People with heterodox/‘heretical’ views should be actively selected for when hiring to ensure that teams include people able to play ‘devil’s advocate’ authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view”
I disagree. Ability to accurately evaluate the views of the heterodox minority depends on developing a charitable interpretation (not necessarily a steel-manning) of those views. Furthermore, if the majority cannot or will not develop such a charitable interpretation, then the heretic must put their argument in a form that the majority will accept (for example, using jargon and selectively adopting non-conflicting elements of the majority ideology). This unduly increases the burden on the person with heterodox views.
The difference between a charitably interpreted view and a steel-manned view is that the steel-manned view is strengthened to seem like a stronger argument to the opposing side. Unfortunately, if there are differences in evaluating strength of evidence or relevance of lines of argument (for example, due to differing experiences between the sides), then steel-manning will actually distort the argument. A charitable interpretation only requires that you accurately determine what the person holding the view intends to mean when they communicate it, not that you make the argument seem correct or persuasive to you.
Sometimes I think EA’s mean “charitable interpretation” when they write “steel-manning”. Other times I think that they don’t. So I make the distinction here.
It’s up to the opposing side to charitably interpret any devil’s advocate position or heretical view. While you could benefit from including diverse viewpoints, the burden is on you to interpret them correctly, to gain any value available from them.
Developing charitable interpretation skills
To charitably interpret another’s viewpoint takes Scout Mindset, first of all. With the wrong attitude, you’ll produce the wrong interpretation no matter how well you understand the opposing side. It also takes some pre-existing knowledge of the opposing side’s worldview, typical experiences, and typical communication patterns. That comes from research and communication skills training. Trial-and-error also plays a role: this is about understanding another’s culture, like an anthropologist would. Immersion in another person’s culture can help.
However, I suspect that the demands on EA’s to charitably interpret other people’s arguments are not that extreme. Charitable interpretations are not that hard in the typical domains where you require them. To succeed at including heterodox positions, though, the demands on EA’s empathy, imagination, and communication skills do go up.
About imagination, communication skills, and empathy for charitably interpreting
EA’s have plenty of imagination; that is, they can easily consider all kinds of strange views. It’s a notable strength of the movement, at least in some domains. However, EA’s need training or practice in advanced communication skills and argumentation. They can’t benefit from heterodox views without them. Their idiosyncratic takes on argumentation (adjusting Bayesian probabilities) and communication patterns (Schelling points) fit some narrative about their rationalism or intelligence, I suppose, but they could benefit from long-standing work in communication, critical thinking, and informal logic. As practitioners of rationalism to the degree that mathematics is integral, I would think that EA’s would have first committed their thinking to consistent analysis with easier tools, such as inference structures, setting aside word-smithing for argument analysis. Instead, IBT gives EA’s the excuse not to grapple with the more difficult skills of analyzing argument structures, detailing inference types, and developing critical questions about information gaps present in an argument. EDIT: that’s a generalization, but it is how I see the impact of IBT in practical use among EA’s.
The movement has not developed in any strong way around communication skills specifically, aside from a commitment to truth-seeking and open-mindedness, neither of which is required in order to understand others’ views, though both are still valuable to empathy.
There’s a generalization that “lack of communication skills” is some kind of remedial problem. There are communication skills that fit that category, but those skills are not what I mean.
After several communication studies courses, I learned that communication skills are difficult to develop, that they require setting aside personal opinions and feelings in favor of empathy, and that specific communication techniques require practice. A similar situation exists with interpreting arguments correctly: it takes training in informal logic and plenty of practice. Scout mindset is essential to all this, but not enough on its own.
Actually, Galef’s podcast Rationally Speaking includes plenty of examples of charitable interpretation, accomplished through careful questions and sensitivity to nuance, so there’s some educational material there.
Typically the skills that require practice are the ones that you (and I) intentionally set aside at the precise time that they are essential: when our emotions run high or the situation seems like the wrong context (for example, during a pleasant conversation or when receiving a criticism). Maybe experience helps with that problem, maybe not. It’s a problem that you could address with cognitive aids, when feasible.
Is moral uncertainty important to collective morality?
Ahh, am I right that you see the value of moral uncertainty models as their use in establishing a collective morality given differences in the morality held by individuals?
You state: “Effective Altruist Political Ideology is hardly correct in every detail, but I don’t think it’s a bad sign if a movement broadly agrees on a lot of political issues. Some political policies are harmful! Other policies make things better!”
I identified EA as right-leaning because of lack of EA concern about climate change, as well as an emphasis in other areas (economics, personal finances, corporate regulation, technology development) that matches a right-leaning worldview. However, according to this 2018 survey, EA’s lean left, more than 60%.
There’s some overlap, or really, flexibility, in how lefties in California approach financial and economic issues. Their left-leaning ideology expresses itself in opinions on abortion, racism, and climate change, and less in opinions about taxation, corporate regulation, or technology development. This leads me to conclude that it is not helpful for me to identify EA’s with larger movements when dealing with EA views on specific issues. Better to focus on a specific EA brand of political ideology being developed inside the movement, and describe its formative influences (as the OP does), than to assume a more typical political ideology is present, such as liberal or conservative ideologies.
You state: “However, I think that this subject should be addressed with care. When you’re talking about homogeneity, it’s important to acknowledge effective altruist members of various groups underrepresented in effective altruism. Very few things are more unwelcoming than ‘by the way, people like you don’t exist here.’”
You think that acknowledging the diversity already present in EA is important, and I agree. The ConcernedEA’s don’t intend to insult or isolate any group. They are sincere in wanting to increase diversity in the EA movement, and their statements are to the effect that “The EA movement lacks diversity that would strengthen it provided there were some necessary overlap in values held by all.”
Yeah. I’ll add:
Single-sourcing: Building Modular Documentation by Kurt Ament
Dictionary of Concise Writing by Robert Hartwell Fiske
The Elements of Style by William Strunk Jr.
A Rulebook for Arguments by Anthony Weston
There are more but I’m not finished reading them. I can’t say that I’ve learned what I should from all those books, but I got the right idea, more than once, from them.
effectivealtruism.org suggests that EA values include:
proper prioritization: appreciating scale of impact, and trying for larger scale impact (for example, helping more people)
impartial altruism: giving everyone’s interests equal weight
open truth-seeking: including willingness to make radical changes based on new evidence
collaborative spirit: involving honesty, integrity, and compassion, and paying attention to means, not just ends.
Cargill Corporation lists its values as:
Do the Right Thing
Put People First
Reach Higher
Lockheed-Martin Corporation lists its values as:
Do What’s Right
Respect Others
Perform with Excellence
Shell Global Corporation lists its values as:
Integrity
Honesty
Respect
Short lists seem to be a trend, but longer lists with a different label than “values” appear from other corporations (for example, from Google or General Motors). They all share the quality of being aspirational, but there’s a difference with the longer lists: they seem more closely suited to the specifics of what the corporations do.
Consider Google’s values:
Focus on the user and all else will follow.
It’s best to do one thing really, really well.
Fast is better than slow.
Democracy on the web works.
You don’t need to be at your desk to need an answer.
You can make money without doing evil.
There’s always more information out there.
The need for information crosses all borders.
You can be serious without a suit.
Great just isn’t good enough.
Google values are specific. Their values do more than build their brand.
I would like to suggest that EA values can be lengthy, and should be specific enough to:
identify your unique attributes.
focus your behavior.
reveal your preferred limitations[1].
Having explicit values of that sort:
limit your appeal.
support your integrity.
encourage your honesty.
Values like these focus and narrow your efforts, in addition to building your brand. Shell Global, Lockheed-Martin, and Cargill are just building their brands. The Google Philosophy says more and speaks to their core business model.
All the values listed as part of Effective Altruism appear to overlap with the concerns that you raise. Obviously, you get into specifics.
You offer specific reforms in some areas. For example:
“A certain proportion EA of funds should be allocated by lottery after a longlisting process to filter out the worst/bad-faith proposals*”
“More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent ‘independent researchers’.”
These do not appear obviously appropriate to me. I would want to find out what a longlisting process is, and why employees are a better approach than grant-dependent researchers. A little explanation would be helpful.
However, other reforms do read more like statements of value or truisms to me. For example:
“Work should be judged on its quality...” [rather than its source].
“EAs should be wary of the potential for highly quantitative forms of reasoning to (comparatively easily) justify anything”
It’s a truism that statistics can justify anything as in the Mark Twain saying, “There are three kinds of lies: lies, damned lies, and statistics”.
These reforms might inspire values like:
judge work on its quality alone, not its source
use quantitative reasoning only when appropriate
*You folks put a lot of work into writing this up for EA’s. You’re smart, well-informed, and I think you’re right, where you make specific claims or assert specific values. All I am thinking about here is how to clarify the idea of aligning with values, the values you have, and how to pursue them. *
You wrote that you started with a list of core principles before writing up your original long post? I would like to see that list, if it’s not too late and you still have the list. If you don’t want to offer the list now, maybe later? As a refinement of what you offered here?
Something like the Google Philosophy, short and to the point, will make it clear that you’re being more than reactive to problems, but instead actually have either:
differences in values from orthodox EA’s
differences in what you perceive as achievement of EA values by orthodox EA’s
Here are a few prompts to help define your version of EA values:
-
EA’s emphasize quantitative approaches to charity, as part of maximizing their impact cost-effectively. Quantitative approaches have pros and cons, so how to contextualize them? They don’t work in all cases, but that’s not a bad thing. Maybe EA should only pay attention to contexts where quantitative approaches do work well. Maybe that limits EA flexibility and scope of operations, but also keeps EA integrity, accords with EA beliefs, and focuses EA efforts. You have specific suggestions about IBT and what makes a claim of probabilistic knowledge feasible. Those can be incorporated into a value statement. Will you help EA focus and limit its scope or are you aiming to improve EA flexibility because that’s necessary in every context where EA operates?
-
EA’s emphasize existential risk causes. ConcernedEA’s offer specific suggestions to improve EA research into existential risk. How would you inform EA values about research in general to include what you understand should be the EA approach to existential risk research? You heed concerns about evaluation of cascading and systemic risks. How would those specific concerns inform your values?
-
You have specific concerns about funding arrangements, nepotism, and revolving doors between organizations. How would those concerns inform your values about research quality or charity impact?
-
You have concerns about lack of diversity and its impact on group epistemics. What should be values there?
You can see the difference between brand-building:
ethicality
impactfulness
truth-seeking
and getting specific:
research quality
existential, cascading, and systemic risks
scalable and impactful charity
quantitative and qualitative reasoning
multi-dimensional diversity
epistemic capability
democratized decision-making
That second list is more specific, plausibly hits the wrong notes for some people, and definitely demonstrates particular preferences and beliefs. As it should! Whatever your list looks like, would alignment with its values imply the ideal EA community for you? That’s something you could take another look at, articulating the values behind specific reforms if those are not yet stated or incorporating specific reforms into the details of a value, like:
democratized decision-making: incorporating decision-making at multiple levels within the EA community, through employee polling, yearly community meetings, and engaging charity recipients.
I don’t know whether you like the specific value descriptors I chose there. Perhaps I misinterpreted your values somewhat. You can make your own list. Making decisions in alignment with values is the point of having values. If you don’t like the decisions, the values, or if the decisions don’t reflect the values, the right course is to suggest alterations somewhere, but in the end, you still have a list of values, principles, or a philosophy that you want EA to follow.
[1] As I wrote in a few places in this post, and taking a cue from Google and the Linux philosophy, sometimes doing one thing and doing it well is preferable to offering loads of flexibility. If EA is supposed to be the Swiss Army knife of making change in the world, there are still a lot of organizations out there better suited for some purposes than others; as any user of a Swiss Army knife will attest, they are not ideal for all tasks. Also, your beliefs will inform you about what you do well. Does charity without quantitative metrics inevitably result in waste and corruption? Does use of quantitative metrics limit the applicability of EA efforts to specific types of charity work (for example, outreach campaigns)? Do EA quantitative tools limit the value of its work in existential risk? Can they be expanded with better quantitative tools (or qualitative ones)? Maybe EA is self-limiting because of its preferred worldview, beliefs, and tools. Therefore, it has preferred limitations. Which is OK, even good.
Hm, ok. Couldn’t Pascal’s mugger make a claim to actually being God (with some small probability or very weakly plausibly) and upset the discussion? Consider basing dogmatic rejection on something other than the potential quality of claims from the person whose claims you reject. For example, try a heuristic or psychological analysis. You could dogmatically believe that claims of godliness and accurate probabilism are typical expressions of delusions of grandeur.
My pursuit of giving to charity is not unbounded, because I don’t perceive an unbounded need. If the charity were meant to drive unbounded increase in the numbers of those receiving charity, that would be a special case, and not one that I would sign up for. But putting aside truly infinite growth of perceived need for the value returned by the wager, in all wagers of this sort that anyone could undertake, they establish a needed level of utility, and compare the risks (to whatever stakeholders) of taking the wager at that utility level against the risks of doing nothing or wagering for less than the required level.
In the case of ethics, you could add an additional bound on personal risk that you would endure despite the full need of those who could receive your charity. In other words, there’s only so much risk you would take on behalf of others. How you decide that should be up to you. You could want to help a certain number of people, or reach a specific milestone towards a larger goal, or meet a specific need for everyone, or spend a specific amount of money, or what have you, and recognize that level of charity as worth the risks involved to you of acquiring the corresponding utility. You just have to figure it out beforehand.
If, by living an extra 100 years, I could accomplish something significant on behalf of others (but not everything that I wanted), yet I would not personally enjoy that time, then that subjective judgment makes living past 100 years unattractive, if I’m deciding solely based on my charitable intent. I would not, in fact, live an extra 100 years for such a purpose without meeting additional criteria, but I offer it for example’s sake.
I think identifying common modes of inference (e.g., deductive, inductive, analogy) can be helpful, if argument analysis takes place. Retrodiction is used to describe a stage of retroductive (abductive) reasoning, and so has value outside a Bayesian analysis.
If there’s ever an equivalent in wider language for what you’re discussing here (for example, “important premise” for “crux”), consider using the more common form rather than specialized jargon. For example, I find EA use of “counterfactual” confusing in what I think are discussions of necessary conditions, whereas counterfactual statements are, to me, false statements, relevant in a discussion of hypothetical events that do not occur. Many times I wanted to discuss counterfactuals but worried that the conversation with EA’s would lead to misunderstandings, as if my analysis were to explore necessary conditions for some action or consequence, when that was not the intent.
The “typical mind fallacy” is interesting. On the one hand, I think some inferences taking the form of shared values or experience are fallacious. On the other hand, some typical inferences about similarities between people are reliable and we depend on them. For example, that people dislike insults. A common word starting with ‘n’ has special cases, but is mostly taken as a deeply unwelcome insult; our default is to treat that knowledge as true. We rely on default (defeasible) reasoning when we employ those inferences, and add nuance or admit special cases for their exceptions. In the social world, the “typical mind fallacy” has some strong caveats.
I’m not sure I’m understanding. It looks like at some K, you arbitrarily decide that the probability is zero, sooner than the table that the paper suggests. So, in the thought experiment, God decides what the probability is, but you decide that at some K, the probability is zero, even though the table lists the N at which the probability is zero where N > K. Is that correct?
Another way to look at this problem is with respect to whether what is gained through accepting a wager for a specific value is of value to you. The thought experiment assumes that you can gain very large amounts and that, no matter how high the accumulated value at N, the end of the game, you still have a use for the amount that you could, in principle, gain.
However, for any valuable thing I can think of (years of life, money, puppies, cars), there’s some sweet spot, with respect to me in particular. I could desire 100 years of life but not 1000, or 10 cars but not 100, or fifty million dollars but not five hundred million dollars, or one puppy but not ten. Accordingly, then, I know how much value to try to gain.
Assuming some pre-existing need, want, or “sweet spot”, then, I can look at the value at i, where the value at i meets my need. If N < i, the question becomes whether I still gain if I get less value than I want. If N > i, then I know to take a risk up to K, where K = i and K < N. If N = i, then I know to play the game (God’s game) to the end.
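A minimal sketch of that stopping rule, assuming a game that pays a known, growing value at each round; the payoff schedule and the names (need_i, value_at, and so on) are my own illustration, not part of the thought experiment:

```python
# Illustrative only: keep playing while the accumulated value is still below my
# pre-existing need i, and never play past round N.
def rounds_to_play(need_i: float, total_rounds_n: int, value_at) -> int:
    """Return K, the round at which to stop: the first round whose value meets
    the need, capped at N if the need is never met (the N < i case)."""
    for k in range(1, total_rounds_n + 1):
        if value_at(k) >= need_i:
            return k              # N >= i: stop at K, where the need is met
    return total_rounds_n         # N < i: play to the end and accept less

# Example: the value doubles each round, I need 1000, and the game runs 20 rounds.
print(rounds_to_play(1000, 20, value_at=lambda k: 2 ** k))   # -> 10
```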
In real life, people don’t benefit past some accumulation of valuable something, and what matters is deciding the level past which an accumulation is wasteful or counterproductive. One hundred cars would be too much trouble, even one puppy is a lot of puppies when you have to clean up puppy poop, and why not $500,000,000? Well, that’s just more than I need, and would be more of a burden than a help. Put differently, if I really needed big sums, I’d take risks for up to that amount, but no higher. When would I need such big sums and take the accompanying big risks? Maybe if I owed a bookie $50,000,000 and the bookie had very unpleasant collectors?
Do you have specific concerns about how the capital is spent? That is, are you dissatisfied and looking to address concerns that you have or to solve problems that you have identified?
I’m wondering about any overlap between your concerns and the OP’s.
I’d be glad for an answer or just a link to something written, if you have time.
Well, thank you for the helpful follow-up. I went ahead and bought the book, and will read it. I have browsed three articles and read two through.
The first article was “Animal advocacy’s Stockholm Syndrome”, written by several authors. The tone of that article is positive toward EA, starting off with “It’s time for Effective Altruists in the farmed animal protection movement to expand their strategic imagination, their imagination of what is possible, and their imagination of what counts as effective. … Effective Altruist support has brought new respect and tractability to the neglected plight of farmed animals, and we … are grateful. We write this essay as allies.”
And then they write about difficulties in getting metrics thinking to apply to systemic change efforts in animal advocacy, and yes they do mention EA homogeneity as reason to expand diversity within EA and so develop new perspectives within it. I expect the calls for inclusiveness and diversity are a theme throughout the book.
The second article that I read was “How ‘alternative proteins’ create a private solution to a public problem” by Michele Simon, a veteran of the vegan food and animal rights movement.
Simon suggests that increasing investment in vegetarian meat replacements results in increasing profits for big food companies but not changes in the consumption behavior of existing meat eaters. Instead, the vegetarian meat replacements attract vegetarians to brands or restaurant chains. The article mentions that vegetarian options on a restaurant menu handle the “veto vote”, that one person in a group who can’t eat meat. Simon claims that offering a vegetarian option can result in more meat consumption at a restaurant like McDonalds as opposed to somewhere else serving pasta or salad or another option with less meat. However, I suspect that anyone willing to eat at McDonalds will eat a comparable meat meal at another restaurant (for example, Burger King) if a veto vote counts. Bringing vegetarians into their chains lets the chains sell more food overall.
Simon makes the point that alternative meats are being trialed, and if their sales drop, companies stop selling them. She lists a few examples from fast food chains to prove the point. Alternative proteins are not very popular when trialed, and initial enthusiasm and sales drop.
I interpret Simon to think that big food is interested in keeping up meat sales along with adding other products, and that there is no replacement of meat with non-meat taking place. Instead, food corporations are trying to appeal to vegetarians with foods that provide similar taste experiences but without the meat ingredients. That would explain why Simon thinks that lauding these companies for their inclusion of non-meat items really misses what the companies are trying to do. Basically, all the meat replacements do is give vegetarians a seat at the same table. The meat-eaters stick with the meat versions.
If Simon is right, then a switch to meat alternatives has to happen through consumer interest, from vegetarians or from meat eaters. It has to be driven by new demand, rather than new supply.
The article discusses GFI, cultured meat, and how the food industry subverts the food systems perspective, “where food comes from and how it is grown matters.” Simon hints that there’s an ontology useful for understanding food systems that the GFI marketing literature doesn’t use.
Both the articles I read put out this “EA’s are white guys” thing. I’m not offended, because I’m not an EA, and even if I were, maybe I should just agree? I am a white guy. There’s some argument for increasing diversity in your community, the ConcernedEA’s make a strong case in their recent sequence.
Where I think both of the articles I read are right is in claiming that EA does not offer a political or economic or regulatory perspective that puts its activism in opposition to larger business interests in food or animal farming.
I haven’t explored the whole issue of legal precedents that the book addresses yet.
Thank you for your insights in all these areas, if you have more to add, please do. I appreciate the insider conversation.
Thank you for the chapter pointers.
You mention obvious reasons. The reasons are not obvious to me, because I am ignorant about this topic. Do you mean that these critics are being self-serving and that some animal advocacy orgs lost funding for other reasons than EA competition or influence?
The book’s introduction proposes:
1. sanctuary X lost funding because of EA competition and other EA influence.
2. legal cases to free animals lose some impetus because, without a sanctuary for a freed animal, an abused animal could suffer a worse fate than their current abuse.
3. (me:) legal cases create precedents that work at a larger scale later on, successful cases build a movement toward better treatment of animals, and sanctuaries contribute indirectly to that result.
4. EA competition results in lost legal cases and reduced precedents for animal legal rights and positive treatment standards.
Premise 1 is what you think is false, if I understand you correctly, and substitution of another premise, such as:
Sanctuary X lost funding because of poor fund-raising approaches or a drop in animal advocacy funding overall.
could be an alternative.
A plausible claim in the book could be that EA has a very strong reputation in animal advocacy, and is changing how animal advocacy is done by affecting how pre-EA orgs do their work (for example, their metrics). Is that something happening behind the scenes?
I’m not intending to find out whether EA is bad or not, more just how strong of a trend is the EA mode of thought (efficacy, metrics, large-scale impact), whatever it is currently, in deciding what’s good for animals broadly. There’ll always be some negative effects of any change in thinking, and some unintended consequences. However, I suspect that you, and other EA’s, think that EA does not have a strong influence in the animal advocacy field overall, despite what these authors in the book claim. Am I right about that?
[Question] How prominent is EA in animal advocacy?
I wrote:
“You need to rock selfishness well just to do charity well (that’s my hunch).”
Selfishness, so designated, is neither a public health issue nor a private mental health issue, but it does stand in contrast to altruism. To the extent that society allows your actualization of something you could call selfishness, that seems to be your option to manifest, and by modern standards, without judgement of your selfishness. Your altruism might be judged, but not your selfishness, like, “Oh, that’s some effective selfishness” vs “Oh, that’s a poser’s selfishness right there” or “That selfishness there is a waste of money”.
Everyone thinks they understand selfishness, but there don’t seem to be many theories of selfishness, not competing theories, nor ones tested for coherence, nor puzzles of selfishness. You spend a great deal of time on debates about ethics, quantifying altruism, etc, but somehow selfishness is too well-understood to bother?
The only argument over selfishness that has come up here is over self-care with money. Should you spend your money on a restaurant meal, or on charity? There was plenty of “Oh, take care of yourself, you deserve it” stuff going around, “Don’t be guilty, that’s not helpful”, but no theory of how self-interest works. It all seems relegated to an ethereal realm of psychological forces that anyone wanting to help you must acknowledge.
Your feelings of guilt, and so on, are all tentatively taken as subjectively impactful and necessarily relevant just by the fact of your having them. If they’re there, they matter. There’s pop psychology, methods of various therapy schools, and different kinds of talk, really, or maybe drugs, if you’re into psychiatric cures, but nothing too academic or well thought out as far as what self-interest is, how to perform it effectively, how or whether to measure it, and its proper role in your life. I can’t just look at the problem, so described, and say, “Oh, well, you’re not using a helpful selfishness theory to make your decisions there, you need to...” and be sure I’m accomplishing anything positive for you. I might come up with some clever reframe or shift your attention successfully, but that says nothing about a normative standard of selfishness that I could advocate.
I understand rationalization and being self-serving, but only in well-defined domains where I’ve seen it before, in what some people call “patterns of behavior.” Vices do create pathological patterns of behavior, and ending them is clarifying and helpful to many self-interested efforts. A hundred-year effort to study selfishness is about more than vices. Or, well, at least on the surface, depending on what researchers discover. I have my own suspicions.
Anyway, we don’t have the shared vocabulary to discuss vices well. What do you think I mean by them? Is Adderall a vice? Lite beer? Using pornography? The occasional cigarette? Donuts? Let’s say I have a vice or two, and indulge them regularly, and other people support me in doing that, but we end up doing stuff together that I don’t really like, aside from the vice. Is it correct then to say that I’m not serving myself by keeping my vice going? Or do we just call that a reframe because somebody’s trying to manipulate me into giving up my habits? What if the vice gets me through a workday?
Well, there are no theories of self-interest that people study in school to help us understand those contexts, or if there are, they don’t get much attention. I don’t mean theories from psychology that tend to fail in practice. It’s a century’s effort to develop and distribute the knowledge to fill that need for good theories.
Galef took steps to understand selfish behavior. She decided that epistemic rationality served humanity and individuals, and decided to argue for it. That took some evaluation of behavior in an environment. It motivated pursuit of rationality in a particular way.
Interestingly, her tests, such as the selective critic test or the double standard test, reveal information that shifts subjective experience. Why do we need those tests (not, do we need them, but why do we need them)? What can we do about the contexts that seem to require them? Right now, your community’s culture encourages an appetite for risk, particularly financial risk, that looks like a vice. Vices seem to attract more vices.
You’re talking about epistemics. A lot of lessons in decision-making are culturally inherited. For various reasons, modern society could lose that inheritance. Part of that inheritance is a common-sense understanding of vices. Without that common-sense there is only a naivete that could mean our extinction. Or that’s how I see it.
For example, in 2020, one of the US’s most popular talk show hosts (Stephen Colbert) encouraged viewers to drink, and my governor (Gavin Newsom) gave a speech about loosening rules for food deliveries so that we could all get our wine delivered to our doors while we were in lockdown. I’m not part of the Christian right, but I think they still have the culture to understand that kind of behavior as showing decadence and inappropriateness. I would hope so. Overall, though, my country, America, didn’t see it that way. Not when, at least in people’s minds, there was an existential threat present. A good time to drink, stuck at home, that’s apparently what people thought.
I’m really not interested in making people have a less fun time. That is not my point at all.
I’ve also been unsuccessful in persuading people to act in their own self-interest. I already know it doesn’t work.
If you don’t believe in “vices”, you don’t believe in them. That’s fine. My point here was that it’s not safe to ignore them, and I would like to add, there’s nothing stronger than a vice to make sure you practice self-serving rationalization.
If, for the next 40-60 years, humanity faces a drawn out, painful coping with increasing harms from climate change, as I believe, and our hope for policy and recommendations is communities like yours, and what we get is depressed panicky people indulging whatever vices they can and becoming corrupt as f**k? Well, things will go badly.
I understand, Henrik. Thanks for your reply.
Forum karma
The karma system works similarly to highlight information, but there are these edge cases. Posts appear and disappear based on karma from first page views. New comments that get negative karma are not listed in the new comments on the homepage, by default.
This forum in relation to the academic peer review system
The peer review system in scientific research is truly different from a forum for second-tier researchers doing summaries, arguments, or opinions. In the forum, access to disparate opinions and perspectives should be encouraged.
The value of disparate information and participants here
Inside the content offered here are recommendations for new information. I evaluate that information according to more conventional critical thinking criteria: peer-reviewed, established science, good methodology. Disparate perspectives among researchers here let me gain access to multiple points of view found in academic literature and fields of study. For example, this forum helped me research a foresight conflict between climate economists and earth scientists that is long-standing (as well as related topics in climate modeling and scenario development).
NOTE: Peer-reviewed information might have problems as well, but not ones to fix with a voting system relying on arbitrary participants.
Forum perspectives should not converge without rigorous argument
Another system that bubbles up what I’d like to read? OK, but will it filter out divergence, unpopular opinions, evidence that a person has a unique background or point of view, or a new source of information that contradicts current information? Will your system make it harder to trawl through other researchers’ academic sources by making it less likely that forum readers ever read those researchers’ posts?
In this environment, among folks who go through summaries, arguments, and opinions for whatever reason, once an information trend appears, if it’s different and valid, it lets me course correct.
The trend could signal something that needs changing, like “Here’s new info that passes muster! Do something different now!” or signal that there’s a large information gap, like “Whoa, this whole conversation is different! I either seem to disagree with all of the conclusions or not understand them at all. What’s going on? What am I missing?”
A learning environment
Forum participants implicitly encourage me to explore Bayesianism and superforecasting. Given what I suspect are superforecasting problems (its aggregation algorithms and bias in them and in forecast confirmations), I would be loath to explore it otherwise. However, obviously smart people continue to assert its value in a way that I digest as a forum participant. My minority opinion of superforecasting actually leads to me learning more about it because I participate in conversation here. However, if I were filtered out in my minority views so strongly that no one ever conversed with me at all, I could just blog about how EA folks are really wrong and move on. Not the thing to do, but do you see why tolerance of my opinions here matters? It serves both sides.
From my perspective, it takes patience to study climate economists, superforecasting, bayesian inference, and probabilism. Meanwhile, you folks, with different and maybe better knowledge than mine on these topics, but a different perspective, provide that learning environment. If there can be reciprocation, that’s good, EA folks deserve helpful outside perspectives.
My experience as other people’s epistemic filter
People ignore my recommendations, or the intent behind them. They either don’t read what I recommend, or much more rarely, read it but dismiss it without any discussion. If those people use anonymized voting as their augmentation approach, then I don’t want to be their filter. They need less highlighting of information that they want to find, not more.
Furthermore, at this level of processing information, secondary or tertiary sources, posts already act like filters. Ranking the filtering to decide whether to even read it is a bit much. I wouldn’t want to attempt to provide that service.
Conclusion
ChatGPT, and this new focus on conversational interfaces, makes it possible that forum participants in the future will be AI, not people. If so, they could be productive participants, rather than spam bots.
Meanwhile, the forum could get rid of the karma system altogether, or add configuration that lets a user turn off karma voting and ranking. That would be a pleasant alternative for someone like me, who rarely gets much karma anyway. That would offer even less temptation to focus on popular topics or feign popular perspectives.
Right, the first class are the use cases that the OP put forward, and vote brigading is something that the admins here handle.
The second class is more what I was asking about, so thank you for explaining why you would want a conversation bubble. I think if you’re going to go that far for that reason, you could consider an entrance quiz. Then people who want to “join the conversation” could take the quiz, or read a recommended reading list and then take the quiz, to gain entrance to your bubble.
I don’t know how aversive people would find that, but if lack of technical knowledge were a true issue, that would be one approach to handling it while still widening the group of conversation participants.
Can you explain with an example when a bubble would be a desirable outcome?
Yes, I took a look at your discussion with MichaelStJules. There is a difference in reliability between:
probability that you assign to the Mugger’s threat
probability that the Mugger or a third party assigns to the Mugger’s threat
Although I’m not a fan of subjective probabilities, that could be because I don’t make a lot of wagers.
There are other ways to qualify or quantify differences in expectation of perceived outcomes before they happen. One way is by degree or quality of match of a prototypical situation to the current context. A prototypical situation has one outcome. The current context could allow multiple outcomes, each matching a different prototypical situation. How do I decide which situation is the “best” match?
a fuzzy matching: a percentage quantity showing degree of match between prototype and actual situation. This seems the least intuitive to me. The conflation of multiple types and strengths of evidence (of match) into a single numeric system (for example, that bit of evidence is worth 5%, that is worth 10%) is hard to justify.
a Hamming distance: each binary digit is a yes/no answer to a question. The questions could be partitioned, the partitions ranked, and then a Hamming distance calculated for each ranked partition between the answers describing the situation in question and the answers identifying a prototypical situation.
a decision tree: each situation could be checked for specific values of attributes of the actual context, yielding a final “matches prototypical situation X” or “doesn’t match prototypical situation X” along different paths of the tree. The decision tree is most intuitive to me, and does not involve any sums.
In this case, the context is one where you decide whether to give any money to the mugger, and the prototypical context is a payment for services or a bribe. If it were me, the fact that the mugger is a mugger on the street yields the belief “don’t give” because, even if I gave them the money, they’d not do whatever it is that they promise anyway. That information would appear in a decision tree, somewhere near the top, as “person asking for money is a criminal? (Y/N)”.
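Here is a minimal sketch of that kind of decision tree; the attribute names and outcomes are my own illustrative assumptions, not anything established in the discussion above:

```python
# Illustrative only: check yes/no attributes of the actual situation from the top
# down until a prototypical situation is matched, yielding a give/don't-give belief.
def match_prototype(situation: dict) -> str:
    if situation.get("asker_is_criminal"):           # near the top, as described
        return "don't give: the promise is unlikely to be kept"
    if situation.get("service_is_verifiable"):
        return "matches 'payment for services': consider giving"
    return "no prototype matched: withhold for now"

print(match_prototype({"asker_is_criminal": True}))
print(match_prototype({"asker_is_criminal": False, "service_is_verifiable": True}))
```

No sums are involved, which matches why the tree feels most intuitive to me.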