Research related to the research OP mentioned found that increases in carbon emissions also come with things that decrease (and increase) suffering in other ways, which has complicated the analysis of whether it results in a net increase or decrease in suffering. https://reducing-suffering.org/climate-change-and-wild-animals/ https://reducing-suffering.org/effects-climate-change-terrestrial-net-primary-productivity/
Yes, a similar dynamic (siding with one faction to avoid persecution by the other) might have existed in Germany in the 1920s/1930s (e.g. I imagine industrialists preferred the Nazis to the Communists). I agree it was not a major factor in the rise of Nazi Germany—which was one result of the political violence—and that there are differences.
I would add that what seems necessary is shunning people who say vile things with ill intent. This is what separates the case of Hanania from others. In most cases, punishing well-intentioned people is counterproductive. It drives them closer to those with ill intent, and suggests to well-intentioned bystanders that they need to associate with the other sort of extremist to avoid being persecuted. I'm not an expert on history, but from my limited knowledge a similar dynamic might have existed in Germany in the 1920s/1930s: people were forced to choose between the far left and the far right.
Given his past behavior, I think it’s more likely than not that you’re right about him. Even someone more skeptical should acknowledge that the views he expressed in the past and the views he now expresses likely stem from the same malevolent attitudes.
But about far-left politics being 'not racist', I think it's fair to say that far-left politics discriminates in favor of or against individuals on the basis of race. It's usually not the kind of malevolent racial discrimination of the far right—which absolutely needs to be condemned and eliminated by society. The far left appears primarily motivated by benevolence towards racial groups that are perceived to be, or in fact are, disadvantaged, but it is still racially discriminatory (and it sometimes turns into the hateful type of discrimination). If we want to treat individuals on their own merits, and not on the basis of race, that sort of discrimination must also be condemned.
I'm skeptical about the value of slowing down leading AI labs, primarily because it likely reduces the influence of EA values in shaping the deployment of AGI/ASI. Anthropic is the best example of a lab with people who share these values, but I'd imagine that EAs also have more overlap with the staff at OpenAI and DeepMind than with the actors who would catch up because of a slowdown. And for what it's worth, the labs were founded with the stated goal of benefiting humanity before it became far more apparent that current paradigms have a high chance of resulting in AGI with the potential to grant profit/power to their human operators and investors.
As others have noted, people and powerful groups outside of this community and surrounding communities don't seem to be interested in consequentialist, impartial, altruistic priorities like creating a positive long-term future for humanity, but are instead more self-interested. Personally, I'm more downside-focused, but I think it's relevant to most EAs that other parties wouldn't be as willing to dedicate a large amount of resources towards creating large amounts of happiness for others; because of that, a reduction in the influence of EA values would result in a considerable loss of expected future value.
Thank you for bringing attention to fetal suffering—especially the possibility of suffering in fetuses younger than 24 weeks.
Others have already pointed out that the intervention of applying anaesthetics to fetuses has issues of political tractability, but I think there's also a dynamic that could cause efforts to expand the moral circle to fetuses and/or other "less complex" entities to backfire.
Most people haven't spent time thinking about whether simpler entities can suffer and haven't formed an opinion, so they seem particularly susceptible to first impressions. The suggestion that less developed fetuses can suffer would likely imply to them that early abortions are wrong. People who don't like this normative implication might decide (probably unjustifiably) to believe that less developed fetuses, and by extension other "less complex" entities, cannot suffer, to absolve themselves of acting in ways that might increase fetal suffering: "early abortions are not wrong → early fetuses cannot suffer → anything of 'lower complexity' cannot suffer". On the other hand, first introducing the ideas abstractly and suggesting that we should care about simple entities "in general" sidesteps this, and could lead people to eventually care about fetal suffering in an admittedly indirect, but less politically charged, way.
So between two strategies, (1) advocate for lower complexity entities in general, and (2) advocate for less developed fetuses, those concerned with moral circle expansion to fetuses and/or other simpler entities should probably focus on the first strategy.
(Personally, I'd prefer if people accepted that they act in ways that might increase suffering, while simultaneously aiming to decrease suffering.)
I think there's a connection that results from how both theories dissolve the concept of qualia. Eliminativism does this by saying qualia is actually physics, and panpsychism (in its most expansive forms) does this by saying all physics has qualia. Both theories effectively make the "suffering" label less exclusive, so more processes would have a higher probability of being correctly associated with that label (it's unclear whether probability is the right word in the case of eliminativism). With panpsychism, processes are conscious and the only remaining question is whether they also suffer. With eliminativism, the distinction between "what we usually take to be suffering processes" and "other processes" is blurred, and we're more permissive of some members of the latter being considered the former—or, with a probabilities framing, less certain that some members of the latter are not the former. (Although, I guess, the uncertainty could alternatively go the other way, and we might become more skeptical of processes being suffering processes. But most people already put zero or very little weight on particles suffering, so it seems like the uncertainty/blurred distinction should increase it?)
This seems similar to how empty individualism and open individualism are related. They both dissolve the common-sense concept of personal identity featured in closed individualism. Personal identity ceases to be “special”: open individualism merges everyone and empty individualism atomizes everyone into individual-moments.
In another article, Tomasik offers an analogy with how the concept of élan vital was dissolved. As I understand it, the concept was eliminated with advances in knowledge of biochemistry. But alternatively, people could also have said "actually, everything is pretty similar to stuff we consider alive—let's just say everything falls under the term 'alive' then" (without making any unscientific claims; it just means expanding the definition of "alive" to include everything), and élan vital would be similarly dissolved. The final result seems similar: the concept no longer distinguishes processes from one another in a way previously thought to be meaningful.
A while back I wrote that I agreed with the observation that some of (new wave) EA’s norms seem similar to those of the religion imposed on me and others as children. My current thinking is that there may actually be a link connecting the culture in parts of Protestantism and some of the (progressive) norms EA adopts, along with an atypical origin that probably deserves more scrutiny. The “link” part might be more apparent to people who’ve noticed a “god-shaped hole” in the West that makes some secular movements resemble certain parts of religions. The “origin” part might be less apparent but it’s been discussed by Scott Alexander before. So, this theory isn’t all that original.
Essentially: Puritans, as one of four major cultures originating from the UK, exerted huge founder effects on America, which influenced both parts of America itself and other countries, for better or worse → Protestant culture gradually changed to become more socially judgmental in some ways, etc. → More recently, people increasingly reject the existence of a God but keep elements of the culture of that religion → EA now draws heavily from nth-generation ex-Protestants/Protestant-adjacents, who also tend to be more active in trying to change society (other people's actions) and approach it with some of the same inherited attitudes
That is one causal chain, but a tree might show more causes and effects. For example, the Puritan founder effects probably also influenced modern academia (in part spearheaded by a few institutions in New England), which, again, EA heavily draws from. Other secular institutions might also be influenced by osmosis, and produce downstream effects.
It seems difficult to believe these attitudes just disappeared without affecting other movements, culture, and society. The Puritan legacy also seems to have a track record of being quite influential.
Some relevant numbers in this article: https://reducing-suffering.org/insect-suffering-silk-shellac-carmine-insect-products/
From the study, it looks like participants were given a prompt and asked to "free-list" instead of checking boxes, so the results might be more indicative of what's actually on people's minds.
The immoral behaviors prompt being:
The aim of this study is to learn which actions or behaviors are considered immoral. Please provide a list of actions and behaviors which, in your opinion, are immoral. Please list at least five examples. There are no correct answers, we are just interested in your opinion.
My impression is that the differences between the American and Chinese lists (with the Lithuanian list somewhat in between) appear to be a function of differences in the degree of societal order (e.g., crime rates, free speech), cultural differences (e.g., the extent of influence of Anglo-American progressivism, the purity norms of parts of Christianity, traditional cultures, and Confucianism), and demographics (e.g., topics like racism/discrimination that might arise in contexts that are ethnically diverse rather than homogeneous).
Oh, I see. Do you know if she is ok with eating lamb/mutton/goat? I suspect there are also grazing effects (that might reduce wild-animal suffering overall) but I don’t know whether they are as significant. Maybe @Brian_Tomasik knows?
Beef consumption directly costs less suffering/kg because of the amount of meat provided per cow. It also plausibly reduces wild-animal suffering by taking up land.
Not sure why your question was downvoted.
Brian Tomasik has written a few articles on this, including https://reducing-suffering.org/wild-caught-fishing-affects-wild-animal-suffering/
Overall, he thinks it's unclear but urges erring on the side of caution:
“This piece surveys reasons why the harvesting of wild fish might reduce as well as increase the suffering of oceanic creatures. The net impact is extremely unclear. (...) That said, I would probably err on the side of not eating fish, especially because wild-catch fishing may increase the amount of fish farming in the future.”
Regarding the TCS PhD, is it possible to work on it remotely from London?
Another relevant article on “machine psychology” https://arxiv.org/abs/2303.13988 (interestingly, it’s by a co-author of Peter Singer’s first AI paper)
You seem to have written against proposing norms in the past. So apologies for my mistake and I’m glad that’s not your intention.
To be clear, I think we should be free to write as we wish. Regardless, it still seems to me that voicing support for an already quite popular position on restricting expression comes with the risk of strengthening associated norms and bringing about the multiple downsides I mentioned.
Among the downsides, yes, the worry that strengthening strong norms dealing with ‘offensive’ expression can lead to unfair punishments. This is not a baseless fear. There are historical examples of norms on restricting expression leading to unfair punishments; strong religious and political norms have allowed religious inquisitors and political regimes to suppress dissenting voices.
I don’t think EA is near the worst forms of it. In my previous comment, I was only pointing to a worrying trend towards that direction. We may (hopefully) never arrive at the destination. But along the way, there are more mild excesses. There have been a few instances where, I believe, the prevailing culture has resulted in disproportionate punishment either directly from the community or indirectly from external entities whose actions were, in part, enabled by the community’s behavior. I probably won’t discuss this too publicly but if necessary we can continue elsewhere.
It seems that you (correct me if I'm wrong), along with many who agree with you, are looking to further encourage a norm within this domain, on the basis of at least one example (the one from the blog post) that challenged it.
This might benefit some individuals by reducing their emotional distress. But strengthening such a norm that already seems strong/largely uncontroversial/to a large extent popular in the context of this community, especially one within this domain, makes me concerned in several ways:
- Norms like these that target expression considered offensive often evolve into, or come in the form of, restrictions that require enforcement. In these cases, enforcement often results in:
  - "Assholes"/"bad people" (who may much later even be labeled "criminals" through sufficiently gradual changes) enduring excessive punishments in place of what could have been more proportionate responses. Keeping them outside of people's moral circles, and making it low status to defend them, makes this all too easy.
  - Well-meaning people being physically or materially (hence also emotionally) punished for honest mistakes. This may happen often, as it's easy for humans to cause accidental emotional harm.
  - Enforcement can indeed be more directed, but this is not something we can easily control. Even if it is controlled locally, it can go out of control elsewhere.
- Individuals who are sociopolitically savvy and manipulative may exploit their environment's aversion to relatively minor issues to their advantage. This allows them to appear virtuous without making substantial contributions or sacrifices. At best, this is inefficient. At worst, to say the least, it's dangerous.
- Restrictions in one domain often find their way into another. In particular, it's not difficult to impose restrictions that align with illegitimate authority, or with power gained through intimidation.
  - This can lead people to comfortably dismiss individuals who raise valid but uncomfortable concerns by labeling them as mere "assholes". To risk a controversial, but probably illuminating, example: people often unfairly dismiss Ayaan Hirsi Ali as an "Islamophobe".
  - This burdens the rest of society with those other restrictions and their consequences, which can range from a mere annoyance to something very bad.
I’d be less worried (and possibly find it good) if such a norm was strengthened in a context where it isn’t strong, which gives us more indication that the changes are net positive. However, it’s evident that a large number of individuals here already endorse some version of this norm, and it is quite influential. Enthusiasm could easily become excessive. I sincerely doubt most people intend to bring about draconian restrictions/punishments (on this or something else), but those consequences can gradually appear despite that.
FWIW, Brian Tomasik does a fuzzies/utilons split thing too. One justification is that it helps avoid cognitive dissonance between near-term causes and, in his mind, more effective longtermist causes.
My position, in contrast, is that I acknowledge the epistemic force of far-future arguments but maintain some commitment to short-term helping as an intrinsic spiritual impulse. Along the lines of Occam’s imaginary razor, this allows me to avoid distorting my beliefs about the far-future question based on emotional pulls to stop torture-level suffering in the present. In the face of emotion-based cognitive dissonance, it’s often better to change your values than to change your beliefs.
It might be overly confusing to call it "changing [my ideal] values". It's more that I have preferences for both: some seem like ones I would ideally like to keep (minimizing suffering in expectation), while others are ones that, as a human, for better or worse, I simply have (drives to reduce suffering in front of me, sticking to certain principles...).
If the price of a split in donations/personal focus results in me becoming more effective at the far-future stuff that I think is more important for utilons, in a way that makes those utilons go up, then that seems worth it.
Yeah, in a scenario with “nation-controlled” AGI, it’s hard to see people from the non-victor sides not ending up (at least) as second-class citizens—for a long time. The fear/lack of guarantee of not ending up like this makes cooperation on safety more difficult, and the fear also kind of makes sense? Great if governance people manage to find a way to alleviate that fear—if it’s even possible. Heck, even allies of the leading state might be worried—doesn’t feel too good to end up as a vassal state. (Added later (2023-06-02): It may be a question that comes up as AGI discussions become mainstream.)
I wouldn't rule out both Americans and Chinese people outside of their respective allied territories being caught in the crossfire of a US-China AI race.
Political polarization on both sides in the US is also very scary.
Some people make the argument that the difference in suffering between a worst-case scenario (s-risk) and a business-as-usual scenario is likely much larger than the difference in suffering between a business-as-usual scenario and a future without humans. This suggests focusing on ways to reduce s-risks rather than increasing extinction risk.
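The magnitude argument can be made concrete with a toy expected-value sketch. All numbers below are purely hypothetical illustrations (the suffering units and probabilities are my own made-up inputs, not estimates from any source); the point is only that when the worst-case outcome is orders of magnitude worse than business-as-usual, shifting probability mass away from it dominates shifting mass from business-as-usual to extinction.

```python
# Toy expected-suffering comparison. All values are hypothetical,
# chosen only to illustrate the magnitude argument in the text.
S_EXTINCTION = 0          # a future without humans: ~no human-caused suffering
S_BUSINESS_AS_USUAL = 1   # baseline suffering (arbitrary units)
S_WORST_CASE = 1_000      # s-risk outcome assumed orders of magnitude worse

def expected_suffering(p):
    """Expected suffering given scenario probabilities p."""
    return (p["extinction"] * S_EXTINCTION
            + p["business_as_usual"] * S_BUSINESS_AS_USUAL
            + p["s_risk"] * S_WORST_CASE)

# Hypothetical baseline probabilities of each scenario.
baseline_p = {"extinction": 0.10, "business_as_usual": 0.88, "s_risk": 0.02}
baseline = expected_suffering(baseline_p)

# Intervention A: move 1 percentage point from s-risk to business-as-usual.
a = {"extinction": 0.10, "business_as_usual": 0.89, "s_risk": 0.01}
# Intervention B: move 1 percentage point from business-as-usual to extinction.
b = {"extinction": 0.11, "business_as_usual": 0.87, "s_risk": 0.02}

reduction_a = baseline - expected_suffering(a)
reduction_b = baseline - expected_suffering(b)
```

Under these made-up inputs, the same one-percentage-point shift reduces expected suffering far more when it comes out of the s-risk scenario than when it moves business-as-usual probability into extinction.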
A helpful comment from a while back: https://forum.effectivealtruism.org/posts/rRpDeniy9FBmAwMqr/arguments-for-why-preventing-human-extinction-is-wrong?commentId=fPcdCpAgsmTobjJRB
Personally, I suspect there’s a lot of overlap between risk factors for extinction risk and risk factors for s-risks. In a world where extinction is a serious possibility, it’s likely that there would be a lot of things that are very wrong, and these things could lead to even worse outcomes like s-risks or hyperexistential risks.