FYI, if people want to look into what aliens might value, an interesting direction might be to think about convergent evolution. One (the only?) existing book on the topic is The Zoologist’s Guide to the Galaxy (there’s also a Quanta magazine article about it). Geoffrey Miller mentioned related work in a comment a few months ago.
Is this pointless speculation? I suspect that knowing what aliens might value would be useful for understanding how to better implement Evidential Cooperation in Large Worlds (ECL) (right now, right here, on Earth), although some people may disagree with me on that.
I’ve also encountered the view that this could help avoid or reduce conflicts with aliens (which may motivate work on it from various longtermist perspectives).
I’d guess this kind of work is particularly suited to people with a background in evolutionary biology or a related field, but it also seems like people can pick these things up quickly or use AI assistants to help out.
I was brought up in a very religious environment. After reading this comment, I’m reflecting on what I find off-putting about that upbringing:
The idea that there is a clear divide between good and evil.
The idea that there are unforgivable sins/heresies.
The idea that sexual things are bad, or are particularly bad.
Laying claim to humility and being the underdog even though one’s group has a lot of power.
The idea that arguing against sacred beliefs is bad.
Shaming those who have sinned and demanding that they repent.
The idea that everything considered evil must/will be punished severely.
… and more.
I find myself agreeing with much of the comparison that the comment makes.
I noticed that replies to ‘Community’ shortform posts aren’t automatically tagged ‘Community’. Maybe it’s worth fixing this?
I find it odd that many people’s ideas about other minds don’t involve, or even contradict, the existence of some non-arbitrary function that maps the (finite) number of discrete fundamental physical entities in a system (assuming physics is discrete) to a corresponding number of minds (or some quantifiable property of minds) in that same system.
I have intuitions (which could be incorrect) that “physics is all there is” and that “minds are ultimately physical,” and it feels possible, in principle, to unify them somehow and relate “the amount of stuff” in both the physical and mental domains through such a function.
To me, this solution proposed by Brian Tomasik (“count all subsets of all elements within systems”) appears to be among the plausible non-arbitrary options, and it could also be especially ethically relevant. Solutions like this, which suggest the existence of a very large number of minds, imply moral wagers, e.g. to minimize possible suffering in the kinds of minds implied to be most numerous (in this case, those comprising ~half of everything in the universe), which might make them worth investigating further.
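To make the scale of that wager concrete: a system of n discrete entities has 2^n subsets, and the subsets containing roughly n/2 of the entities vastly outnumber the rest, which is presumably where the “~half of everything” figure comes from. Here is a minimal sketch of that combinatorial point (the value of n and the 10% window are my own illustrative choices, not from Tomasik’s essay):

```python
# Sketch: a system of n discrete entities has 2**n subsets, and subsets
# containing roughly n/2 entities dominate the count (binomial concentration).
from math import comb

n = 100  # hypothetical number of fundamental entities in a small system

total = 2 ** n  # number of subsets of an n-element system
# Fraction of subsets whose size falls within 10% of n/2 (here, 40..60):
near_half = sum(comb(n, k) for k in range(int(0.4 * n), int(0.6 * n) + 1))

print(f"total subsets:               {total:.3e}")              # ~1.268e+30
print(f"fraction with ~n/2 elements: {near_half / total:.4f}")  # ~0.96
```

So under this counting scheme, the overwhelming majority of candidate “minds” would indeed be subsets containing about half of a system’s elements.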
Even if physics is continuous rather than discrete, it still seems possible that there could be a mapping from continuous physics to discrete minds. (Disclaimer: I don’t know much physics, and I haven’t thought much about how it relates to the philosophy of mind.)
This is all speculative and counterintuitive. On the other hand, common-sense intuitions developed through evolution might not accurately represent the first-person experiences, or lack thereof, of other systems. They seem instead to have evolved because picturing complicated, fitness-relevant systems as similar to one’s own mind helped in modeling them. Common-sense intuitions aren’t necessarily reliable, and counterintuitive conclusions could potentially be true.
I’m skeptical about the value of slowing down the leading AI labs, primarily because doing so would likely reduce the influence of EA values on the deployment of AGI/ASI. Anthropic is the clearest example of a lab whose people share these values, but I’d imagine that EAs also have more overlap with the staff at OpenAI and DeepMind than with the actors who would catch up because of a slowdown. And for what it’s worth, the labs were founded with the stated goal of benefiting humanity, before it became far more apparent that current paradigms have a high chance of resulting in AGI with the potential to grant profit and power to its human operators and investors.
As others have noted, people and powerful groups outside of this community and surrounding communities don’t seem interested in consequentialist, impartial, altruistic priorities like creating a positive long-term future for humanity; they are instead more self-interested. Personally, I’m more downside-focused, but I think it’s relevant to most EAs that other parties would be less willing to dedicate large amounts of resources to creating large amounts of happiness for others, and that, because of this, a reduction in the influence of EA values would mean a considerable loss of expected future value.
EDIT (2024-05-19): When I wrote this I had in mind Anthropic > OpenAI > DeepMind but Anthropic > DeepMind > OpenAI seems more sensible now. Unclear where to insert various governments/militaries/politicians/CEOs into this ranking.
Leopold Aschenbrenner makes some good points for “Government > Private sector” in the latest Dwarkesh podcast.
A while back I wrote that I agreed with the observation that some of (new-wave) EA’s norms seem similar to those of the religion imposed on me and others as children. My current thinking is that there may actually be a link between the culture of parts of Protestantism and some of the (progressive) norms EA has adopted, along with an atypical origin that probably deserves more scrutiny. The “link” part might be more apparent to people who’ve noticed a “god-shaped hole” in the West that makes some secular movements resemble certain parts of religions. The “origin” part might be less apparent, but Scott Alexander has discussed it before, so this theory isn’t all that original.
Essentially: Puritans, as one of four major cultures originating in the UK, exerted huge founder effects on America, which went on to influence both itself and other countries, for better or worse → Protestant culture gradually became more socially judgmental in some ways → more recently, people increasingly reject the existence of God but keep elements of that religion’s culture → EA now draws heavily from nth-generation ex-Protestants/Protestant-adjacents, who also tend to be more active in trying to change society (other people’s actions) and who approach it with some of the same inherited attitudes.
That is one causal chain, but a tree would show more causes and effects. For example, the Puritan founder effects probably also influenced modern academia (spearheaded in part by a few institutions in New England), which, again, EA draws heavily from. Other secular institutions might also be influenced by osmosis and produce downstream effects.
It seems difficult to believe that these attitudes simply disappeared without affecting other movements, culture, and society; the Puritan legacy also seems to have a track record of being quite influential.