Pause AI / Veganish
Let's do a bunch of good stuff and have fun, gang!
If "how do you deal with it" means "how do you convince yourself it is false, or that things some EA orgs are contributing to are still okay given it", I don't think this is a useful attitude to have towards troubling truths.
Well said and important :)
I don't really understand this stance; could you explain what you mean here?
Like Sammy points out with the Hitler example, it seems kind of obviously counterproductive/negative to "save a human who was then going to go torture and kill a lot of other humans".
Would you disagree with that? Or is the pluralism you are suggesting here specifically between viewpoints that suggest animal suffering matters and viewpoints that don't think it matters?
As I understand worldview diversification stances, the idea is something like: if you are uncertain about whether animal welfare matters, then you can take a portfolio approach where you spend some fraction of resources trying to increase human welfare at the cost of animals and a different fraction trying to increase animal welfare. The hope is that this nets out to positive both in "worlds where non-human animals matter" and in "worlds where only humans matter".
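To make that concrete, here is a toy sketch of the portfolio logic (my own illustration, not something from the thread; the fraction $f$ and the per-dollar welfare gains $h$ and $a$ are hypothetical symbols). If a fraction $f$ of resources goes to animal welfare and $1-f$ to human welfare, then very roughly

$$V_{\text{only humans matter}} \approx (1-f)\,h, \qquad V_{\text{animals matter too}} \approx (1-f)\,h + f\,a,$$

so (ignoring any harm the human-focused spending does to animals) the portfolio comes out positive under either worldview whenever $h, a > 0$, which is the sense in which it "nets out to positive" in both kinds of worlds.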
Are you suggesting something like that, or are you pointing to a deeper rule of "not concluding that the effects of other people's lives are net negative" when considering the second-order effects of whether to save them?
Note that the cost-effectiveness of epidemic/pandemic preparedness I got of 0.00236 DALY/$ is still quite high.
Point well-taken.
I appreciate you writing and sharing those posts that try to model and quantify the impact of x-risk work and question the common arguments given for astronomical EV.
I hope to take a look at those more in depth sometime and critically assess what I think about them. Honestly, I am very intrigued by engaging with well-informed disagreement around the astronomical EV of x-risk-focused approaches. I find your perspective here interesting and I think engaging with it might sharpen my own understanding.
:)
Interesting! This is a very surprising result to me because I am mostly used to hearing about how cost-effective pandemic prevention is, and this estimate seems to disagree with that.
Shouldn't this be a relatively major point against prioritizing biorisk as a cause area? (At least without taking into account strong longtermism and the moral catastrophe of extinction.)
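For scale, a quick back-of-the-envelope conversion of that figure (my own arithmetic, not something stated in the thread):

$$0.00236\ \text{DALY}/\$ \;\Longleftrightarrow\; \frac{1}{0.00236} \approx \$424\ \text{per DALY averted},$$

so whether it counts as "quite high" depends on which benchmark you compare it against.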
Fictional Characters:
I would say I agree that fictional characters aren't moral patients. That's because I don't think the suffering/pleasure of fictional characters is actually experienced by anyone.
I take your point that you don't think that the suffering/pleasure portrayed by LLMs is actually experienced by anyone either.
I am not sure how deep I really think the analogy is between what the LLM is doing and what human actors or authors are doing when they portray a character. But I can see some analogy and I think it provides a reasonable intuition pump for times when humans can say stuff like "I'm suffering" without it actually reflecting anything of moral concern.
Trivial Changes to Deepnets:
I am not sure how to evaluate your claim that only trivial changes to the NN are needed to have it negate itself. My sense is that this would probably require more extensive retraining if you really wanted to get it to never role-play that it was suffering under any circumstances. This seems at least as hard as other RLHF "guardrails" tasks unless the approach was particularly fragile/hacky.
Also, I'm just not sure I have super strong intuitions about that mattering a lot because it seems very plausible that just by "shifting a trivial mass of chemicals around" or "rearranging a trivial mass of neurons" somebody could significantly impact the valence of my own experience. I'm just saying, the right small changes to my brain can be very impactful to my mind.
My Remaining Uncertainty:
I would say I broadly agree with the general notion that the text output by LLMs probably doesn't correspond to an underlying mind with anything like the sorts of mental states that I would expect to see in a human mind that was "outputting the same text".
That said, I think I am less confident in that idea than you, and I maybe don't find the same arguments/intuition pumps as compelling. I think your take is reasonable and all; I just have a lot of general uncertainty about this sort of thing.
Part of that is just that I think it would be brash of me in general to not at least entertain the idea of moral worth when it comes to these strange masses of "brain-tissue inspired computational stuff" which are totally capable of all sorts of intelligent tasks. Like, my prior on such things being in some sense sentient or morally valuable is far from 0 to begin with just because that really seems like the sort of thing that would be a plausible candidate for moral worth in my ontology.
And also I just don't feel confident at all in my own understanding of how phenomenal consciousness arises / what the hell it even is. Especially with these novel sorts of computational pseudo-brains.
So, idk, I do tend to agree that the text outputs shouldn't just be taken at face value or treated as equivalent in nature to human speech, but I am not really confident that there is "nothing going on" inside the big deepnets.
There are other competing factors at this meta-uncertainty level. Maybe I'm too easily impressed by regurgitated human text. I think there are strong social/conformity reasons to be dismissive of the idea that they're conscious, etc.
Usefulness as Moral Patients:
I am more willing to agree with your point that they can't be "usefully" moral patients. Perhaps you are right about the "role-playing" thing, and whatever mind might exist in GPT produces the text stream more as a byproduct of whatever it is concerned about than as a "true monologue about itself". Perhaps the relationship it has to its text outputs is analogous, at some deep level, to the relationship an actor has to a character they are playing. I don't personally find the "simulators" analogy compelling enough to really think this, but I permit the possibility.
We are so ignorant about the nature of GPTs' minds that perhaps there is not much we can really even say about what sorts of things would be "good" or "bad" with respect to them. And all of our uncertainty about whether/what they are experiencing almost certainly makes them less useful as moral patients on the margin.
I don't intuitively feel great about a world full of nothing but servers constantly prompting GPTs with "you are having fun, you feel great" just to have them output "yay" all the time. Still, I would probably rather have that sort of world than an empty universe. And if someone told me they were building a data center where they would explicitly retrain and prompt LLMs to exhibit suffering-like behavior/text outputs all the time, I would be against that.
But I can certainly imagine worlds in which these sorts of things wouldn't really correspond to valenced experience at all. Maybe the relationship between an NN's stream of text and any hypothetical mental processes going on inside it is so opaque and non-human that we could not easily influence those mental processes in ways that we would consider good.
LLMs Might Do Pretty Mind-Like Stuff:
On the object level, I think one of the main lines of reasoning that makes me hesitant to more enthusiastically agree that the text outputs of LLMs do not correspond to any mind is my general uncertainty about what kinds of computation are actually producing those text outputs and my uncertainty about what kinds of things produce mental states.
For one thing, it feels very plausible to me that a "next token predictor" IS all you would need to get a mind that can experience something. Prediction is a perfectly respectable kind of thing for a mind to do. Predictive power is pretty much the basis of how we judge which theories are true scientifically. Also, plausibly it's a lot of what our brains are actually doing and thus potentially pretty core to how our minds are generated (cf. predictive coding).
The fact that modern NNs are "mere next token predictors" on some level doesn't give me clear intuitions that I should rule out the possibility of interesting mental processes being involved.
Plus, I really don't think we have a very good mechanistic understanding of what sorts of "techniques" the models are actually using to be so damn good at predicting. Plausibly none of the algorithms being implemented or "things happening" bear any similarity to the mental processes I know and love, but plausibly there is a lot of "mind-like" stuff going on. Certainly brains have offered design inspiration, so perhaps our default guess should be that "mind-stuff" is relatively likely to emerge.
Can Machines Think:
The Imitation Game proposed by Turing attempts to provide a more rigorous framing for the question of whether machines can "think".
I find it a particularly moving thought experiment if I imagine that the machine is trying to imitate a specific loved one of mine.
If there were a machine that could nail the exact I/O patterns that my girlfriend produces, then I would be inclined to say that whatever sort of information processing occurs in my girlfriend's brain to create her language capacity must also be happening in the machine somewhere.
I would also say that if all of my girlfriend's language capacity were being computed somewhere, then it is reasonably likely that whatever sort of mental stuff goes on to generate her experience of the world would also be occurring.
I would still consider this true without having a deep conceptual understanding of how those computations were performed. I'm sure I could even look at how they were performed and not find it obvious in what sense they could possibly lead to phenomenal experience. After all, that is pretty much my current epistemic state with regard to the brain, so I really shouldn't expect reality to "hand it to me on a platter".
If there were a machine that could imitate a plausible human mind in the same way, should I not think that it is perhaps simulating a plausible human in some way? Or perhaps using some combination of more expensive "brain/mind-like" computations in conjunction with lazier linguistic heuristics?
I guess I'm saying that there are probably good philosophical reasons for having a null hypothesis in which a system that is largely indistinguishable from a human mind should be treated as though it is doing computations equivalent to a human mind. That's pretty much the same thing as saying it is "simulating" a human mind. And that very much feels like the sort of thing that might cause consciousness.
I appreciate you taking the time to write out this viewpoint. I have had vaguely similar thoughts in this vein. Tying it into Janus's simulators and the stochastic parrot view of LLMs was helpful. I would intuitively suspect that many people would have an objection similar to this, so thanks for voicing it.
If I am understanding and summarizing your position correctly, it is roughly that:
The text output by LLMs is not reflective of the state of any internal mind in a way that mirrors how human language typically reflects the speaker's mind. You believe this is implied by the fact that the LLM cannot be effectively modeled as a coherent individual with consistent opinions; there is not actually a single "AI assistant" under Claude's hood. Instead, the LLM itself is a difficult-to-comprehend "shoggoth" system, and that system sometimes falls into narrative patterns in the course of next token prediction which cause it to produce text in which characters/"masks" are portrayed. Because the characters being portrayed are only patterns that the next token predictor follows in order to predict next tokens, it doesn't seem plausible to model them as reflecting an underlying mind. They are merely "images of people" or something, like a literary character or one portrayed by an actor. Thus, even if one of the "masks" says something about its preferences or experiences, this probably doesn't correspond to the internal states of any real, extant mind in the way we would normally expect when humans talk about their preferences or experiences.
Is that a fair summation/rewording?
Adjacent to this point about how we could improve EA communication, I think it would be cool to have a post that explores how we might effectively use, like, Mastodon or some other method of dynamic, self-governed federation to get around this issue. I think this issue goes well beyond just the EA forum in some ways lol.
Good suggestion! Happy Ramadan! <3
Just for the sake of feedback, I think this makes me personally less inclined to post the ideas and drafts I have been toying with, because it makes me feel like they are going to be completely steamrolled by a flurry of posts by people with higher status than me and it wouldn't really matter what I said.
I don't know who your target demo here is, and it sounds like a "flurry of posts by high-status individuals" might have been your main intention anyway. However, please note that this doesn't necessarily help you very much if you are trying to cultivate more outsider perspectives.
In any case, you're probably right that this will lead to more discussion, and I am interested to see how it shakes out. I hope you'll write up a review post or something to summarize how the event went, because it's going to be hard to follow that many posts about different topics and the corresponding discussion they each generate.
I am very unclear on why research that involves game theory simulations seems dangerous to you. I think I'm ignorant of something leading you to this conclusion. Would you be willing to explain your reasoning or send me a link to something so I can better understand where you're coming from?
Could you expound on this or maybe point me in the right direction to learn why this might be?
I tend to agree with the intuition that s-risks are unlikely because they are a small part of possibility space and that nobody is really aiming for them. I can see a risk that systems trained to produce eudaimonia will instead produce -1 x eudaimonia, but I can't see how that justifies thinking that astronomical bad is more likely than astronomical good. Surely a random sign flip is less likely than not.
Sure thing! I don't think it'll be all that polished or comprehensive since it is mostly intended to help me straighten out my reasoning, but I would be more than happy to share it.
Thank you for the survey info! I was favorably surprised by some of those results.
Thank you so much! This is exactly the sort of thing I am looking for. I'm glad there is high-quality work like this being done to advance strategic clarity surrounding TAI and I appreciate you sharing your draft.
I hadn't heard about Ayuda Efectiva, but it looks like a great introductory resource and I'll definitely send it to her. Reaching out to those groups might also be a good idea. I appreciate the help!
Hey everybody!
One of my friends is interested in learning more about EA and I am trying to find good resources to recommend to her. The thing is, her English is only so-so; her preferred language is Spanish. I found a couple websites that give brief overviews of some EA ideas, but I am having a hard time finding comprehensive EA texts in Spanish.
Does anyone know of any EA resources in Spanish that could be helpful?
Thanks!
I think the money goes a lot further when it comes to helping non-human animals than when it comes to helping humans.
I am generally pretty bought into the idea that non-human animals also experience pleasure/suffering and I care about helping them.
I think it is probably good for the long-term trajectory of society to have better norms around the casual cruelty and torture inflicted on non-human animals.
On the other hand, I do think there are really good arguments for human-to-human compassion and the elimination of extreme poverty. I am very in favor of that sort of thing too. GiveDirectly in particular is one of my favorite charities just because of the simplicity, compassion, and unpretentiousness of the approach.
Animal welfare wins my vote not because I disfavor human-to-human welfare, but just because I think that the same amount of resources can go a lot further in helping my non-human friends.