I recently found out about the EA Gather Town and I really like it. Could that be linked here? It doesn't show up easily under the online spaces link: https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge
VeryJerry
Re: assumption 1, "The underlying effect of life events is exactly the same": what if that's actually not the case? A couple of brainstormed ideas on ways it might not be:
Maybe some new environmental factor, like microplastics or hormone disruptors, is changing the way we experience good and bad events, making them less salient?
Maybe more hyper-salient stuff, like junk food or the emotional experiences from media like movies, is affecting how we experience those things?
(Idk how to indent on mobile) For example, with movies, maybe vicariously experiencing an intense event, complete with a musical score and everything else, leaves the real-life event feeling dull in comparison? I've heard Sam Harris touch on a similar point: it used to be that you only got an up-close, face-to-face experience with someone by actually being near them, and you were "implicated" in it; your actions affected them and how they saw you. With a movie, you get part of the feeling of intimacy without being implicated: you can be slobbing it up on the couch and the result is the same.
(Another indent) Perhaps ragebait in the news and on social media is more salient than life events, leaving actually frustrating things with less of an impact?
AFAIK depression rates are increasing; maybe depressed people experience things less saliently, and we see the effects of that across the spectrum, even for "subclinical" depression?
Maybe if you know you'll be mostly OK even if a bad thing happens, whether from social safety nets or good planning or whatever, then it happening is less salient? And for a good thing, being OK before it happens makes it less exciting: you're going from not-OK to OK, rather than from OK to better.
I'm sure there are others, but those are the main ones I could think of. Not sure whether they're true, though.
I really hope whoever gets this also cares about AI being morally aligned with all sentient beings, not just humans. Cc @Ronen Bar: do you know anybody in the AI moral alignment space who would be able to put forward a good proposal?
Thanks for the response! That sounds pretty good. I know I've definitely given stuff to friends and said "Donate to X instead of paying me," so this seems like a good way to enable that. I agree with the other comments that direct-to-charity seems much more trustworthy.
My first thought is: "Is this money going to the person being tipped, or to a charity? If the latter, which charity?" After a minute of poking around the site and thinking about it, my next question is: who's the target audience/donor? Is it somebody who doesn't know where to donate their money, but trusts somebody else (whoever they "tip") to donate it well?
I think a lot of people use cultured meat not being available as an excuse to keep eating animal flesh, and when it is available, they'll just find a new excuse not to change.
AGI by 2028 is more likely than not
AI 2027
Just applied!
I don't judge people for having a different eating pattern than me (I eat like 90% Plenny Shake 😅), but I do judge people who aren't vegan. That question tripped me up a bit; I think I answered "somewhat agree," but in the spirit of it I probably should have answered "strongly agree."
How can we unhypocritically expect AI superintelligence to respect “inferior” sentient beings like us when we do no such thing for other species?
How can we expect AI to have “better” (more consistent, universal, compassionate, unbiased, etc) values than us and also always only do what we want it to do?
What if extremely powerful, extremely corrigible AI falls into the hands of a racist? A sexist? A speciesist?
Some things to think about if this post doesn’t click for you
Don't forget that other species were having experiences before we even existed, and still do. I'd be surprised if humans account for any significant fraction of the total experiences ever had; for example, we kill more non-human animals on factory farms each year than the total number of humans who have ever lived.
I was just thinking about writing a post like this after listening to https://www.astralcodexten.com/p/introducing-ai-2027, especially the end where they talk about getting into blogging, and thinking about the massive blind spot Rationalists seem to have for sentientism. I'm particularly interested in ways to get involved and help push this cause forward, especially as someone who, frankly, feels pretty helpless given the massive scale of non-human suffering, the mass of human apathy towards it, and the many flaws in the current animal rights movement.
Yeah, that part I'm less sure about, especially since it's in large part a subset of aligning AI to any goals in the first place. I plan to write a post soon on what makes different values "better" or "worse" than others; maybe we can set up a brainstorming session on that post? I think it will be much more directly applicable to AI moral alignment.