Developing my worldview. Interested in meta-ethics, epistemics, psychology, AI safety, and AI strategy.
Jack R
Berkeley group house, spots open
Conditional on an alien intelligence being responsible for the recent UFO/UAP discussions/evidence, they are more advanced than us. And if they are more advanced than us, they are most likely much more advanced than us (e.g. the difference between now and 1 AD on Earth is cosmologically very small, but technologically pretty big).
Would you say that incorporating the “advice” of one of my common-sense moral intuitions (e.g. “you should be nice”) can be considered part of a process called “better evaluating the EV”?
And here
Yeah that makes sense
Re: 19, part of why I don’t think about this much is that I assume any alien intelligence is going to be much more technologically advanced than us, and so there probably isn’t much we can do in case we don’t like their motives.
7. Reminded me of Mark Xu’s similar criticism
Yeah, that’s pretty much what I was imagining, though I think the best insight is going to come from a more deliberate effort to seek out a different worldview, since e.g. the people with the most different worldviews aren’t going to be in EA (probably far from it).
For instance, political ideologies
How various prominent ideologies view the world, e.g. based on in-depth conversations
I think I understood that’s what you were doing at the time of writing, and mostly my comment was about bullets 2-5. E.g. yes, “don’t care about future people at all” leads to conclusions you wouldn’t endorse, but what about discounting future people with some discount rate? I think this is what the common-sense intuition does, and maybe this should be thought of as a “pure value” rather than a heuristic. I wouldn’t really know how to answer that question, though; maybe it’s dissolvable and/or confused.
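To make “some discount rate” concrete, here is a rough sketch assuming a constant exponential discount (my simplifying assumption; the common-sense intuition needn’t take this form): weight each person by

$$w(t) = e^{-\delta t}$$

where $t$ is how many years in the future they live. At $\delta = 1\%$ per year, someone a decade out still gets weight $e^{-0.1} \approx 0.9$, while someone 1,000 years out gets $e^{-10} \approx 4.5 \times 10^{-5}$, so even a small constant rate leaves almost no weight on the very far future.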
[Linkpost] Can lab-grown brains become conscious?
[Question] What domains do you wish some EA was an expert in?
Some thoughts:
You can think of the common-sense moral intuition (like Josh’s) as a heuristic rather than “a pure value” (whatever that means): a heuristic subtly ties together a value with empirical beliefs about how to achieve that value.
Discarding this intuition might mean you are discarding empirical knowledge without realizing it.
Even if the heuristic is a “pure value,” I’m not sure why it’s not allowed for that value to just discount things more the farther they are away from you. If this is the case, then valuing the people in your cases is consistent with not valuing humans in the very far future.
And if it is a “pure value,” I suppose you might say that some kind of “time egalitarianism” intuition might fight against the “future people don’t matter as much” intuition. I’m curious where the “time egalitarianism” intuition comes from in this case, and if it’s really an intuition or more of an abstract belief.
And if it is a “pure value,” perhaps the intuition shouldn’t be discarded, or at least not completely discarded, since agents with utility functions generally don’t want those utility functions changed (though this has questionable relevance).
My primary reaction to this was “ah man, I hope this person doesn’t inadvertently annoy important people while arguing that AI safety is important, hurting the reputation of AI safety/longtermism/EA, etc.”
Thanks for sharing!
A counterpoint I thought of: it seems like if there is no consequential coevolution happening even between humans and other mammalian species, then AI could grow on such fast time scales that humans can’t hope to have meaningful coevolution with it.
Does it seem dubious to you because the world is just too chaotic? How would you describe the reasons for your feeling about this?
Not sure, but it feels like maybe being targeted multiple times by a large corporation (e.g. Pepsi) is less annoying than being targeted by a more niche thing
I’ve been taking a break from the EA community recently, and part of the reasoning behind this has been to search for a project/job/etc. that I would have very high “traction” on, e.g. the sort of thing where I gladly spend 80+ hours per week working on it and think about it in the shower.
So one heuristic for leaving and exploring could be “if you don’t feel like you’ve found something you could have high traction on and excel at, and you haven’t spent at least X months searching for such a thing, consider spending time searching”
To me it seems there is, yes. For instance, see this Harvard professor and this Stanford professor talk about aliens.