lauren’s Quick takes
I would appreciate the moderators revealing the identity of OP’s main account. Shame and poor taste don’t matter when someone is simply malicious; active defense is needed, which means knowing whom not to invite to interact.
(Though I’d also note: based on OP’s temp-account username, I infer they intend this as a threat about the future; by implication from “Ozais cometh”, I suspect they’re claiming something along the lines of anticipating that The Handmaid’s Tale will soon come true. This is a popular belief among people who are inclined to write manifestos like this on reddit. It’s quite possible OP is simply new here in the first place. An EA project to counter this sort of activism more directly, but in ways that take a very different approach from those standard in today’s politics, could be very promising if it were sufficiently creative. The activists who promote this sort of violence are rather creative in their approach, and are, as you can see, motivated by the belief that might makes right in sexual interaction. There is a real network of belief and behavior backing this “men of culture” style of violence.)
it does now, yup!
That post is not public at the time I’m writing this comment, I think.
Here’s an automatic transcription of it with automatic speaker separation.
Perhaps that link could go in the main post, or perhaps the contents of the transcription could even be copied into the post body.
edit: and while I’m at it, here’s one for the vegetarianism video.
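(For anyone who wants to produce this kind of transcript themselves, here is a minimal sketch of one way to combine automatic transcription with speaker separation. The choice of openai-whisper and pyannote.audio, the model names, the file name `audio.wav`, and the Hugging Face token are all assumptions for illustration, not necessarily the tooling used for the links above.)

```python
# A rough sketch: transcribe with openai-whisper, label speakers with pyannote.audio.
# File name, model sizes, and the auth token below are placeholders/assumptions.
import whisper
from pyannote.audio import Pipeline

AUDIO = "audio.wav"  # placeholder path to the video's extracted audio

# 1. Speech-to-text with timestamps.
asr_model = whisper.load_model("base")
transcript = asr_model.transcribe(AUDIO)

# 2. Speaker diarization: who speaks when.
diarizer = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="HF_TOKEN",  # placeholder Hugging Face token
)
diarization = diarizer(AUDIO)

def speaker_at(t: float) -> str:
    """Return the diarized speaker label whose turn contains time t, if any."""
    for turn, _, label in diarization.itertracks(yield_label=True):
        if turn.start <= t <= turn.end:
            return label
    return "unknown"

# 3. Merge: tag each transcribed segment with the speaker active at its start.
for seg in transcript["segments"]:
    print(f'{speaker_at(seg["start"])}: {seg["text"].strip()}')
```

This just looks up which diarized speaker turn contains each transcribed segment’s start time; fancier alignment is possible, but this is enough for a rough speaker-separated transcript.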
The difference between eugenics and transhumanism is consent. Eugenics cannot be rehabilitated; the word is irrevocably bound to refer to an incredible consent violation, to the point where even calling it merely a consent violation dishonors the people who died at the hands of the Nazis. Heal genetic diseases, don’t violate and murder people. Do not commit eugenics.
Just so we’re clear: self-driving cars are, in fact, one of the key factors pushing timelines down, and that field has also done some pretty impressive work on non-kill-everyone-proof safety, which may be useful as hunch seeds for AI notkilleveryoneism.
They’re not the only source of interesting research, though.
Also, I don’t think most of us who expect AGI soon expect reliable AGI soon. I certainly don’t expect reliability to come early at all by default.
Mild tangent, but ultimately not really a tangent:
> The whole *point* of civilization is moving away from a state of base natural anarchy, where your value is tied to your capability
Yeah, maybe; but anarchy.works. Non-authoritarianism, as the word was originally meant, is about forming stable multiscale bonds of non-dominating microsolidarity. Non-archy has worked very well before; in order to work well, there has to be a large cooperation bubble that prevents takeover by authority structures.
That isn’t what you meant, of course; you meant destructive chaos, the meaning usually expected from the word. But I claim it’s worth understanding why the word anarchy has such strong detractors and supporters, and learning what the underlying principles of those ethics are.
I strongly agree with the point actually being made by the word in this context, and with the entire comment I’m replying to; I just wanted to comment on the word as used.
This approach to reasoning assumes authorities are valid. Do not trust organizations this way; that kind of trust is one of effective altruism’s key failings. How can we increase pro-social distrust in effective altruism so that authorities are not trusted this way?
Question from a friend who isn’t super familiar with EA’s monetization-focused approach to welfare economics:
> How are costs for Animal Welfare calculated? A happy animal doesn’t generate any money.
Anyone feel able to explain this from scratch concisely?
If those are the doors, well, I feel like we’re pretty doomed. Perhaps there are one or more doors that fix the criticisms both straw people apply to each other.