Information system designer. https://aboutmako.makopool.com
Conceptual/AI writings are mostly on my LW profile https://www.lesswrong.com/users/makoyass
Hmm, although I think I get what you mean, I’m not sure how it could actually be true given that (preference) utility functions are scale and offset invariant, so the extent of an agent’s caring can only be described relative to the other things they care about?
It’s sort of implicitly saying “I think that your time, and your development, is worth much less than mine”. I wish we were the kind of community where people would say that to my face, then we could have a conversation and find out whether that’s really true.
That’s especially easy to do where I live, we don’t have factory farming here (cows go for “finishing” on grain feed only at the end of their lives, for a short time, too short for serious stomach problems). Their lives kinda seem positive on net.
However, the conservation issues are worse, methane emissions are high, and runoff from farming messes up the streams and lakes, threatening many native fish species. [realizes I’m talking to Brian Tomasik] Evolution spent billions of years creating these species. I don’t think we’ll ever create anything quite like them. There’s a sacred kind of beauty in them: they’re real, and they’d be one of the few things we know that weren’t created by us or our peers. They were created by the thing that created us.
We’ll find them very beautiful one day.
Losing them wouldn’t be such a tragedy if we could just record the epigenomes and the womb environments, so that we could reconstruct them in a more humane form, later.
But I don’t think we’re doing that?
(Can the womb environment be figured out from the epigenomes and simulations?)
You should only put approximately zero weight on anecdotes that got to you through a sensationalism-maximizing curation system with biases you don’t understand, which I hope this wasn’t? Regardless, the anecdotes are mostly just meant to be clarifying examples of the kind of effect that I am trying to ask about, I don’t expect people to pass them along or anything.
I decided not to talk about biological plausibility, because I don’t get the impression pharmacology or nutrition follows reductive enough rules that anyone can reason a priori about it very well. It will surprise us, it will do things that don’t make sense. I actually wrote out a list of evolutionary stories that end up producing this effect, some of them quite interesting, but I decided not to post them, because it seemed like a distraction.
I guess I’ll post some theories now though:
- This sort of phantasic creativity was not useful in pre-industrial societies, because there was no way to go far beyond the social consensus reality and prove that you were right and do anything with it (that’s only the case today because of, basically, the investment cycle, and science and technology, which took hundreds of years to start functioning after they were initially proposed). The body needed an excuse to waste creatine, so in sapiens it only did so when we ate an abnormal amount of meat, but sudden gluts of meat would occur frequently enough for the adaptation to be maintained.
- Or maybe eating lots of meat/fish was kind of the norm for millions of years for dominant populations (I can cite this if you’re that interested). And maybe there’s a limit to how fast the body can replenish brain-creatine (investigate this assumption, Gregger seemed implicitly sus about it). In that case, we might have an effect where the brain implements creatine frugality by lowering our motivation to think in phosphocreatine-burning ways, which then may lead to a glorification of that frugality, which then becomes sticky. This could be a recurring class of motivation disorder that might even generalize to early AI, so I’d find it super interesting.
The note about choline in wheat is interesting. I wonder if it’s bioavailable? I think I can remember situations where I’ve craved meat/eggs and thought “but some good bread would also do”.
Oh, but, I went and dug a little bit (because I take my immediate close friends’ reports seriously) and it turns out that the “choline” in wheat is a pretty different molecule? betaine? https://veganhealth.org/choline/#rec Looking at it, I don’t think it would be a big stretch to say that the resemblance between betaine and choline is only a little bit closer than the resemblance between phytoestrogen and estrogen… but maybe they mean that they’ve been observed to actually have similar effects?
Regardless, that page also says that you’re probably not naturally going to get the AI (Adequate Intake) from what you’re eating (although you personally eat a lot of bean burritos, right? So maybe you actually are (and btw, I think you’re very high in pragmatic-creativity! (You’re the only person aside from me who ever designed a puzzle for crycog, and you did it immediately, verbally, after playing it. Your distributed computing stuff also seemed pretty great.))). This is the case for a lot of nutrients, but for choline it might actually matter quite a bit to be below the AI.
are multi-stage (sequentially dependent?) breakthroughs more impressive than a similar number of breakthroughs that aren’t sequentially dependent or that happen far apart in time from each other?
Yes, because… it means they couldn’t have just been picking low-hanging fruit. When one problem leads to another, you don’t get to wander off and look for easier ones, you have to keep going down one of the few avenues of this particular cave system. So if someone solved a contiguous chain of problems, you can be fairly sure that some of them were genuinely really hard. It also requires them to develop their own understanding of something that nobody could help them with, and to internalize that deeply enough to keep going.
Sequences like this occur naturally in real-world projects, so if they’re avoiding them it’s kinda telling.
more are needed until something valuable can be produced?
Hard to say; more are needed before we can make a judgement. I’d believe that lots of value can be produced without any of these big leaps.
“genius” doesn’t seem usefully precise to me? (Is a genius even still a genius once they’ve found their way into a part of the world where their level of pragmatic creativity is ordinary?)
I’m looking for a sort of… ability to go for extended, rapid, complicated traversals of broad unfamiliar territories in your head, alone, without getting lost, and to find something of demonstrable value that no one has ever seen before. That kind of thing.
That list might be a good start, but I don’t know. Can you show us examples of divergent, multi-stage, needle-in-a-haystack breakthroughs that those people made while they were years into a vegan diet? I haven’t looked closely at really any of these people’s work, and there’s a kind of relevant reason for that. A lot of them (Singer, MacAskill) are mostly apologists. They work mainly with familiar premises in highly legible ways. The reason most people read them is the reason I don’t read them, and the reason I am concerned that a vegan diet tends to limit the ways people can think.
Tomasik is an interesting example though, I’ve gotten the sense that he has that character, but haven’t seen any intense output from him. Recommend some?
Examples of EA-adjacent people who I’d consider to have this quality include Yudkowsky, Wei Dai, Vanessa Kosoy, Robin Hanson (none were vegan during their breakthroughs afaict?).
It might be worth asking which way the causality’s running here. A very EA-charitable answer might say something like: “Being humble and accountable (which leads to doing less risky, more legible, and more approachable work) probably raises a person’s inclination to become vegan.” (It’s kind of interesting that, as far as I can tell, long-time vegans, the Brahmins, would argue the opposite causality: “Being vegan decreases the mode of passion, but that’s good for your spiritual path.”)
It wasn’t my prior at all. It was a response to observations that took many years to arrive at, it is also a response to some empirical evidence, which I mentioned.
I think it would have gotten better answers there, but I’d guess that if this health trap is real, LWers are much less likely to fall into it (they’d notice the loss), and veganism isn’t really as on-topic there.
EA Auckland Research Gang Chat got curious about the details of Ramanujan’s diet. This account of his medical history during his time in England has a lot of cool details.
It seems like practicing Brahmins are not supposed to eat eggs, but he seems to have eaten some when he was at this sanatorium and couldn’t get anything better. Interesting note:
Rajasic foods [forbidden] are foods that increase ones mode of passion which is also an obstacle in their spiritual path.
It does seem to me that if you want to avoid flights of passion, you’ll avoid the mode of thought I associate with being well stoked with creatine.
I’m curious, how long have they been living under these rules? (Historical evidence seems somewhat unclear: “No Brahmin, no sacrifice, no ritualistic act of any kind ever, even once, is referred to” in any Indian texts between the third century BCE and the late first century CE. The same source also states that “The absence of literary and material evidence, however, does not mean that Brahmanical culture did not exist at that time, but only that it had no elite patronage and was largely confined to rural folk, and therefore went unrecorded in history”.)
we tech/ea/ai people are overly biased about the actual relevance of our own field (I’m a CS student)?
You can just as easily say that global institutions are biased about the relevance of their own fields, and I think that is a good enough explanation: Traditional elite fields (press, actors, lawyers, autocrats) don’t teach AI, and so can’t influence the development of AGI. To perform the feats of confidence that gain or defend career capital in those fields, or to win the agreement and flattery of their peers, they have to avoid acknowledging that AGI is important, because if it’s important, then none of them are important.
But, I think this dam will start to break, generally. Economists know better than the other specializations; they have the background in decision theory to know what superintelligence will mean, and they see what’s happening in industry. The military is also capable of sometimes recognizing and responding to emerging risks. They’re going to start to speak up, and then maybe the rest of the elite will have to face it.
Theory: seeing other people happily doing something is a stronger signal to the body about whether it’s okay than proprioception error is. Consider: seeing other people puke is gonna make you want to puke. Maybe seeing people not puke at all does the opposite.
I’m pretty sure rationality and rationalization read the same though? That’s sort of the point of rationalization. The distinction, whether the evidence is being sampled in a biased way, is often outside of the text.
I actually think EA is extremely well positioned to eat that take, digest it, become immensely stronger as a result of it, remain standing as the only great scourge-pilled moral community, immune to preference falsification cascades, members henceforth unrelentingly straightforward and authentic about what their values are, and so, more likely to effectively pursue them, instead of fake values that they don’t hold.
Because:
Most movements operate through voting. We instead tend to operate through philanthropy (often anonymized philanthropy) and career change, which each require an easily quantifiable sacrifice, so they’re much closer to being unfakable signals of revealed preference.
EA is sort of built on the foundation of rationalism where eating nasty truths and accepting nasty truthtellers is a norm.
A lot of that theory also makes negotiating peace between conflicting factions easier. Statistics and decision theory form a basis for, and an intro to, economic theory and cooperative bargaining theory, for instance. And the orthogonality thesis, the claim that an intelligent thing can also have values that conflict with ours, is also the claim that a person with values that conflict with ours can be intelligent (and so worthy of respect)!
Oh thank you, I might. Initially I Had Criticisms, but as with the FLI worldbuilding contest, my criticisms turned into outlines of solutions and now I have ideas.
I was more interested in the obesity analogy and where that might lead, but I think you only ended up doing a less productive recapitulation of Bostrom’s vulnerable world hypothesis.
I think “knowledge explosion” might be a more descriptive name for that, though I’m not sure it’s better strategically (do you really want your theory to be knee-jerk opposed by people who think that you want to generally halt the production or propagation of knowledge?).
Knowledge Obesity though… I’d be interested in talking about that. It’s a very good analogy. Look at twitter, it’s so easy to digest, extremely presentist, hot takes, conspiracy theories, sounds a lot like the highly processed salted fat and sugar of information consumption, to me.
The places where the analogy breaks are interesting. I suspect that it’s going to be very hard to develop standards and consensus about healthy information diets, because modernity relies on specialization, so we all have to read very different things. Some people probably should drink from the firehose of digestible news. Most of us shouldn’t, but figuring out who should and shouldn’t and how they should all fit together is, like, the biggest design problem in the world, and I’ve never seen anyone aspire to it. The people who should be doing it, recruiters or knowledge institutions, are all reprehensibly shirking their duty in one way or another.
I’m interested in the claim that the networks (and so, the ideas) of textual venues are going to stay the same as the networks of voice venues. It’s possible, there’s a large overlap between oral and textual conversation, but there are also divergences, and I don’t know if it’s clear yet whether those will grow over time or not.
Voice dialog can traverse topics that’re really frustrating and aversive in text. And I find that the people I enjoy hanging out with in VR are a bit different from the ones I enjoy hanging out with in text. And very different in terms of who I’d introduce to which communities. The social structures haven’t had time to diverge yet, and most of us are most oriented in text and don’t even know the merits of voice and how to use them.
But yeah, I think it’s pretty likely that text and voice systems are never going to come far apart. And I’m planning on trying to hold them together, because I think text (or at least, the text venues that are coming) is generally more wholesome than voice and voice could get really bad if it splits off.
The claim that people aren’t going to change… I don’t think that’s true. VR makes it easy to contact and immerse oneself in transformative communities. Oddly… I have experienced doing something like that in a textual online community (we were really great at making people feel like they were now a different kind of person, part of a different social network, on a different path), but I think VR will tend to make that a lot more intense, because there’s a limit to how socially satisfying text relationships can be, and with VR that limit kind of isn’t there.
Understandable. I wish I’d put more thought into the title before posting, but I vividly remember having hit some sort of nontrivial stamina limit.
I think this could benefit from being expanded. I can only assume you’re referring to the democratization of access to knowledge. It’s not at all obvious why this is something we need to prepare for or why it would introduce any non-obvious qualitative changes in the world rather than just generally making it go a bit faster.
I believe I could do this. My background is just writing, argument, and constitution of community, I guess.
An idea that was floated recently was an interactive site that asks the user a few questions about themselves and their worldview, then targets an introduction to them.
I’m not sure how strong the need actually is, though. I get the impression that EA is such a simple concept (reasoned, evidenced moral dialog; earnest consequentialist optimization of our shared values) that most misunderstandings of what EA is are a result of deliberate misunderstanding, and having better explanations won’t actually help much. It’s as if people don’t want to believe that EA is what it claims to be.
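For concreteness, here’s a minimal sketch of what that targeting could look like; the question keys, audience buckets, and intro texts are all hypothetical placeholders, not a real design:

```python
# Hypothetical sketch of the quiz-to-introduction targeting idea.
# All question keys, buckets, and intro texts are placeholders.

INTROS = {
    "quant": "EA as applied decision theory: comparing interventions by expected impact.",
    "skeptic": "EA as an error-correcting community: public reasoning and open criticism.",
    "caring": "EA as taking 'doing good' seriously enough to check what actually helps.",
}

def pick_intro(answers: dict) -> str:
    """Choose an intro framing from a small questionnaire's answers."""
    if answers.get("background") in ("engineering", "economics", "math"):
        return INTROS["quant"]
    if not answers.get("trusts_institutions", True):
        return INTROS["skeptic"]
    return INTROS["caring"]

if __name__ == "__main__":
    # e.g. a visitor with a design background who broadly trusts institutions
    print(pick_intro({"background": "design", "trusts_institutions": True}))
```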
It’s been a long time since I was outside of the rationality community, but I definitely remember having some sort of negative feeling about the suggestion that I can be better at foundational capacities like reasoning, or in EA’s case, knowing right from wrong.
I guess a solution there is to convince the reader that rationality/practical ethics isn’t just a tool for showing off for others (which is zero-sum, and so we wouldn’t collectively benefit from improvements in the state of the art), and that being trained in it would make their life better in some way. I don’t think LW actually developed the ability to sell itself as self-help (I think it just became a very good analytic philosophy school). I think that’s where the work needs to be done.
What bad things will happen to you if you reject a VNM axiom or tell yourself pleasant lies? What choking cloud of regret will descend around you if you aren’t doing good effectively?
A utility function can’t say anything else, in decision theory. Total caring is, roughly speaking, conserved.
The pseudo-utility functions that a hedonic utilitarian projects onto others can introduce more caring for one thing without reducing their caring for other things, but they’re irrelevant in this context. (And if you ask me, a preference utilitarian, they’re not very relevant in the context of utilitarianism either, but never mind that.)
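To make the invariance point concrete, here’s a minimal sketch (just standard vNM utility theory, nothing specific to this thread): if U represents an agent’s preferences, then so does any positive affine rescaling of it, so absolute utility levels carry no information and only ratios of utility differences do.

```latex
U'(x) = a\,U(x) + b \quad (a > 0)
\;\Longrightarrow\;
\frac{U'(x) - U'(z)}{U'(y) - U'(z)} \;=\; \frac{U(x) - U(z)}{U(y) - U(z)}
```

So “caring more” about one outcome is only expressible as caring relatively less about the others within the same function.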