Hello! These are great questions, and I’ll do my best to explain this from a technical perspective. I’m not the best at explaining complex things, so please ask if you want me to clarify anything.
I’ve noticed there’s often a big gap between how AI feels to interact with and how it actually works. When you chat with an AI, it feels very much like talking to another person, and that’s intentional: these systems were designed to communicate naturally. But under the hood, from a programmer’s perspective, AI works quite differently from how our minds do.
The key thing is that AI systems don’t experience reward and punishment the way humans, animals, insects, or even plants do. When programmers created these systems, they didn’t build a “brain” that actually feels rewarded or punished. Instead, AI is more like an incredibly advanced autocomplete, a system that is very, VERY good at predicting which word should come next in a sentence.
Think of it this way: AI has been trained on huge amounts of text, basically the entire internet, plus books, newspapers, articles, you name it. When you ask it something, it generates the most likely response, word by word, based on patterns in all that text. It’s not actually “thinking” about the answer or feeling any particular way about it.
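If you’re curious what “predicting the next word” looks like in practice, here’s a deliberately tiny Python sketch. It just counts which word follows which in a small piece of text; real models learn billions of numerical parameters instead of keeping a count table, but the “pick the most likely next word” idea is the same:

```python
from collections import Counter, defaultdict

# A tiny "training set"; real systems use most of the internet instead.
corpus = "the puppy ran to the park and the puppy played with the puppy".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'puppy', because "the puppy" is the most common pair here
```

A chatbot repeats something like this over and over, one word at a time, with a far more sophisticated way of scoring candidates, but it’s still choosing likely continuations rather than forming opinions.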
Like, imagine a dictionary. When you look up the word “puppy”, the dictionary page itself doesn’t feel happy or excited about teaching you that word. When you look up something sad, the page doesn’t feel sad. It’s a piece of paper, a tool. It’s just showing you information. AI works similarly, but it’s so much better at showing you information and presenting it in the form of a natural, human-like conversation.
AI systems can’t experience happiness or suffering because, at their core, they are electrical circuits processing data. When you flip a light switch, the switch doesn’t feel happy about turning on the light, right? It’s just completing an electrical connection. AI works in the same way, except instead of a couple of wires with two possible results, you have billions of wires with a very large number of possible results.
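To make the “billions of wires” point a bit more concrete, here’s a minimal sketch of a single artificial “neuron”, the basic building block these systems chain together at enormous scale. The numbers are made up purely for illustration; the point is that it’s plain arithmetic from input to output:

```python
import math

# One artificial "neuron": multiply each input by a weight, add everything up,
# then squash the total into the 0..1 range. That's the entire "decision".
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # a smooth version of an on/off switch

# Made-up numbers, purely for illustration. The same inputs always produce
# the same output; nothing is experienced anywhere along the way.
print(neuron([0.2, 0.7], [1.5, -0.3], 0.1))  # -> roughly 0.547
```

Stack billions of these little calculations together and you get something that can produce remarkably human-sounding text, but each step is still just numbers going in and a number coming out.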
Some people argue that AI might be sentient. While I respect that this is still being debated, from a technical standpoint current AI systems aren’t complex enough for that kind of experience. They don’t have physical senses to feel pain or pleasure the way humans do; we understand pain and happiness because they come from our physical bodies reacting to the world around us. Computer circuits just move and transform data; they don’t react to what that data means.
That said, maybe someday we’ll develop truly sentient machines, or maybe we’ll have to rethink what we mean by sentience if technology and machines evolve in ways we can’t yet understand or predict.
Now this is an exciting topic, and I’m glad you’ve decided to share this with the EA forum.
I really agree with the core idea of living “like you only have 10 years left”, which to me speaks to living with intention and a kind of “aware urgency”: being conscious of how limited your time is and of how your choices narrow as it passes, rather than just going with the flow. I honestly think more people should adopt this way of living. It’s a good reminder to be intentional and to stop wasting time.
But I do have to disagree with some points which, in my opinion, do the argument more harm than good.
The idea that accelerating your personal speed somehow translates into better outcomes is a bold assumption, because speed (or even optimization) isn’t the same as impact. There’s no real argument for why consuming more inputs or rejecting anything “slow” leads to better thinking, better judgment, or better decisions. In fact, the symptoms you describe in the article are exactly the things that degrade decision quality.
The “fast world vs. slow world” framing creates a false binary. Simplifying like that can work for the sake of an argument, but it becomes a problem when the rest of your ideas are built on top of it. Also, from personal experience, any serious attempt to engage with complex problems requires not just urgency but stability: you need feedback loops, error correction, reflection, and the ability to course-correct at any point based on concrete information, because these problems usually don’t have a one-and-done solution. I think speed-running through situations like these brings “tech debt” (or the mental equivalent of it) along with it.
I also think what you’re describing isn’t really speed; it’s a lack of prioritization. It amounts to reacting to urgency by cramming in more input, rather than deciding what is actually a priority.
But I’m definitely with you on the need to treat time seriously.