Fast vs Slow

I find it interesting that you feel promoting the Fast World mindset might be rude or cause a backlash, because to me that feels like a mainstream view. A lot of advice on how to cope with AI is essentially equivalent to “you need to try harder”, maybe with some qualifiers about what exactly that might look like.[1]
I’d say that I am hyper-prioritising Slow World because it is what makes life worth living. And if there is not much life left, isn’t it even more important to have good experiences while it is still possible?
I don’t care much about things that I consider somewhat trivial. These include hanging out with friends at the pub, people getting married, or stuff like that.
I care about the Big Things (the “Big Questions” in philosophy, politics, morality, physics, biology, psychology, big historical trends, technology), and I care about them on a global or even cosmic scale.
I am curious, why do you care about Big Things without small things? Are Big Things not underpinned by values of small everyday things?
RE: “I am curious, why do you care about Big Things without small things? Are Big Things not underpinned by values of small everyday things?”
Perhaps it has to do with the level of ambition. Let’s talk about a particular value to narrow down the discussion. Some people see “caring for all sentient beings” as an extension of empathy. Others see it as a logical extension of a principle of impartiality or equality for all. I think I am more in this second camp. It’s not that I care about invertebrate welfare, for example, because I am particularly empathetic towards them; most people find bugs a bit icky, particularly under a magnifying glass, which turns off their empathy.
Rather, I care because they are suffering sentient beings, which means that the same arguments for why we should care about people (and their wellbeing/interests/preferences) also apply to these invertebrates. And caring about, say, invertebrate welfare requires using reason to extend impartiality in ways that might sometimes make you de-prioritize friends and family.
Secondly, I am also deeply curious about understanding the universe, society, etc., which makes me feel like I’m wasting my time in social situations with friends and family when the conversation topics are a bit trivial.
As I repeat a bit throughout the post, I realize I might be a bit of a psychological outlier here, but I hope people can also see why this perspective might be appealing. Most people compartmentalize their views on AI existential risk to a degree that I’m not sure makes sense.
[1] That was my impression, for example, from the “Planning a career in the age of A(G)I—w Luke Drago, Josh Landes & Ben Todd” event in April.