Hi Rafael, thanks for the post!
I have a few thoughts to share; I will post them as separate comments to help structure the discussion.
I think I am personally
Living like we only have 5 years left
(and I’d love others to embrace it). However, I am making choices that differ from your own, so I’ll put some of them here in the comments to highlight that being on board with the principle will yield different results depending on one’s preferences.
So here is me putting my money where my mouth is[1]:
having kids
treating time spent with friends as if it were the last opportunity ever
this usually results in making the effort to fly to meet them
offering to pay for friends and family to visit me (symmetric with the one above, but less conventional)
getting a driver’s license with the expectation that it will only be useful for a few years, not a lifetime
saying what I think and writing it down publicly, including writing these comments now
trying to be less risk-averse in general
I guess I also have an underlying intuition that we are about to enter a period of turmoil, so I am trying to take advantage of functioning infrastructure, like commercial flights or mortgages, while it lasts.
and I guess putting my mouth back there too?
Hamming questions
I hope you won’t mind me asking you a few Hamming questions in the spirit of hyper-prioritization. (Feel free to ignore this; it feels like quite an aggressive move for me to ask, and I’d be happy to chat 1:1 too.)
- Why does becoming a public intellectual fall under “things that will greatly positively change the world or your personal life”? For you personally and for others.
- Why is it worth trying to be a polymath in the 21st century?
To answer the two questions: for me as a philosopher, I think this is where I can have the greatest impact, compared to writing technical work on very niche subjects, which would probably not matter much. Consider how the majority of the impact of Peter Singer, Will MacAskill, Toby Ord, Richard Chappell, or Bentham’s Bulldog has been a mix of new ideas and public advocacy for them. I could say something similar about other types of intellectuals like Eliezer Yudkowsky, Nick Bostrom, or Anders Sandberg.
I think polymathy is also where the comparative advantage often lies for a philosopher. In my case, I’m not so good at technical topics that I would greatly excel at a niche area such as population ethics. I can, however, draw from other fields and learn, for example, how particular moral intuitions might be unreliable. And what might feel like advocating for a relatively small change in moral beliefs (e.g. what we do about insect suffering, or the potential suffering of digital minds) could change future societies greatly.
Yet I don’t disregard specializing in one thing. I’m currently working on my PhD, which is a very specialized project.
And I would give very different advice if I were working on AI safety directly. In that case, digging deep into a topic to become a world expert or to have a breakthrough might be the best way to go.
Short-timeline pill
I have also found it hard to short-timeline-pill family and friends. I try when asked for advice about the future, but mostly so that I feel I am being true to myself, not to convince anyone.
[1]
It is quite impressive how avoidant people are of this topic, even when philosophising about alternatives to capitalism, or when deciding what to do when faced with golden handcuffs after their startup gets acqui-hired.
Fast vs Slow
I find it interesting that you feel promoting the fast-world mindset might be rude or cause a backlash, because to me that feels like a mainstream view. A lot of advice on how to cope with AI is essentially equivalent to “you need to try harder”, maybe with some qualifiers about what exactly that might look like.[1]
I’d say that I am hyper-prioritizing the Slow World because it is what makes life worth living. And if there is not much life left, it is all the more important to have good experiences while it is still possible.
RE: “I don’t care much about things that I consider somewhat trivial. These include hanging out with friends at the pub, people getting married, or stuff like that. I care about the Big Things (the “Big Questions” in philosophy, politics, morality, physics, biology, psychology, big historical trends, technology), and I care about them on a global or even cosmic scale.”
I am curious, why do you care about Big Things without small things? Are Big Things not underpinned by values of small everyday things?
That was my impression, for example, from the “Planning a career in the age of A(G)I—w Luke Drago, Josh Landes & Ben Todd” event in April.
RE: “I am curious, why do you care about Big Things without small things? Are Big Things not underpinned by values of small everyday things?”
Perhaps it has to do with the level of ambition. Let’s talk about a particular value to narrow down the discussion. Some people see “caring for all sentient beings” as an extension of empathy. Others see it as a logical extension of a principle of impartiality or equality for all. I think I am more in the second camp. I don’t care about invertebrate welfare, for example, out of any particular empathy towards invertebrates; most people find bugs a bit icky, particularly under a magnifying glass, which turns off their empathy.
Rather, I care because they are suffering sentient beings, which means that the same arguments for why we should care about people (and their wellbeing/interests/preferences) also apply to these invertebrates. And caring about, say, invertebrate welfare requires a use of reason in favour of impartiality that might sometimes make you de-prioritize friends and family.
Secondly, I also have a great curiosity about understanding the universe, society, etc., which makes me feel like I’m wasting my time in social situations with friends and family when the conversation topics are a bit trivial.
As I note a few times throughout the post, I realize I might be a bit of a psychological outlier here, but I hope people can also see why this perspective might be appealing. Most people compartmentalize their views on AI existential risk to a degree that I’m not sure makes sense.