Don’t talk about simulation theory if you want people to be concerned about AI safety?
I watched Roman Yampolskiy on Diary of a CEO: https://www.youtube.com/watch?v=UclrVWafRAI&list=WL&index=2
I found him very credible and convincing. And he was making some strong arguments. My lizard brain thought, “This guy is talking seriously; he’s not wearing any silly hats or making goofy facial expressions.”
Then at https://www.youtube.com/watch?v=UclrVWafRAI&list=WL&index=2&t=3370s he was asked about simulation theory, and he said he thinks it's almost certain that we're living in a simulation. This threw me off in two ways, neither of which was necessarily rational.
1. It gave me somewhat less belief in his main claims, because it made me think, “This guy is the sort of person who is inclined to believe far-fetched, radical, ~pessimistic theories.”
2. It demotivated me because, fundamentally, if this life and this world are a simulation, they simply do not seem as important to try to preserve. This may be related to the “drop in a bucket” effect: according to that theory, when people see that the problem they are dealing with is just a very small part of a much larger problem, they see solving it as less valuable.
3. (Also: a simulation makes the universe of cause and effect seem a lot less clear. You ask me to believe some unintuitive things about AIs far surpassing human intelligence and taking over the light cone. Then you suggest I take steps to try to slow down AI progress. But if I’m actually in some simulation, it could be that when this really strange thing happens, the simulators change the rules of the game or something.)
I’m not sure that immediately jumping to critiquing the messaging makes sense here.
If hearing someone’s strong belief in simulation theory lowers your trust in their AI safety views, the ‘obvious’ first step seems to be to lower your trust in Yampolskiy’s AI safety views.
And if the simulation theory makes AI safety feel less motivating, the ‘obvious’ first step seems to be to reduce your motivation in proportion to how credible you find the theory.
Fair; I guess that’s getting a bit more towards a soldier rather than a scout mindset.
But I suspected that this part of my reaction was largely emotional and irrational. Especially the ‘drop in the bucket’ aspect.