Note: This post is super-quickly written and mostly for reference.
Note 2: This is not my phrase; I first heard it from Sydney von Arx. But I’m not sure if her use of the phrase is the same as mine.
What I mean by the phrase “getting intimate with reality”
you can come up with project ideas abstractly, from first principles.
e.g., “most AI researchers aren’t convinced of AI safety. Therefore we should set up lots of 1-1s with AI researchers to convince them.”
however, this is usually a bad idea if you aren’t intimately familiar with the people, organisations, and systems you are reasoning about, a.k.a. the slice of reality under consideration
in this case, the relevant slice of reality to understand includes ML academia and industry, the researchers themselves, past attempts at having such 1-1 conversations, what it takes to convince someone in a 1-1 more generally, etc.
you could get more intimate with this slice of reality by: being an AI researcher in academia or industry, talking to these researchers, talking to people who’ve tried having such 1-1s before, or trying to convince someone in a 1-1 on this or another issue
what goes wrong if you’re not intimate with reality?
you miss most of the laws and dynamics that govern the slice of reality. E.g., you might think the main dynamic that governs convincing AI researchers is presenting them with arguments they’ve never heard before. In reality, many AI researchers have heard of AI safety, but the arguments weren’t presented to them in a high-fidelity, in-depth, and thoughtful way. (At least this is one plausible dynamic that’s governing their opinion of AI safety.)
that is to say, you’re simply bad at reasoning about project ideas for this slice of reality
This is more true the more complex the given slice of reality is, and the simpler your current model of it is.
Humans are very complex. Imagine how bad you would be at coming up with an idea for a date if you modelled dating as “mate-finding for reproduction” and tried reasoning from first principles (as opposed to relying on your extensive experience with and knowledge about humans, and your rich model of all the laws and dynamics that govern their behaviour).
more examples
a slice of reality you’re probably very intimate with is your own life and problems. This is why other people’s advice is often well-meant but not helpful—they just don’t understand your reality as well as you do
maybe you are intimate with reality in the domain of EA uni group organising. You are better than others at predicting what sorts of events would work and what laws and dynamics govern event attendance. Things you’ve done to get so intimate with reality might include: testing a bunch of events, asking people which events they liked/would like, talking to other organisers, reading on the forum about uni group strategy
maybe you are not intimate with reality in the domain of running a farm. You can imagine from first principles that you will need workers and machinery and seed. But how much of your day-to-day will actually be spent doing administration and funding proposals and dealing with authorities, you don’t know. What sort of culture the workers will have and what their incentive landscape will be for completing tasks on time and properly, you don’t know. Will it be a very chill community, or will you have to worry about peace-making between the staff every day? How often will something break, or too much rain kill your crops, or a policy change take a 20% cut out of your profit? You have no idea. Reasoning from first principles won’t get you very far here.