Your advice to talk to people is probably most important to me! I haven’t tried that a lot, but when I did, it was very successful. One hurdle is not wanting to come off as too stupid to the other person (though there are also people who make me feel sufficiently at ease that I don’t mind coming off as stupid), and another is not wanting to waste people’s time. So I want to first be sure that I can’t just figure it out myself within ~ 10x the time. Maybe that’s a bad tradeoff. I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time interferes with both of our interests in chatting. (Maybe they have the same reluctance, and both of us would be happier if we didn’t have it. Can we have a Reciprocity.io for talking about research, please? ^^)
Typing speed: Haha! You can test it here, for example: https://10fastfingers.com/typing-test/english. I’ve been stagnating at ~ 60 WPM for years now. Maybe there’s some sort of distinction whereby some brains are more optimized toward (e.g., because of worse memory) or incentivized to optimize toward (e.g., through positive feedback) fewer low-level concepts, and others more toward high-level concepts. So when it comes to measures of performance that have time in the denominator, the first group hits diminishing marginal returns early while the second keeps speeding up for a long time. Maybe the second group is, in turn, less interested in understanding from first principles, which might make them less innovative. Just random speculation.
Obvious questions: Yeah, I’ve been wondering how it can be that many people now independently come up with cases for nonhuman rights and for altruism regardless of distance, while a century ago seemingly almost no one did. Maybe most such cases are simply lost to history, and I just don’t know about the ones that survive (though I can think of some examples). Or maybe culture was so different that a lot of the frameworks these ideas attach to weren’t there. So if moral genius is, say, normally distributed, then values-spreading could have the benefit that it increases the number of people who use the relevant frameworks, and thereby also increases the absolute number of moral geniuses who work within those frameworks. The values would have to be sufficiently cooperative not to risk zero-sum competition between values. I suppose that’s similar to Bostrom’s MegaEarth scenario, except with the number of people who share certain frameworks in their thinking rather than the pure number of people.
Getting work done when tired: Well, to some degree I noticed that I over-update on tiredness, and then get into a negative feedback loop where I give up on things too quickly because I think I’m too tired to do them. At that point I’m usually not actually particularly tired.
(Sorry for barging in on this thread :D)
Regarding talking to people to get early feedback, get up to speed in a field, etc., you might find this post useful (if you haven’t already seen it).
I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time interferes with both of our interests in chatting.
I find this relatable. On a related note, in the above-linked post, Michelle Hutchinson (the author) wrote:
Try to make the conversation concise, and to avoid going over the time allocated. I really appreciate when people do this when I’m talking to them, because it means I can focus on thinking through the ideas rather than also making sure that we’re sticking to the agenda and get to everything.
I commented that I’d slightly push back on that passage, saying:
I think it makes sense for this to be the default way one approaches conversations in which one is seeking advice. But I think a decent portion of advice-givers would either be ok with or actually prefer a more loose / lengthy / free-wheeling / non-regimented conversation.
There have been a few times when I’ve arranged to talk to someone I perceived as very busy and important, and so I’ve tried to be very conscious of their time and give them opportunities to wrap things up, but they repeatedly opted to keep talking for a surprisingly long time. And they seemed genuinely happy with this, and I ended up getting a lot of extra value out of that extra time.
So I think it’s probably good to be open to signs that one’s conversation partner is ok with or prefers a longer conversation, even if one shouldn’t assume they are.
Thanks! Yeah, I sometimes wonder about that. I suppose in rationality-adjacent circles I can just ask what someone’s preference is (free-wheeling chat or no-nonsense and to the point). Maybe that’d be a faux pas or weird in general, but I think it should be fine among most EAs?