I’m feeling pretty good about this one, but—as always—if you think I’m focusing in the wrong areas or that I should focus more in one particular area, I’d like to know. Your comments last quarter were helpful in making me focus more on learning.
Also, if anyone has any suggestions for improving areas I feel weak at (consistency in general; consistency in exercising, sleeping, and socializing in particular), I’d love to hear them.
I’m open to any comments, really.
Things sound like they’re going well. Here are some thoughts:
make sure to get extra feedback on future startup ideas from people who have different startup and life experience than you. Assume your ideas are terrible, and frequently unsalvageably so, and that the challenge is to find out why. Until significant and diverse feedback is incorporated, one’s prior should be that this is the case for the stealth startups too.
have you considered meeting online with people who want to “study data science for good”? That’s something I’d like to see, and that I think some altruists might be motivated by: Brayden, Alex Robson, Marek Duda, and occasionally me, to name a few. One could even usefully recruit high-impact analytic, altruistic people.
The brand-new Johns Hopkins applied machine learning course in R is great and you’ve probably reached the appropriate level for it.
The main thing that I think would increase your impact is still meeting people in SF, to move your understanding of some EA and rationality concepts to the gut level and practice implementing them. Although there are no reliable generators of planning or prioritisation insights, it’s a good candidate.
Good luck!
Yeah, that’s good advice. Sort of like a project pre-mortem.
-
Sounds good, but I’m not sure what we’d do. Any suggestions?
-
I’ll have to give it a look through. A lot on my “to learn” plate. :)
-
Yeah, I agree. I’ll have to come visit sometime, either for the EA Summit or for an impromptu trip.
-
Do you have any particular concepts in mind that you think I might be missing? Certainly I have some, if not many, but I’m curious what you think.
Presumably neither of us knows most of the things that are known about EA and rationality… You probably know more about EA than rationality, more about animals than tech risks, and more about EA theory than EA orgs? One insight that I picked up in my travels is that in a certain sense, asteroid detection is the most ‘robust’ cause, since we know a lot more about how to do it, compared to entering a complex human system like global poverty. It’s an interesting meditation on whether we should pivot to asteroid deflection, whether we want ‘robustness’, and what people mean by ‘robustness’.
Seems like another uncharitable implicit argument against the EAs known for favouring robustness (GiveWell, the Vancouverites, people skeptical about leafleting and metacharities and xrisk on those grounds). I’ve heard experts say the most important parts of asteroid detection are fully funded. If they weren’t, people would generally accept funding them as a priority.
I’m not trying to say folks who espouse robustness are fools; until I encountered it, I had not thought of this line of reasoning myself. As I understand it, the point is that the connotations of such words can sometimes lead in different directions than more careful thought would. Yes, >1km asteroid detection is well covered now. So is the next thing to move onto asteroid deflection? You can see how the argument would run: since physical annihilation is so final and well understood, it wins on robustness grounds...