Thanks Greg, I appreciate the feedback.
Some of this depends on what our goal is here. Is it to maximize ‘prediction’, and if so, why? Or is it something else? … Maybe to identify particularly relevant associations in the population of interest.
For prediction, I agree it’s good to start with the largest set of features (variables) you can find (as long as they are truly ex-ante) and then do a fancy dance of cross-validation and regularization, before doing your final ‘validation’ of the model on set-aside data.
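Roughly, the workflow I have in mind looks like the sketch below (Python/scikit-learn with simulated data; the feature matrix, outcome, and tuning choices are all illustrative, not what anyone actually ran):

```python
# Toy sketch of the predict-then-validate workflow: hold out a test set,
# tune the regularization by cross-validation on the training data only,
# then check predictive performance once on the set-aside data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))               # many candidate features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)

# Set aside validation data before any tuning happens.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Cross-validation picks the penalty using the training data only.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X_train, y_train)

# One final check of predictive power on the set-aside data.
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```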
But that doesn’t easily give you the ability to make strong inferential statements (causal or not) about things like ‘age is likely to be strongly associated with satisfaction measures in the true population’. Why not? If I understand correctly:
The model you end up with, which does a great job of predicting your outcome:
… may have dropped age entirely or “regularized it” in a way that does not yield an unbiased or consistent estimator of the actual impact of age on your outcome. Remember, the goal here was prediction, not making inferences about the relationship of any particular variable or set of variables (see the toy sketch after this list) …
… may include too many variables that are highly correlated with the age variable, thus making the age coefficient very imprecise
… may include variables that are actually part of the ‘age effect’ you cared about, because they are things that go naturally with age, such as mental agility (i.e., mediators of the effect)
Finally, the standard ‘statistical inference’ toolkit (how you quantify your uncertainty) does not work for these learning models, because the same data were used to select and tune the model (although new techniques, such as post-selection inference, are being developed)
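To illustrate the shrinkage and collinearity points, here is a toy sketch in Python/scikit-learn with simulated data (the ‘age’/‘proxy’ setup and the penalty strength are purely hypothetical):

```python
# Toy illustration: a penalized model can predict well while its 'age'
# coefficient is not an unbiased estimate of the true effect, and a
# highly correlated proxy makes the estimate imprecise or split in two.
import numpy as np
from sklearn.linear_model import LinearRegression, ElasticNet

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(size=n)
proxy = age + 0.1 * rng.normal(size=n)    # e.g. 'mental agility', nearly collinear with age
y = 2.0 * age + rng.normal(size=n)        # true age effect = 2.0
X = np.column_stack([age, proxy])

ols = LinearRegression().fit(X, y)
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)

print("OLS age coefficient:        ", ols.coef_[0])   # ~2.0 on average, but imprecise
print("elastic-net age coefficient:", enet.coef_[0])  # shrunk toward zero (biased)
```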
By the way, in this year’s post (or, better yet, the dynamic document here), our predictive models use elastic-net and random-forest modeling approaches with validation: k-fold cross-validation for tuning on training data, with predictive power and model performance measured on set-aside testing data.
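Very roughly, that pipeline looks something like the following sketch (scikit-learn, simulated data; the hyperparameter grid and scoring choices are illustrative, not the ones actually used):

```python
# Sketch: tune a random forest by k-fold CV on training data only,
# then measure performance once on set-aside testing data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=800, n_features=30, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# k-fold cross-validation on the training data, to choose hyperparameters.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [200, 500], "max_features": ["sqrt", 0.5]},
    cv=5,
)
search.fit(X_train, y_train)

# Predictive power measured on the set-aside testing data.
print("test MSE:", mean_squared_error(y_test, search.predict(X_test)))
```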
For missing data, we use a combination of simple imputations (for continuous variables) and coding non-responses as separate categories (for categorical variables).
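In code, that handling looks roughly like this sketch (pandas/scikit-learn; the column names and values are made up for illustration):

```python
# Sketch of the missing-data handling: simple imputation for continuous
# variables, and non-response coded as its own category for categoricals.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({
    "age": [34.0, np.nan, 51.0, 29.0],          # continuous, with a missing value
    "group": ["a", None, "b", "a"],              # categorical, with a non-response
})

# Continuous: simple (here, median) imputation.
df[["age"]] = SimpleImputer(strategy="median").fit_transform(df[["age"]])

# Categorical: code non-response as a separate category before encoding.
df["group"] = df["group"].fillna("no response")
print(df)
```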