Thanks. I should say that I didn’t mean to endorse stepwise selection when I mentioned it (for reasons Gelman and commenters note here), but I thought it might be something one would have tried, given that it is the variable selection technique available ‘out of the box’ in programs like Stata or SPSS (it is something I used myself when I started doing this kind of work, for example).
Although not important here (but maybe helpful for next time), I’d caution against leaning too heavily on goodness-of-fit statistics (e.g. AIC going down, R² going up) when assessing the model, as one tends to end up over-fitting. I think the standard recommendations are something like:
Specify a model before looking at the data, and caveat any further explanations as post hoc (which sounds like essentially what you did).
Split your data into an exploration set and a confirmation set: play with whatever you like on the former, then fit the model you think is best on the latter and report those findings (better, although slightly trickier, are things like k-fold cross-validation rather than a single hold-out).
Use LASSO, ridge regression, or related regularisation methods if you are going to select predictors ‘hypothesis free’ on your whole data (the sketch below combines this with the split from the previous point).
(Further aside: multiple imputation methods for missing data might also be worth contemplating in the future, although it is a tricky judgement call.)
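To make those two points concrete, here is a minimal sketch (in Python with scikit-learn, since that is what I had to hand; the data file and column names are made up for illustration) of splitting into exploration and confirmation halves and letting cross-validated LASSO pick the predictors on the exploration half only:

```python
# Minimal sketch: exploration/confirmation split, with LASSO tuned by
# k-fold cross-validation on the exploration half only.
# The CSV path and column names below are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("survey.csv")                                # hypothetical data file
X, y = df.drop(columns="satisfaction"), df["satisfaction"]    # hypothetical columns

# Exploration half for playing around; confirmation half is looked at once.
X_explore, X_confirm, y_explore, y_confirm = train_test_split(
    X, y, test_size=0.5, random_state=0)

# LASSO with the penalty chosen by 5-fold cross-validation, 'hypothesis free',
# fitted on the exploration half only.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
lasso.fit(X_explore, y_explore)

# Report findings on the confirmation half only.
print("confirmation R^2:", lasso.score(X_confirm, y_confirm))
print("selected coefficients:", lasso[-1].coef_)
```

Ridge or elastic net drop in by swapping the estimator; the key discipline is that the confirmation half only gets looked at once.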
Thanks Greg, I appreciate the feedback.
Some of this depends on what our goal is here. Is it to maximize ‘prediction’, and if so, why? Or is it something else … maybe to identify particularly relevant associations in the population of interest?
For prediction, I agree it’s good to start with the largest number of features (variables) you can find (as long as they are truly ex ante) and then do a fancy dance of cross-validation and regularisation, before you do your final ‘validation’ of the model on set-aside data.
But that doesn’t easily give you the ability to make strong inferential statements (causal or not) about things like ‘age is likely to be strongly associated with satisfaction measures in the true population’. Why not? If I understand correctly:
The model you end up with (the one that does a great job of predicting your outcome) …
… may have dropped age entirely, or “regularized it” in a way that does not yield an unbiased or consistent estimator of the actual impact of age on your outcome; remember, the goal here was prediction, not making inferences about the relationship between the outcome and any particular variable or set of variables (the toy example after this list illustrates this) …
… may include too many variables that are highly correlated with the age variable, thus making the age coefficient very imprecise
… may include variables that are actually part of the ‘age effect’ you cared about, because they are things that go naturally with age, such as mental agility
Finally, the standard ‘statistical inference’ (how you can quantify your uncertainty) does not work for these learning models (although there are new techniques being developed)
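As a toy illustration of the first point (this is a made-up simulation, nothing from our data): with two correlated predictors and a true age effect of 0.5, plain OLS recovers something close to the truth, while LASSO pulls the age coefficient towards zero even though the fitted model still predicts well.

```python
# Toy simulation (invented for illustration): LASSO shrinks a coefficient that
# OLS estimates close to its true population value, which is why the
# regularised coefficient is not a good basis for inferential statements.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n = 500
age = rng.normal(size=n)
agility = 0.8 * age + rng.normal(scale=0.6, size=n)   # correlated with age
y = 0.5 * age + 0.5 * agility + rng.normal(size=n)    # true age effect = 0.5

X = np.column_stack([age, agility])
print("OLS coefficients:  ", LinearRegression().fit(X, y).coef_)
print("LASSO coefficients:", Lasso(alpha=0.3).fit(X, y).coef_)
# The LASSO coefficients are pulled towards zero (age may be dropped entirely
# at larger penalties), even though the model still predicts y reasonably well.
```

So the regularised coefficient is the wrong object to report if the question is about the population association, even when prediction is the right criterion for choosing the model.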
By the way, in this year’s post (or, better yet, see the dynamic document here), our predictive models use elastic-net and random-forest modeling approaches with validation: cross-validation for tuning on the training data, with predictive power and model performance measured on set-aside testing data.
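For anyone curious what that workflow looks like in rough outline, here is an illustrative sketch in Python/scikit-learn (this is not the actual code from the dynamic document; the file and column names are invented):

```python
# Sketch of the workflow described above: elastic-net and random-forest models
# tuned by cross-validation on the training data, with performance measured on
# set-aside test data. Data file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("survey.csv")
X, y = df.drop(columns="satisfaction"), df["satisfaction"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

models = {
    "elastic net": GridSearchCV(
        make_pipeline(StandardScaler(), ElasticNet(max_iter=5000)),
        {"elasticnet__alpha": [0.01, 0.1, 1.0],
         "elasticnet__l1_ratio": [0.2, 0.5, 0.8]},
        cv=5),
    "random forest": GridSearchCV(
        RandomForestRegressor(random_state=0),
        {"max_depth": [3, 6, None], "n_estimators": [200, 500]},
        cv=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)                             # tuning via CV on training data
    print(name, "test R^2:", model.score(X_test, y_test))  # set-aside evaluation
```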
For missing data, we do a combination of simple imputations (for continuous variables) and ‘coding non-responses as separate categories’ (for categorical data).
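Again only as an illustrative sketch (the real details are in the dynamic document; the column names here are invented), that missing-data handling amounts to something like:

```python
# Sketch of the missing-data handling described above: simple imputation for
# continuous columns, and treating non-response as its own category for
# categorical columns. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey.csv")

continuous = ["age", "income"]
categorical = ["gender", "employment_status"]

# Simple (median) imputation for continuous variables.
df[continuous] = df[continuous].fillna(df[continuous].median())

# Non-responses become an explicit category for categorical variables.
df[categorical] = df[categorical].fillna("no response")
```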