Oftentimes, it seems to me, machine learning models reveal solutions or insights that, while researchers may already have known them, are closely linked to the problem being modelled. In your experience, does this happen often with ML? If so, does that mean ML is a very good tool for Effective Altruism? If not, then where exactly does this tendency come from?
(As an example of this ‘tendency’: this study used neural networks to find that estrogen exposure and folate deficiency were closely correlated with breast cancer. Source: https://www.sciencedirect.com/science/article/abs/pii/S0378111916000706 )