Making Sense of Machine Learning
We introduce a framework that demystifies how machine learning models “think” about investing.
It is easier to trust machine learning predictions when we understand them better.
Machine learning (ML) enables powerful algorithms to analyze financial data in new and exciting ways. But this excitement is often tempered by the fear that investors don’t really understand why a model behaves the way it does. We need to move beyond this “black box” stigma. We propose a framework that demystifies the predictions of any ML algorithm. Our approach computes what we call a “fingerprint” for a given model: a decomposition of the linear, nonlinear, and interaction effects that drive its predictions, and ultimately its investment performance. In a real-world case study of currency return prediction, we find that popular ML models such as neural networks and random forests “think” in ways that make intuitive sense and that we can begin to understand. These fingerprints empower investors to describe and probe the similarities and differences across ML models, and to extract genuine insight from machine-learned rules.
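To make the idea of a model “fingerprint” concrete, the sketch below shows one plausible way to separate a feature’s linear and nonlinear effects: trace the model’s partial-dependence curve for that feature, fit a straight line to it, and score the linear component as the variation explained by the line and the nonlinear component as the deviation from it. This is an illustrative construction under our own assumptions, not the article’s exact methodology; the function names, the toy two-feature model, and the specific averaging choices are all hypothetical.

```python
import numpy as np

def partial_dependence(model, X, k, grid):
    """Average model prediction as feature k is swept across grid values."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, k] = v          # hold feature k fixed at v for every sample
        pd.append(model(Xv).mean())
    return np.array(pd)

def fingerprint(model, X, k, n_grid=20):
    """Split feature k's effect into linear and nonlinear components
    (an illustrative decomposition, not the article's exact formulas)."""
    grid = np.linspace(X[:, k].min(), X[:, k].max(), n_grid)
    pd = partial_dependence(model, X, k, grid)
    # Best straight-line fit to the partial-dependence curve.
    slope, intercept = np.polyfit(grid, pd, 1)
    linear_fit = slope * grid + intercept
    # Linear effect: how much the fitted line moves around its mean.
    linear = np.mean(np.abs(linear_fit - pd.mean()))
    # Nonlinear effect: how far the curve deviates from the line.
    nonlinear = np.mean(np.abs(pd - linear_fit))
    return linear, nonlinear

# Toy "model": purely linear in feature 0, purely quadratic in feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
model = lambda X: 2.0 * X[:, 0] + X[:, 1] ** 2

lin0, non0 = fingerprint(model, X, 0)   # large linear, near-zero nonlinear
lin1, non1 = fingerprint(model, X, 1)   # small linear, large nonlinear
```

On this toy model the fingerprint recovers what we built in: feature 0 registers almost entirely as a linear effect and feature 1 almost entirely as a nonlinear one, which is the kind of readable signature the framework aims to extract from an otherwise opaque model.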