Machine learning and bubbles

Chris Dillow discusses why management jobs seem immune to automation despite all the excitement about AI. The problem, I think, is that the current state of the art is very poorly suited to making strategic decisions, for reasons that are inherent in the way it works and in the nature of the decisions themselves.

So we’re training our model on a pile of data – perhaps recently originated mortgages, why not – and we want to use it to decide how to allocate investments. Presumably, what we want to know is which ones are likely to pay off, so the objective function is the return on investment. By adjusting the weights on the links in its neural network, the machine-learning model will identify which features of the training data set correlate with its objective – i.e. whatever seems to predict the best rate of return.
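To make that setup concrete, here is a minimal sketch of the kind of model I have in mind, assuming a hypothetical loans.csv of recently originated mortgages; the column names and the use of scikit-learn’s MLPRegressor are my illustrative choices, not anything specified above.

```python
# A minimal sketch, assuming a hypothetical loans.csv with one row per
# recently originated mortgage. Column names (ltv, loan_size, rate,
# arrangement_fee, realised_return) are illustrative, not a real dataset;
# categorical fields like documentation status would need encoding first.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

loans = pd.read_csv("loans.csv")
features = loans[["ltv", "loan_size", "rate", "arrangement_fee"]]
target = loans["realised_return"]  # the objective: return on investment

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=0
)

# Backpropagation adjusts the weights until whatever features correlate
# with high realised returns dominate the prediction.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("held-out R^2:", model.score(X_test, y_test))
```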

The problem is that we’re doing this in 2006. The machine learner cannot possibly learn that no-documentation, jumbo-LTV lending in the suburbs of Las Vegas is not a good idea, because the crash hasn’t happened yet. So far, the 2004 vintage of loans is doing just fine, and because those loans are unusually large, carry higher interest rates, and attract big arrangement fees, they are really great earners. Even if we remember that the rates reset in two years’ time and impose a lag, the market is hugely liquid and the 2002 vintage refinanced out in 2004 without trouble. If we trained the model on a much longer data set, that might help, but it would have to be very long indeed, and some of the data would simply be unavailable because the relevant asset classes didn’t exist yet.
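Spelling the timing out in code makes the trap obvious – this continues the sketch above, and the file, the dates, and the lookback are all illustrative:

```python
# Continuing the sketch above: everything the learner will ever see was
# originated before the crash, so no setting of the weights can anticipate it.
import pandas as pd

loans = pd.read_csv("loans.csv", parse_dates=["origination_date"])

# The model's entire world, as of 2006:
train = loans[loans["origination_date"] < "2006-01-01"]

# A "much longer" window soon runs out of history: before roughly this point
# the relevant asset classes did not exist, so there is nothing to learn from.
long_lookback = loans[loans["origination_date"] >= "1998-01-01"]

print(len(train), "loans visible pre-crash;", len(long_lookback), "in the longer window")
```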

There is an important distinction here. We generally understand knowledge to be founded on causation, but there is no notion of causation here, just association. Without causality, there can be no foresight. Backpropagating neural networks dominate the field of AI/ML precisely because trying to program causal knowledge into models turned out to be unworkable. Someone who actually knows what a mortgage is could say that a loan-to-value ratio of 90% is dangerously high and needs special underwriting. An ML model could learn this from data, but only in hindsight (it turns out theirs is 20/20, too). We could train the model to surface loan-level data like that (this is Ajay’s objection here), but this is irrelevant – the problem was not that basic metrics like LTV were unavailable, it was that people weren’t enforcing simple if-then rules like going easy on the 90%+ loans. It wasn’t data that was lacking, it was self-discipline.
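The contrast is easy to see in code. Below is a hedged sketch of the kind of if-then underwriting rule someone with causal knowledge would write down in advance – the 90% threshold and the field names are illustrative, not taken from anywhere in particular:

```python
# A causal, if-then rule encodes what a mortgage *is*: high loan-to-value means
# a thin equity cushion, and no documentation means the stated income may be
# fiction. Neither judgement depends on how well similar loans have performed
# lately, which is exactly what a purely associative model would have to wait for.
def needs_special_underwriting(loan: dict) -> bool:
    return loan["ltv"] >= 0.90 or loan["documentation"] == "none"

loan = {"ltv": 0.95, "documentation": "none", "rate": 0.081}
if needs_special_underwriting(loan):
    print("refer to a human underwriter")  # fires in 2006, before any defaults appear
```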

The thing about a bubble is that the stuff in the bubble is going up. It’s not an illusion that the prices are rising or that people are getting rich. If you decide to be guided purely by evidence, as our model will be, your problem is that the overwhelming majority of the evidence is itself wrong. In some sense, the reality in front of you is at variance with reality at large. It may be a condition of any decision worthy of the name that it is necessary to reject some of the evidence – if the right course were evident, there would not be much of a decision to make. Setting your expectations based on what has happened recently is not a stupid idea – what do you do, sir? – but it is a fragile one.

For an industrial example of this, we could think of Motorola at about the same time. The overwhelmingly strong evidence available to its directors was that people loved the RAZR V3 phones – they were selling in huge numbers! celebrities showed them off! and the company was making money hand over fist! Automated advice would almost certainly have been to double down still further, for the reasons I have just given, and in fact that is what they did, proliferating celebrity endorsements and silly colours, and signing up more and more contract manufacturers to fill the world with product of increasingly patchy quality. What happened next is history.

It’s occasionally observed that as well as actual bots churning out Internet propaganda, there are quite a lot of people who respond to them by behaving like bots. In the end, a working definition of a bubble is what happens when people act like machines would.

1 Comment on "Machine learning and bubbles"


  1. Good piece.

    I think it’s worth noting that so far ML doesn’t look anywhere close to making decisions from scratch in any field. Right now we’re excited by systems that can look at lots of data and notice things – but in reality we already know what the decision would be, and we write it into the system to be taken when the correlations are noticed. One can think about building meta-systems made up of many ML sub-systems, but explainability is a big issue there, which is why the attempts at it so far haven’t gone well.

    The essence of corporate life has been to reduce the decision-making autonomy of lower-level jobs and processes so it can be encoded into SAP or whatever… but… (a) we all know how well that has gone… and (b) it largely didn’t stop where it did because of power (although of course power is a thing); it stopped because, for a lot of decisions, there actually isn’t that much data to go on. This may change in an IoT etc. world, but it’s not at all clear we are there yet.
