However, alongside this excitement, when this advanced technology is not used responsibly, or the right guardrails aren’t in place, major issues surface that impact business continuity and society at large. That is exactly what happened in the Zillow and Unity Software cases, which saw massive financial losses, extensive layoffs, and a loss of trust, all due to a technological snafu that went unnoticed. The most recent issue I want to shine a light on comes from Equifax, which recently reported that it issued lenders wrong credit scores for millions of U.S. consumers.
What is Equifax?
Equifax is one of the largest consumer credit reporting agencies, making up the “Big Three” together with Experian and TransUnion. In other words, Equifax collects account information from various creditors and sells its customers reports on the financial history of different companies and individuals.
What Happened with Equifax’s Credit Score Report?
According to Freddie Mac’s June 1st alert to its clients, Equifax informed them that about 12% of the credit scores it released between March 17th and April 6th were incorrect. Equifax cited a “coding issue” introduced while making changes to its servers as the culprit behind the score error, which led to a nearly 5% drop in its share price.
Due to this technological snafu, the impact was felt in the real world: in some cases, credit scores were off by at least 25 points, affecting nearly three hundred thousand consumers. As a result, many would-be borrowers were wrongfully denied loans or issued loans at a much higher interest rate than their true credit score would warrant.
Machine Learning in the Financial Sector
To capture economic value, players in the financial market ingest massive amounts of data to support a decision-making process that is robust, efficient, and, most importantly, fast. This is why the financial industry has been one of the biggest and earliest adopters of machine learning models in its core business.
As one of the leading consumer credit reporting agencies, Equifax supplies data to many businesses that leverage AI/ML. This latest issue means those businesses may have ingested malformed input data, with real-world implications: models fed the incorrect Equifax data could produce skewed results and, subsequently, bad output.
This is a big no-no for any business looking to make a profit and create a better world by utilizing these game-changing technologies. This example, like many others, highlights the need for reliable monitoring solutions to ensure these impactful models deliver their intended value.
Why ML Monitoring Is Crucial to Trusting AI
This is not the first example showcasing the essential need for businesses to monitor their model performance. Again, I highlight the Zillow and Unity Software cases mentioned earlier, which suffered losses of nearly $500 million and extensive layoffs.
The Equifax case also highlights another vital aspect of validating model behaviour: the need to monitor the model’s inputs for change. In a perfect world, we could assume that the inputs used to generate the features we feed our models are correct and remain stable over time. While this would be ideal, data scientists and ML engineers know it is far from reality. When a model is deployed to production, continuous tracking and monitoring are key to detecting different kinds of errors and drift in its behaviour. If your business trains its own ML models or consumes vulnerable external data, just like Equifax’s incorrect credit scores in this case, it is imperative to leverage model monitoring so that any faults in your ML pipeline are surfaced to the relevant stakeholders and dealt with before they harm your business or your users.
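To illustrate what monitoring an input for change can look like in practice, the sketch below computes the Population Stability Index (PSI), a common drift statistic, in plain Python. The bin count, the 0.2 alert threshold, and the synthetic credit score samples are illustrative assumptions on my part, not details from the Equifax incident.

```python
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between a baseline sample ("expected")
    and a fresh production sample ("actual") of one numeric input."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the baseline sample.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(1 for e in edges if v >= e)] += 1
        # Floor each proportion at eps so the log below stays defined.
        return [max(c / len(values), eps) for c in counts]

    e_prop, a_prop = proportions(expected), proportions(actual)
    return sum((a - e) * math.log(a / e) for a, e in zip(a_prop, e_prop))

# Baseline: credit scores observed during a known-healthy period.
baseline = list(range(300, 850))
# Fresh batch shifted down by 100 points, mimicking a systematic scoring error.
shifted = [s - 100 for s in baseline]

print(round(psi(baseline, baseline), 4))  # identical data -> 0.0
print(psi(baseline, shifted) > 0.2)       # clear shift past the threshold -> True
```

A PSI above roughly 0.2 is a widely used rule of thumb for significant drift; in a real pipeline, crossing it would trigger an alert to stakeholders rather than a print statement.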
How Could Equifax’s Case Have Been Detected Earlier?
As mentioned above, this incident is just one of many possible causes of drift in the inputs on which a model’s features are based.
There are two main approaches to detecting these kinds of cases:
1. Monitor the data behaviour
   - Monitor for data drift in the distribution of the inputs
   - Monitor for relevant custom metrics based upon the inputs
2. Monitor the prediction behaviour
   - Monitor for prediction drift
   - Monitor for relevant custom metrics based upon the predictions
In this specific case, monitoring the overall rate of positive predictions (approved loan requests) could have helped alert on the change in model behaviour that was wrongfully denying legitimate loan requests. In addition to these two approaches, it is always best practice to monitor the behaviour of the features fed into the model as well.
Written by Nimrod Carmel, Product Manager, Aporia