FINRA Technology

Model Validation Toolkit

Validate your machine learning models before deployment and monitor them after deployment

Get Started
View on GitHub

Features

Model Validation
Model Validation Toolkit contains tools that assist in the validation of machine learning models and help assure the quality of a model before it goes to production. Painlessly assess the statistical credibility of performance metrics computed on small samples with the Credibility module, and check out the Metrics submodule for our custom performance and feature importance metrics!
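The sketch below illustrates the statistical idea behind small-sample credibility using a Beta-Binomial model in SciPy: given a handful of labeled examples, how confident can you be that a model's true accuracy clears a target? It is a conceptual example only, not the Credibility module's API; see the docs for the actual interface.

    from scipy import stats

    correct, total = 34, 40                       # small validation sample
    posterior = stats.beta(1 + correct, 1 + (total - correct))  # uniform prior

    print("P(accuracy > 0.80) =", 1 - posterior.cdf(0.80))
    print("95% credible interval:", posterior.ppf([0.025, 0.975]))
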
Model Monitoring
Model Validation Toolkit provides functions for monitoring model performance and data quality in production. Check out the Thresholding module for dynamic thresholding and false negative rate monitoring, and the Supervisor module for monitoring data quality and more!
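The sketch below illustrates the kind of data-quality check this supports: compare a feature's production distribution against its training distribution and alert when the two diverge. It uses SciPy's Jensen-Shannon distance for illustration; the divergence measure, threshold value, and variable names are assumptions for this example, not the Supervisor module's API.

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=10_000)     # feature values at training time
    prod = rng.normal(0.4, 1.0, size=10_000)      # same feature in production (drifted)

    # Histogram both samples on a shared grid so the densities are comparable.
    bins = np.histogram_bin_edges(np.concatenate([train, prod]), bins=50)
    p, _ = np.histogram(train, bins=bins, density=True)
    q, _ = np.histogram(prod, bins=bins, density=True)

    distance = jensenshannon(p, q)
    print(f"Jensen-Shannon distance: {distance:.3f}")
    if distance > 0.1:                            # alert threshold chosen for illustration
        print("Feature drift detected -- investigate before trusting model output")
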
Model Interpretation
Model Validation Toolkit can turn a black-box model into an interpretable one. Check out the Interprenet module to train and evaluate neural networks that are constrained to behave in simple, predictable ways.
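The sketch below shows one way a network can be made interpretable by construction: force all weights to be non-negative so the output is guaranteed to be monotone in its input. It is a plain NumPy illustration of the kind of constraint involved, not the Interprenet module's API.

    import numpy as np

    rng = np.random.default_rng(0)

    def softplus(x):
        return np.logaddexp(0.0, x)               # smooth map to non-negative values

    # Unconstrained parameters; softplus turns them into non-negative weights.
    W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def monotone_net(x):
        h = np.tanh(x @ softplus(W1) + b1)        # non-negative weights + monotone activation
        return h @ softplus(W2) + b2              # output can never decrease as x increases

    x = np.linspace(-3, 3, 7).reshape(-1, 1)
    y = monotone_net(x).ravel()
    print(np.round(y, 3))
    assert np.all(np.diff(y) >= -1e-9), "monotonicity holds by construction"
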

Begin your journey

Check out our docs for a walkthrough guide and instructions on getting started with the toolkit.

Get started