Some quick thoughts from Monitaur CEO Anthony Habayeb on the stories we're watching in this issue of the newsletter. Read the video transcript here.
This unfortunate story about the Detroit Police Department's arrest of Robert Julian-Borchak Williams is the first known case of a wrongful arrest driven by flawed facial recognition software. It highlights the very real, very personal consequences of uncontrolled, unvalidated algorithms.
The software provider's general manager admitted that the company "does not formally measure the systems' accuracy or bias." Without proper ML governance and controls, this is likely to be the first of many such stories. It is inevitable that such high-profile injustices will accelerate the regulatory pathways for machine learning models.
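To make that admission concrete, below is a minimal sketch of the kind of disparity check the vendor reportedly does not perform: comparing match accuracy and false-positive rates across demographic groups. The data and group labels here are synthetic illustrations, not real benchmark results.

```python
# A minimal sketch of a formal accuracy/bias check for a matching system.
# Synthetic data: y_true marks genuine matches, y_pred the system's calls.
import numpy as np

y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0])
group  = np.array(["A"] * 6 + ["B"] * 6)  # illustrative demographic groups

for g in np.unique(group):
    mask = group == g
    accuracy = (y_pred[mask] == y_true[mask]).mean()
    # False-positive rate: non-matches wrongly flagged as matches --
    # the error mode behind a wrongful arrest.
    negatives = mask & (y_true == 0)
    fpr = y_pred[negatives].mean() if negatives.any() else float("nan")
    print(f"group {g}: accuracy={accuracy:.2f}, false_positive_rate={fpr:.2f}")
```

A governance program would run checks like this on representative data before deployment and monitor the same metrics in production.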
In April, the FTC published valuable guidance on how existing law applies to the use of AI and ML. The rules of thumb are:
- Be transparent.
- Explain your decisions to the consumer.
- Ensure that your decisions are fair.
- Ensure that your data and models are robust and empirically sound.
- Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination.
The NAIC, the state-based standard-setting organization for insurance, held three public meetings of its Accelerated Underwriting Working Group (AUWG) in Q1 2020. Expectations coming out of those meetings include more regulatory focus on data use, algorithm development, consumer transparency, and governance/controls.
The NAIC has a separate working group developing new standards for regulatory approval of predictive models. It is more narrowly focused on Generalized Linear Models (GLMs) but is likely to feed into, and perhaps complement, the AUWG's work. Insurance companies will need to retool their processes and systems to meet emerging regulatory expectations, and regulators in turn will need to upskill to evaluate ML systems effectively.
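For a sense of the material such standards would cover, here is a minimal, illustrative GLM: a Poisson claim-frequency model of the kind insurers file for review. The variables and data are synthetic assumptions, not drawn from the NAIC's work.

```python
# An illustrative Poisson GLM for claim frequency, fit on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
driver_age = rng.uniform(18, 80, n)
vehicle_age = rng.uniform(0, 15, n)
X = sm.add_constant(np.column_stack([driver_age, vehicle_age]))

# Simulate claim counts whose expected rate varies with both variables
true_rate = np.exp(0.5 - 0.02 * driver_age + 0.03 * vehicle_age)
claims = rng.poisson(true_rate)

model = sm.GLM(claims, X, family=sm.families.Poisson()).fit()
# Coefficients, standard errors, and fit statistics: the artifacts a
# regulator reviewing a rate filing would scrutinize.
print(model.summary())
```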
An excellent summary and analysis of FDA actions on AI over the last year. Most of the attention thus far has centered on premarket certification, while the post-market pieces are still in motion. Looking at it through an assurance lens, a couple of key takeaways jump out:
A quick but valuable read covering:
This wide-ranging and accessible article dives into the movement for "Explainable AI", exploring the practical, psychological, and regulatory dimensions of explainability. Explainability is a prerequisite for ML assurance, and counterfactuals are emphasized as core to the next step, auditability; a rough sketch of the counterfactual idea follows the quote below.
Money quote: "In the absence of clear auditing requirements, it will be difficult for individuals affected by automated decisions to know if the explanations they receive are in fact accurate or if they’re masking hidden forms of bias."
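As a rough illustration of the counterfactual idea, the sketch below searches for the smallest change to a single input that flips a toy model's decision. An auditor could compare such counterfactuals against the explanations a firm actually gives consumers. The model, feature names, and search strategy are illustrative assumptions, not the article's method.

```python
# A toy counterfactual search: nudge one feature until the decision flips.
import numpy as np

def model(x):
    # Toy credit-style scorer: approve (1) if the weighted score clears 0.5.
    weights = np.array([0.6, 0.4])  # [income, credit_history], illustrative
    return int(weights @ x >= 0.5)

def counterfactual(x, feature, step=0.01, max_steps=200):
    """Increase one feature until the decision flips; return the new input."""
    original = model(x)
    cf = x.copy()
    for _ in range(max_steps):
        cf[feature] += step
        if model(cf) != original:
            return cf
    return None  # no flip found within the search budget

applicant = np.array([0.3, 0.5])           # score 0.38 -> denied
cf = counterfactual(applicant, feature=0)  # how much more income is needed?
if cf is not None:
    print(f"Decision flips if income rises from {applicant[0]:.2f} to {cf[0]:.2f}")
```

If the explanation a consumer received differs from what the model's counterfactuals show, that gap is exactly the hidden bias the quote warns about.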
For a deeper introduction to Machine Learning Assurance (MLA), download the white paper.