Some quick thoughts from Monitaur CEO Anthony Habayeb on the stories we're watching in this issue of the newsletter. Read the video transcript here.
The National Association of Insurance Commissioners (NAIC), the standard-setting body for insurance regulators in the United States, took a positive step toward enhanced regulation of artificial intelligence and machine learning systems. The core principles it established are represented in the acronym FACTS: fair and ethical; accountable; compliant; transparent; and secure, safe, and robust.
While not explicitly a regulatory agency, the National Institute of Standards and Technology (NIST) has massive reach and influence, so its request for public comment on newly created principles for explainable AI shows the current groundswell of attention to AI governance. The title of this article undersells some of the deeper content covered after the principles themselves, which align with many other groups' formulations: systems should provide evidence for every output, be understandable to their users, reflect the systems' actual processes, and operate only within their design constraints. Jonathon Phillips, one of the authors, expounds on challenges in the field of AI explainability, since different users have different expectations of comprehensibility. He also questions the reliability of human explanations, even imagining a future in which machines strengthen our human capabilities in this area.
The Consumer Financial Protection Bureau (CFPB) wants to broaden access to credit in the United States, and the institution is seeking comment on a wide range of related topics. Curiously, artificial intelligence and machine learning are framed solely as potential solutions to unequal access to credit and the broader inequities in society. Of course, it is just as likely, if not more so, that AI and ML will contribute to inequity and inequality without proper safeguards and regulatory enforcement.
From the Institute for Ethical AI & Machine Learning, this set of principles for the responsible development of ML systems is well worth your time. The language is remarkably easy to understand given the conceptual complexity, and each principle is further explored through practical, accessible, and appropriate examples. The first principle, "Human augmentation," which effectively keeps a human in the review process for ML systems, is laudable and necessary, though perhaps undercut by the allowance that this human review may be only temporary. Our prevailing opinion is that an evolving model should always have a human in the loop, since models degrade and environments change. Similarly, while the principles recognize the value of "Reproducible operations," they treat reproducibility primarily as a technical need. We believe that reperformance by objective, non-technical audiences provides the strongest assurance of the responsible use of ML.
Researchers at Twitter announced the discovery of bias in named entity recognition (NER) systems, which form the linguistic foundation for a wide range of online properties, from search engines to knowledge bases. While most bias and fairness research focuses on a single variable like gender or race, this project explored the intersection of race and gender in NER models. The bias derives, at least in part, from bias in the training data, an issue that compounds itself: individuals from underrepresented groups are excluded today, which in turn excludes them from future training data sets.
Despite this insightful work, Twitter still struggles with bias in its own products, as revealed this week alongside claims of bias in Zoom's background isolation algorithm.
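For readers who want to tinker, here is a minimal sketch of the kind of intersectional probe the Twitter researchers describe. It is not their method: the name lists, the template sentence, and the use of spaCy's off-the-shelf English model are all illustrative assumptions, and a real audit would use curated, demographically validated name sets and the production NER model in question.

```python
import spacy

# Load spaCy's small English pipeline (assumes it has been downloaded:
#   python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")

# Illustrative placeholder name lists; a real audit would use curated,
# demographically validated name sets for each intersectional group.
NAMES_BY_GROUP = {
    ("Black", "female"): ["Aaliyah Robinson", "Imani Washington"],
    ("Black", "male"): ["DeShawn Jackson", "Jamal Carter"],
    ("white", "female"): ["Emily Walsh", "Claire Sullivan"],
    ("white", "male"): ["Connor Murphy", "Jake Peterson"],
}

# Fixed template so the only thing that varies is the name itself.
TEMPLATE = "{name} presented the quarterly results to the board."


def person_recall(names):
    """Fraction of names that the model tags as a PERSON entity.

    The template contains no other person mentions, so any PERSON
    entity in the parse means the injected name was recognized.
    """
    hits = sum(
        1
        for name in names
        if any(ent.label_ == "PERSON" for ent in nlp(TEMPLATE.format(name=name)).ents)
    )
    return hits / len(names)


for group, names in sorted(NAMES_BY_GROUP.items()):
    print(f"{group}: PERSON recall = {person_recall(names):.2f}")
```

Gaps in recall between groups in a probe like this are a signal worth investigating, not proof of harm on their own; with realistic name sets and a production model, they point to exactly the compounding exclusion the researchers describe.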
Chad Jenkins, a leader in the field of robotics and AI as well as founder of BlackInComputing.org, calls for a new level of attention and accountability across institutions in this important read. The lack of diversity in ML and AI research is deeply entrenched in the academic institutions, thanks to how research is funded at a federal level. He argues for the need for political leaders to commit to accountable action to ensure more inclusion in funding research grants. And for those operating in the academic institutions, "you can first look at your own organization and your own working environments and see whether you are living up to the civil rights statutes."
Thought leader and AI SaaS CEO Cindy Gordon makes a compelling case that executives and company boards need to pay much closer attention to how their organizations deploy AI. With the increased regulatory attention and the drumbeat of exposures, the uppermost echelons of corporations must learn to examine how they manage risk around their fast-scaling systems. It is essential that these leaders educate themselves about how AI and ML work in their organizations and ensure that they have explainability built into their governance functions. She also argues for the importance of leadership engaging in a broader discussion about the need for explainable AI across industries and geographies.
For a deeper introduction to Machine Learning Assurance (MLA):
Download the white paper