Some quick thoughts from Monitaur's CEO, Anthony Habayeb, on the stories we're watching in this issue of the newsletter. Read the video transcript here.
This brief by legal experts at Cooley discusses the potential impact on the insurance industry if the draft Data Accountability and Transparency Act of 2020 were to pass. Now that Democrats control Congress, Senator Sherrod Brown and his co-sponsors may have more opportunity to pass privacy-focused legislation; regardless, regulators like the NAIC and state DOIs are already moving in this direction.
In the same vein, research institutions have published several compelling pieces recently. Stanford's HAI summarized the AI provisions of the National Defense Authorization Act, which requires a briefing to educate lawmakers on the topic of AI and assigns NIST to develop an AI Risk Management Framework. Meanwhile, the Brookings Institution anticipates that 2021 will be a "foundational year" for governance of AI and argues for reviving the US Office of Technology Assessment to lead that effort.
Brookings also published an excellent paper covering the current state of AI development, stakeholders, and regulation across the globe. The backdrop is China's desire to use AI to bolster its geopolitical and economic power; countering its resources and capabilities will require a coordinated global effort from the US, the EU, and other Asian powers. Moreover, a dearth of regulatory coordination across borders will create barriers to AI delivering on its potential for good, a patchwork of local legislation that inhibits developers, and protectionist impulses that can spread into other areas of the economy. The authors call out "assessing AI risk, developing international AI standards, and to the conformity assessment of AI products" as key areas for unified attention. They also believe that NIST's cybersecurity framework provides a useful model for incorporating broader international policies like those being pursued by the World Economic Forum, which has launched a wide-ranging multinational AI project of its own.
Abhishek Gupta, one of the founders of the Montreal AI Ethics Institute, has published a series of valuable blog posts lately that distill the most important issues and knowledge for those looking to learn about AI ethics and related topics. This short but important piece highlights a dimension of inclusion and diversity that goes largely ignored in the sometimes airy discussions about AI and morality: the voice of the general public. Poignantly, he begins with the fact that we will all feel the effects of AI in our lives in the near future, whether we choose to engage with that reality or not. He argues that it is incumbent not only upon citizens to learn about AI and get involved with organizations, but also upon experts and operators to seek out the wisdom of the crowd. After all, groupthink can infect any community, and it is most dangerous within the confines of those with deep, specific, and specialized knowledge. Incorporating consumers can also help scale efforts to monitor harms from artificial intelligence as they happen.
From a legal perspective, the use of predictive analytics raises a wide range of issues that companies need to educate themselves on, not just in the chief counsel's office but across departments and functions. Authors Patrick Hall and Ayoub Ouederni of boutique law firm bnh.ai provide an incisive look (with illustrative examples) at the dimensions of legal risk that can emerge:
On the first topic, Google launched a fascinating online tool where you can play with different conceptions of fairness and see how they manifest in ML models.
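To make those competing conceptions concrete, here is a minimal Python sketch, our illustration rather than anything from Google's tool, comparing two common fairness criteria on toy predictions: demographic parity and equalized odds. All data values and group labels below are invented for illustration.

```python
import numpy as np

# Toy data: binary predictions, true labels, and a binary group attribute.
# All values are illustrative assumptions, not output from a real model.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # two demographic groups

def demographic_parity_gap(y_pred, group):
    # Difference in positive-prediction rates between the two groups.
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equalized_odds_gap(y_pred, y_true, group):
    # Largest between-group difference in true-positive rate (label == 1)
    # and false-positive rate (label == 0).
    gaps = []
    for label in (1, 0):
        mask = y_true == label
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equalized odds gap:", equalized_odds_gap(y_pred, y_true, group))
```

A model can satisfy one of these criteria while violating the other, which is exactly the tension the tool lets you explore interactively.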
A growing movement of organizations with roots in open source technology has entered the fray of competing principles and definitions of trustworthy AI, and the Linux Foundation is the most recent group to flex its muscles. In December we saw the Mozilla Foundation express its desire to lean into AI guidelines, so we now have effectively the front end and back end of the technology stack behind ML and AI represented in the larger activist community around AI. This continues the trend we noted in our last issue of the developer and academic communities bristling at the power and influence of the largest platforms and users of AI – for all intents and purposes Google, Amazon, and Facebook – over the future of R&D in the field.
As you might expect from a representative of the Federal Reserve Board, famous for milquetoast statements from which prognosticators write novels, this kickoff to a symposium on AI in banking was largely informational and offered few clues about the direction of specific regulations. The collection of references in the footnotes alone is valuable for deeper learning. That said, Governor Brainard did cover the Fed's own experimentation with AI for oversight purposes and its desire to learn about the applications of explainability for AI more broadly. She stated specifically that "we are exploring whether additional supervisory clarity is needed to facilitate responsible adoption of AI". Read into that what you will.
Last month, the US Food and Drug Administration released the Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD) Action Plan, the next step in its regulation of intelligent health and healthcare products. The program, which includes heavyweights like Apple and Fitbit/Google among others, has yet to move deeply into true learning applications, rightly wary of freeing the black box to do what it wants with people's health. The Action Plan largely accords with the advice for medical imaging AI covered in our last newsletter issue. Of particular interest is the FDA's desire to engage with partners in research "to develop methods capable of improving ML algorithms, including ways to eliminate bias and increase generalizability to ensure the algorithm is well-suited for a racially and ethnically diverse patient population", a response to the challenges of obtaining and sharing ML training data for these populations.
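To ground what a generalizability check might look like in practice, here is a minimal Python sketch, our illustration and not part of the FDA's plan, that evaluates a model's accuracy per patient subgroup rather than only in aggregate. The predictions, labels, and subgroup assignments are invented for illustration.

```python
import numpy as np

# Illustrative assumption: predictions and labels for a validation set,
# with a demographic subgroup recorded for each patient.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1])
subgroup = np.array(["A", "A", "A", "A", "B", "B", "B", "B", "C", "C", "C", "C"])

# Aggregate accuracy can mask poor performance on under-represented groups.
print(f"Overall accuracy: {(y_true == y_pred).mean():.2f}")
for g in np.unique(subgroup):
    mask = subgroup == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"Subgroup {g}: accuracy {acc:.2f} (n={mask.sum()})")
```

Aggregate metrics can look healthy while one subgroup is poorly served, which is the failure mode the FDA's language targets.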
For a deeper introduction to Machine Learning Assurance (MLA), download the white paper.