Some quick thoughts from Monitaur's CEO, Anthony Habayeb, on the stories we're watching in this issue of the newsletter. Read the video transcript here.
The Trump Administration issued an Executive Order on December 3rd finalizing its guidelines for how the federal government should promote and use artificial intelligence. The EO ratifies previously published concepts, including the Section 3 principles that echo the developing consensus on AI across public and private entities. At this stage, it is unknown how the incoming Biden Administration will respond to the EO. We find its attention to understanding, responsibility, transparency, and accountability directly in line with the principles of Machine Learning Assurance, provided, of course, that they are implemented well.
This announcement follows a recent groundswell of attention to AI within the US federal government. The State Department has announced cooperative agreements with the UK and Ireland; the Department of Defense has published its AI education strategy; and most recently, the Government Accountability Office called for an AI oversight framework to help its auditors work with inspectors general on such projects. Still, some observers worry that these efforts are too laissez-faire, prioritizing "risk assessment and cost-benefit analyses" ahead of any regulatory considerations.
Meanwhile, on the other side of the Atlantic, Members of the European Parliament resoundingly adopted proposals for AI, robotics, and software that are headed for the 2021 legislative agenda. The effort tackles frameworks for ethical principles, legal liability, and intellectual property rights. Compared with the Executive Order from the outgoing Trump administration, the EU's principles have a decided focus on human and social concerns. Alongside vital topics like safety, transparency, and accountability, they emphasize the need for "human-centric and human-made AI" and address bias and discrimination, the right to redress for decisions made by AI, and social and environmental responsibility. Additionally, high-risk "self-learning technologies" should enable both human oversight and the ability to restore full human control in the case of an ethical breach.
The proposed legal framework establishes civil liability for damages caused by high-risk AI technologies and suggests that operators obtain insurance to cover their activities. By providing legal certainty while building public trust and civic protections, the end goal is to foster innovation and develop a leadership position in the field of intelligent systems as competition in AI research burgeons across the world, most notably in China, which has published the most new academic papers in the field this year.
Building on the topic of Responsible AI in our last issue, we share this summary study that examines the practical dimensions of driving responsible AI as practitioners and corporations. (Fair warning: you will have to create an account to view the whole article; we find MIT Sloan Management Review one of the best resources in the field.) A survey of key stakeholders in AI/ML projects demonstrates the immature approaches many organizations deploy today: KPIs tilt toward productivity and revenue rather than risk mitigation, prevention of reputational harm, and compliance needs. It is clear that implementing successful programs for responsible use of AI will require cross-functional engagement from a diverse set of departments, including not just technical teams and business owners but also legal, communications, and human resources. As these collaborations mature, systematic approaches and solutions that enable transparency into AI's black box will emerge, replacing the crude hand-off of a screenshot of a suspect transaction, an example reported in the background study.
In this follow-up to a series on AI challenges, McKinsey thought leaders lay out an approach that enterprises should pursue with their AI/ML projects much as they do with all their initiatives: risk management by design. Organizations should plan for risk while developing projects and ensure a consistent practice across teams and projects. Understanding risk in AI/ML systems requires unique considerations, liabilities, and domain knowledge to ensure that the systems are consistent with the company's values, risk profile, and compliance needs. The authors emphasize the importance of "standards, testing, and controls...embedded into various stages of the analytics model's life cycle, from development to deployment and use."
This interview covers a wide range of topics with Regina Barzilay, MIT CSAIL professor and recent winner of the Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity. She observes that most of the AI in production today is low-stakes and that the next step is integrating AI into higher-stakes, higher-value problems in regulated environments. On the explainability of complex AI systems, Barzilay points to a future when explanations are likely to exceed human intelligence, using the apt metaphor of a dog trying to explain what it smells to a human with an inferior sense of smell.
In a similar vein, a new research paper complicates how we should view the utility of explainability in AI decisions. Much as humans don't inherently trust others' explanations, users respond to explainable AI with skepticism even when provided insight, re-emphasizing the importance of keeping humans in the loop at every step of these systems' use.
Another in the growing chorus around algorithmic bias, this examination sums up an impressive array of mishaps and research to show that teenagers are especially impacted by AI and ML technologies. While their intense technological habits play a role in this vulnerability, author Avriel Epps-Darling, a Harvard doctoral student and visiting researcher at Spotify, makes the larger point that "everyone knows that human opinions are subjective, but algorithms operate under the guise of computational objectivity, which obscures their existence and lends legitimacy to their use." She argues that more attention to the developmental and psychological needs of teenagers – as distinct from those of adults – should be incorporated into emerging policies in the realm of data privacy and algorithmic bias.
For a deeper introduction to Machine Learning Assurance (MLA), download the white paper.