Some quick thoughts from Monitaur's CEO, Anthony Habayeb, on the stories we're watching in this issue of the newsletter. Read the video transcript here.
After California's landmark passage of the CPRA, which expands digital privacy rights and adds enforcement teeth to the CCPA, a number of other US states are making similar moves. Virginia's legislature passed its own version, the Consumer Data Protection Act (CDPA), which mirrors California's law and the European GDPR that informed it. Global law firm Dentons encapsulates the contents nicely in this short overview, noting differences such as consumers' right to appeal denials of their privacy requests and the absence of a private right of action; enforcement instead emphasizes the opportunity to cure problems before penalties are levied.
Nevada already has a law in this vein that some experts view as more stringent than California's. Washington and Illinois lawmakers are taking another swipe at similar legal frameworks that, unlike Virginia's, include a private right of action and suspension of operations if problems are not cured. We noted in our last issue that federal legislation may come to a vote this year, and we will see whether California continues to lead the nation toward broader consumer protection in the area of digital privacy.
KPMG recently surveyed a large body of executives at US firms about their deployment of AI over the past year and found a huge uptick. Financial services saw the greatest increase at 37%, perhaps triggering more attention from the regulators covered in our last issue and the request for comment covered below. Despite high confidence that AI helped during the pandemic, respondents voiced a pervasive concern that faster adoption is introducing more risk to the business than they would like. One of the report authors, Rob Dwyer, summarizes: "We are seeing very high levels of support this year across all industries for more AI regulation...a more robust regulatory environment may help facilitate commerce. It can help remove unintended barriers that may be the result of other laws or regulations, or due to lack of maturity of legal and technical standards." Financial services again led the shift, with 27% more respondents desiring government involvement.
In a similar vein, Stanford HAI published its annual State of AI report, an exhaustive look at developments through 2020. This summary article captures the trends, which include record corporate investment, soaring attention to ethics in research, rising worries about regulatory compliance, and huge strategic interest and investment across the globe.
"Yes, you probably are" is the conclusion of the team at BCG GAMMA, an arm of the Boston Consulting Group focused on AI. They surveyed a thousand "large organizations" investing in AI and found that Responsible AI (RAI) programs are lagging. Even in the 35% of companies that believe they have fully implemented an RAI program, the BCG team determined that only 46% had accurately estimated their actual progress to that end-goal. On a positive note, they discovered that across all levels of program maturity companies are making strides in the most important and challenging area of driving fairness, equity, and human-machine collaboration. The summary slideshow is a quick and engaging overview of their learnings.
BCG also delves into what RAI looks like in practice, mirroring how the dialogue around Responsible AI has begun to move beyond the conceptual world of principles and goals. More and more voices and companies are advancing to the next stage: an action-oriented, tactical discussion of how to achieve one of the key ends of RAI, namely continuous assurance that AI and ML systems operate under controls and policies that are systematically tracked, monitored, and documented.
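To make "systematically tracked, monitored, and documented" a little more concrete, here is a minimal, hypothetical sketch in Python; the record fields, file layout, and function name are our own illustrative assumptions, not any particular vendor's or framework's API.

```python
# Hypothetical sketch of per-decision audit logging. Field names, the JSONL
# file, and the hashing scheme are illustrative assumptions only.
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_version, features, prediction, path="decisions.jsonl"):
    """Append one auditable, reproducible record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # ties the output to an exact model artifact
        "features": features,             # the inputs the model actually saw
        "prediction": prediction,
        # A content hash lets an auditor later verify the inputs were not altered.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Toy usage: one record for a (hypothetical) credit decision.
log_decision("credit-risk-1.3.0", {"income": 52000, "dti": 0.31}, "approve")
```

Even a simple append-only log like this gives audit and compliance teams something they can query and reproduce later; fuller implementations layer data versioning and monitoring alerts on top.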
In the age of Big Data, the owners of the biggest data sets hold distinct advantages as they build ML- and AI-based products. Smaller players, startups, and projects with a high sensitivity to data privacy cannot leverage massive data sets to build products for the larger audiences and scope that would maximize their opportunity. Hence the rise of synthetic data: large datasets generated by ML models to resemble the smaller "real" datasets to which the developers already have access. On the surface, synthetic data would seem to level the playing field and reduce the monopolistic power wielded by the larger platforms; however, as this piece and other articles point out, synthetic data increases risk, because synthesizing from biased data can both accentuate the bias and create a false sense of security through automation bias, a powerful human cognitive bias prevalent in the field of AI. The immediate answer to bias in synthetic data remains the same: deploy controls across the model's full lifecycle, continuously monitor your models in production, and perform frequent audits of every model decision with full reproducibility.
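To see how the bias carries over, here is a minimal, hypothetical sketch in Python (the toy dataset, numbers, and generator are invented for illustration): a naive generative model fit on a skewed "real" sample faithfully reproduces that skew in the synthetic data it emits, only at far greater volume.

```python
# A minimal sketch: fit a naive generative model (group frequency plus a
# per-group normal over a score) to a biased "real" sample, then draw
# synthetic records. All data and numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: group A is over-represented 4:1 relative to group B,
# and B's historical scores run lower -- the bias we worry about.
group = rng.choice(["A", "B"], size=1_000, p=[0.8, 0.2])
score = rng.normal(np.where(group == "A", 0.65, 0.45), 0.10)

# Fit the generator: learn group frequencies and per-group score statistics.
p_b = (group == "B").mean()
mu = {g: score[group == g].mean() for g in ("A", "B")}
sd = {g: score[group == g].std() for g in ("A", "B")}

# Sample a much larger "synthetic" dataset from the fitted model.
synth_group = rng.choice(["A", "B"], size=100_000, p=[1 - p_b, p_b])
synth_score = rng.normal([mu[g] for g in synth_group],
                         [sd[g] for g in synth_group])

print(f"share of group B   real: {p_b:.3f}   "
      f"synthetic: {(synth_group == 'B').mean():.3f}")
print(f"score gap (A - B)  real: {mu['A'] - mu['B']:.3f}   "
      f"synthetic: {synth_score[synth_group == 'A'].mean() - synth_score[synth_group == 'B'].mean():.3f}")
# The synthetic set reproduces the 4:1 skew and the score gap at 100x the
# volume -- and the sheer scale lends the bias an unearned air of authority.
```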
The circles of the broader social conversation about the risks of AI and ML keep expanding. While we've recently highlighted voices from the research, NGO, and developer communities, a new participant has joined the discussion: labor unions. The Trades Union Congress (TUC), a large federation of trade unions in England and Wales, has published a manifesto establishing core principles that mirror much of what we've seen from other public and private groups. Frances O'Grady of the TUC concludes, "Make no mistake. AI can be harnessed to transform working lives for the better. But without proper regulation, accountability and transparency, we risk it being used to set punishing targets, rob workers of human connection and deny them dignity at work."
The TUC's focus on the intersection of workers' rights and the internal use of AI for hiring and human resources parallels accelerating interest in this area of late. Back in October, the American Bar Association expanded on the potential risks AI creates under existing employment laws, risks that company counsel should monitor. SAFELab at Columbia University argues that social workers are important voices for ensuring that the most vulnerable and affected communities are protected by AI regulation.
The recently published FDA Action Plan for Software as a Medical Device (SaMD) has sparked a flurry of opinions and market activity. This piece lays out a lucid argument for the greater good that AI can do for medical products if it is properly managed, governed, and standardized across developers. The background, however, is illuminating: only 7 of the approved products report race, and only 13 report gender, in the makeup of their datasets.
In related news, three industry groups for radiologists are calling on the US HHS to throw out a midnight rule from outgoing Trump appointees that would exempt developers from oversight and present outsized risks to patient safety. The Consumer Technology Association (CTA), consisting of 52 companies working in healthtech, announced an initiative to define a common lexicon for AI so industry stakeholders can understand, communicate, and further its development. "Transparency and a common language will be key to enable the proper and safe functioning of AI," said Pat Baird, regulatory head of global software standards at Philips and co-chair of the working group.
Five agencies that focus on regulation of financial services in the US are seeking public comment about the use of artificial intelligence across the industry: the Board of Governors of the Federal Reserve System, the Consumer Financial Protection Bureau, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency.
For a deeper introduction to Machine Learning Assurance (MLA)
Download the white paper