Ensuring AI Governance: The Time to Act is Now!

BreezeML
7 min read · Nov 17, 2023


On October 30, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI), signaling the future establishment of comprehensive standards to govern the use and advancement of AI across industries in the United States. The Executive Order (EO) does not directly impose specific regulations. Rather, it directs federal agencies, including the Departments of Commerce, Energy, and Homeland Security, to formulate standards and guidance on AI that will be drafted into law at a later time, creating lasting implications for businesses that use the technology.

Since the issuance of the EO, many companies that use AI in their products and services have asked us at BreezeML when they should start to worry about AI compliance. Should they start now? Or should they wait until the U.S. Congress passes a formal piece of AI-specific legislation?

The framing of such questions underscores a popular belief that efforts to regulate AI are a recent development that has yet to tangibly impact companies using the technology. A closer examination of existing legislation reveals a different story: the use of AI has implications for compliance with policies that have been in effect since the big data boom a decade ago. Indeed, industries such as financial services, insurance, healthcare and medical devices, and digital advertising and marketing are already subject to laws that shape how companies in those sectors are allowed to use AI. In many cases, enterprises have incorporated such laws into their internal policies in an attempt to self-regulate, heading off compliance violations and the lawsuits that follow once those violations become public.

Financial Services

The financial industry has relied on machine learning for crucial tasks such as fraud detection, loan prediction, and anti-money laundering for over a decade. Because these tasks carry substantial impact, the underlying models, predominantly simple linear models, have been subject to stringent audits under both longstanding and recent regulatory measures. For instance, the Fair Housing Act, enacted in 1968, explicitly prohibits bias in mortgage determination. Despite being in effect for over fifty years, the law directly applies to the use of models for mortgage predictions, a common practice at most banks. Further examples include regulations from the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC), which require advisory firms to establish a robust risk management and governance framework ensuring that AI is employed in the best interest of investors and is free of bias.
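
To make the kind of bias audit these laws imply more concrete, here is a minimal sketch of the four-fifths (80%) rule that U.S. regulators commonly use to screen for disparate impact. The function names and example numbers are purely illustrative, not any bank's actual audit tooling:

```python
# Minimal sketch of a disparate-impact audit for a loan-approval model.
# The four-fifths (80%) rule flags a model when a protected group's approval
# rate falls below 80% of the most-favored group's rate.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, threshold=0.8):
    """Compare each group's approval rate to the highest one, flagging
    any group whose ratio falls below the four-fifths threshold."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}

# Example: group B is approved at 60% of group A's rate, so it is flagged.
audit = adverse_impact_ratios(
    [("A", True)] * 80 + [("A", False)] * 20 +   # group A: 80% approved
    [("B", True)] * 48 + [("B", False)] * 52     # group B: 48% approved
)
print(audit)
```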

Insurance

Insurance providers are grappling with challenges similar to those faced by the banking industry, where biases in algorithms can result in serious legal repercussions. The insurance sector is heavily regulated through laws including the Unfair Trade Practices Model Act, the Corporate Governance Annual Disclosure Model Act, and the Property and Casualty Model Rating Law. These regulations, enacted long before the EO, mandate at a minimum that decisions made by insurers are not inaccurate, arbitrary, capricious, or unfairly discriminatory. AI plainly raises the risk of producing exactly such outcomes for consumers. Recognizing this, the National Association of Insurance Commissioners (NAIC) recently issued a Model Bulletin requiring all insurers to formulate, implement, and maintain a written program (an “AIS Program”) for the responsible use of AI systems that make or support decisions related to regulated insurance practices. The AIS Program must be designed to mitigate the risk of adverse consumer outcomes.

Healthcare and Medical Devices

In recent years, the FDA has undertaken extensive efforts to formulate sound practices and regulations governing the use of AI in healthcare services and medical devices. In a recent proposal for a regulatory framework addressing AI-based Software as a Medical Device (SaMD), the FDA sought to establish a robust process for evaluating and approving resubmissions for model upgrades. Traditionally, models in FDA-approved medical devices were required to be “locked” upon approval. The proposed framework instead introduces a “model lifecycle regulatory approach,” compelling manufacturers to establish a governance system capable of continuously monitoring the lifecycles of the models used in their AI/ML devices and managing the associated risks. Each submission for a model upgrade must demonstrate reasonable assurance of safety and effectiveness.

Digital Advertising and Marketing

The advertising industry operates at the forefront of user data, historically employing such data to train models for market analysis, talent and customer identification, and lead generation. Companies in this sector face extensive regulation under numerous privacy and data laws, including the GDPR (EU), the CCPA (California), the CFAA (U.S. federal), and various other federal and state laws. Although these laws have ramifications across sectors, their effects are notably more pronounced for digital advertising and marketing because of the sector’s extensive reliance on data brokers. For instance, the California Delete Act requires interfaces through which users can explicitly opt out or have their data erased. In response to such a request, a company must not only remove the user’s data from its own systems, but also explicitly ask any data brokers that may hold the data to remove it as well (a sketch of this propagation appears below). Handling user data is a highly intricate task because consumer data retention policies vary substantially across countries (and even across U.S. states); training a model in a manner compliant with one law may inadvertently violate another.
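
The following sketch illustrates what propagating a deletion request might look like in practice. The BrokerClient class, its endpoint, and all identifiers are hypothetical stand-ins, since actual broker integrations vary widely:

```python
# Illustrative sketch of propagating a consumer deletion request, in the
# spirit of the California Delete Act. BrokerClient is a hypothetical
# stand-in for whatever broker APIs a company actually integrates with.

import logging
from dataclasses import dataclass

log = logging.getLogger("deletion")

@dataclass
class BrokerClient:
    name: str

    def request_deletion(self, user_id: str) -> bool:
        # A real integration would call the broker's deletion endpoint here
        # and record the response for the audit trail.
        log.info("requested deletion of %s from broker %s", user_id, self.name)
        return True

def handle_deletion_request(user_id: str, datastore: dict,
                            brokers: list[BrokerClient]) -> dict:
    """Delete the user's data locally, then fan the request out to brokers.
    Returns a receipt suitable for a compliance audit log."""
    removed = datastore.pop(user_id, None) is not None
    broker_status = {b.name: b.request_deletion(user_id) for b in brokers}
    return {"user_id": user_id, "local_deleted": removed, "brokers": broker_status}

# Usage: delete user "u42" locally and notify two (hypothetical) brokers.
receipt = handle_deletion_request(
    "u42",
    datastore={"u42": {"email": "u42@example.com"}},
    brokers=[BrokerClient("broker-one"), BrokerClient("broker-two")],
)
print(receipt)
```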

AI Governance is Becoming a Prerequisite for Enterprise Sales

The need for AI governance, particularly in sectors like healthcare and financial services, existed well before discussions of regulating AI gained mainstream attention with the widespread adoption of LLM-based applications, whose training depends on acquiring vast amounts of data. Rather than viewing the EO as a seminal moment for AI regulation in the U.S., it should be understood as a logical extension of preceding regulations that have strived to prevent corporate actors from violating users’ data privacy rights and from engaging in automated, algorithm-based decision-making that harms or discriminates against consumers.

While overarching governmental regulations remain somewhat ambiguous and have yet to be translated into legislation with defined rules, specific industries have already received explicit directives to establish governance for better risk management. In response, companies like Google, IBM, Airbnb, and CVS have instituted AI oversight councils that evaluate not just their internal AI-related risks, but also the AI solutions they might purchase from third-party vendors. These councils require vendors to implement internal AI governance practices and to demonstrate a sustained capacity to understand and address AI risks before their solutions will even be considered. Consequently, a lack of demonstrable AI governance increasingly costs vendors sales, which is why many have begun to build safeguards into their model development process to enhance their brand’s reputation, increase their product’s appeal, and ensure compliance with ever-evolving regulations.

So, What Does It Take to Achieve Compliance with AI Regulations?

While these regulations all aim to prevent the malevolent use of AI, a closer examination of their wording reveals a more profound directive for AI companies: furnish concrete evidence demonstrating the absence of risk throughout the entire development lifecycle of an AI model. Put differently, regulations are keying in on “how did a model come to be?” For instance, Article 17 of the EU AI Act mandates that AI systems be thoroughly documented, encompassing all operations conducted to deploy them. Similarly, the NAIC’s Model Bulletin asserts that the regulatory body can request information about the data used in developing a specific model or AI system, including details on the data’s source, provenance, and quality.

These requirements dictate a governance framework that meticulously tracks all data contributing to a model and every operation performed on that data. This effort extends beyond treating each model as a static entity and documenting it by hand. Given the dynamic nature of machine learning, where continuous improvement is driven by the most current user data, models are in a state of constant evolution; treating each model as a static artifact overlooks the interconnections between model iterations and fundamentally fails to answer provenance questions about a model’s genesis. A minimal sketch of what such lineage records might look like follows.
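
This sketch, with hypothetical record types and identifiers, shows the core idea: every model version points to the datasets it consumed, the operations applied to them, and the parent model it was derived from, so “how did this model come to be?” can be answered by walking the chain:

```python
# A minimal sketch of lineage records linking model iterations to their
# training data and to each other. All names are illustrative.

from dataclasses import dataclass

@dataclass
class DatasetRecord:
    dataset_id: str
    source: str            # provenance: where the data came from
    operations: list[str]  # e.g. ["dedup", "pii-scrub", "train/test split"]

@dataclass
class ModelRecord:
    model_id: str
    datasets: list[DatasetRecord]
    parent_model_id: str | None = None  # link between model iterations

def lineage(model_id: str, registry: dict[str, ModelRecord]) -> list[ModelRecord]:
    """Walk parent links back to the original model, returning oldest first."""
    chain = []
    current = registry.get(model_id)
    while current is not None:
        chain.append(current)
        current = registry.get(current.parent_model_id) if current.parent_model_id else None
    return list(reversed(chain))

# Usage: v2 was fine-tuned from v1 on fresh user data.
registry = {
    "loan-model-v1": ModelRecord("loan-model-v1",
        [DatasetRecord("apps-2022", "internal CRM", ["dedup", "pii-scrub"])]),
    "loan-model-v2": ModelRecord("loan-model-v2",
        [DatasetRecord("apps-2023", "internal CRM", ["dedup", "pii-scrub"])],
        parent_model_id="loan-model-v1"),
}
for record in lineage("loan-model-v2", registry):
    print(record.model_id, [d.dataset_id for d in record.datasets])
```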

Making Compliance a Breeze

BreezeML offers an AI governance platform that serves as a nexus between compliance/legal teams and data scientists. The platform employs a “governance from the ground up” methodology, enabling compliance teams to specify and continually monitor governance policies over every AI workflow in their organization without tedious manual coordination with data science teams. Real-time guardrails during model development also reduce data science teams’ reluctance to incorporate compliance-related checks. BreezeML integrates with common MLOps tools and data stores to track end-to-end model provenance; it issues warnings upon detecting any risk of violating AI regulations, provides explanations and potential mitigation strategies, and supports instant audit reports documenting both the history and the current state of compliance.
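
To illustrate the general “policy as code” pattern described above, here is a generic sketch of declared policies being evaluated against a training run’s metadata. This is not BreezeML’s actual API; the policy names, fields, and thresholds are invented for illustration:

```python
# A generic illustration of policy-as-code guardrails over ML workflow
# metadata. Not BreezeML's actual API; all names are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    check: Callable[[dict], bool]   # returns True when metadata is compliant
    remediation: str                # guidance shown on violation

def evaluate(policies: list[Policy], run_metadata: dict) -> list[str]:
    """Return a human-readable warning for every violated policy."""
    return [f"{p.name}: {p.remediation}"
            for p in policies if not p.check(run_metadata)]

# Example policies a compliance team might declare.
policies = [
    Policy("data-provenance",
           lambda m: all(d.get("source") for d in m.get("datasets", [])),
           "every training dataset must record its source"),
    Policy("bias-audit",
           lambda m: m.get("adverse_impact_ratio", 0) >= 0.8,
           "approval-rate ratio across groups fell below the 80% threshold"),
]

run = {"datasets": [{"id": "apps-2023", "source": "internal CRM"}],
       "adverse_impact_ratio": 0.6}
for warning in evaluate(policies, run):
    print("WARNING:", warning)   # flags the bias-audit policy
```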

For more information, please contact info@breezeml.ai or click here to request a demo.
