
What is AI Governance?

Written by OvalEdge Team | Jun 14, 2024 2:37:46 PM

AI adoption carries operational risks. To manage these risks, businesses must implement AI governance. In this article, we explain why AI governance is a broader concept than data governance and why data governance is a crucial first step toward achieving it.

AI governance is the process of managing and mitigating the operational risks associated with the use of AI techniques in various business value chains. When we talk about operational risks in this context, we’re referring to two particular aspects: compliance risks and business risks. 

For example, AI could pose a compliance risk if it failed to follow the specific rules and processes required for fraud detection. In another scenario, if a chatbot deployed by an organization began generating offensive outputs, this would pose a risk to the business. Collectively, these are operational failures: in each case, the AI tool put in place to carry out a specific task failed to do so.
 
With AI technologies becoming increasingly ingrained in the fabric of modern business architecture, the potential fallout from AI-driven errors is growing in severity. While the accuracy, efficiency, and power of the technology are improving rapidly, we are not yet in a position where we can fully trust AI products to operate without human oversight.

AI technologies make split-second decisions at a rate that is impossible to replicate manually. In many cases, if just one of those decisions is incorrect, the ramifications of the error can be enormous for both the user and the business owner.

In one well-documented incident, Microsoft's Tay chatbot, which was launched on Twitter, had to be withdrawn after just 24 hours when it began generating racist, sexist, and anti-Semitic tweets. The chatbot learned this behavior from the posts of other users on the platform.

In 2018, Amazon discontinued an AI recruitment tool it had implemented because it was demonstrating bias against female candidates. The problem arose because the company's models were trained on a decade of resume data, the majority of which came from male applicants.

In November 2021, the real estate company Zillow was forced to shut down Zillow Offers, its AI-driven home-buying program, after the valuation model behind it overpriced homes that Zillow then purchased, leading to a $304 million inventory write-down.

Ultimately, AI implementation isn't without risk. That's why it's important to establish a comprehensive AI governance framework that mitigates the operational risks associated with AI-driven business activities.

Key Principles of AI Governance

AI governance can be divided into five distinct areas: fairness, transparency, data security, privacy, and a human-first approach. As such, every AI governance strategy must address each of these principles. 

Fairness

AI is not inherently biased. However, biases can arise when AI systems are fed discriminatory information or trained on narrow data sets. That's why organizations must train AI algorithms on diverse data sets to ensure the output is fair and non-discriminatory.

In February 2024, users of Google’s Gemini AI tool reported that it was generating racially insensitive and factually incorrect images, including depicting German Nazi soldiers as Black and Asian people.

Transparency

AI technologies are making increasingly important decisions that shape business outcomes and affect end users. Accordingly, organizations must be fully transparent about the scope of the AI technologies they use and their potential risks.

In February 2024, a Canadian tribunal ordered Air Canada to pay damages and tribunal fees to a passenger after an AI chatbot operated by the airline gave him incorrect information about the airline's bereavement fare policy.

Data security

Cybersecurity practices must be built into AI governance frameworks from day one. As with all data-intensive technologies, if nefarious actors gain access to AI systems, the fallout can be devastating for a business.

The National Institute of Standards and Technology (NIST) has observed that AI systems can be vulnerable to prompt injection. In this scenario, an attacker either enters a text prompt that causes a Large Language Model (LLM) to carry out unauthorized actions, or manipulates the third-party data that the LLM draws from.
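To make the attack concrete, here is a minimal sketch of why naive prompt assembly is vulnerable and what a first-line guard might look like. The pattern list and function names are our own illustrations, not a NIST recommendation; real defenses (input classification, privilege separation, output filtering) go far beyond a deny-list.

```python
# Illustrative only: a naive deny-list guard against direct prompt injection.
SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your system prompt",
]

def is_suspicious(user_input: str) -> bool:
    """Flag inputs containing common injection phrasing."""
    lowered = user_input.lower()
    return any(p in lowered for p in SUSPICIOUS_PATTERNS)

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Refuse to assemble a prompt from flagged input."""
    if is_suspicious(user_input):
        raise ValueError("potential prompt injection detected")
    # Plain concatenation of trusted and untrusted text is exactly what
    # makes injection possible; the guard above only narrows the surface.
    return f"{system_prompt}\n\nUser: {user_input}"

print(build_prompt("You are a support bot.", "What is my balance?"))
```

Note that a deny-list catches only the crudest attacks; it does nothing against the second scenario NIST describes, where the poisoned content arrives via retrieved documents rather than the user's own text.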

Data privacy

Data protection is always a concern for businesses, but when you introduce AI, and the billions of data points it consumes, the challenge of retaining user privacy is amplified considerably. 

In May 2024, the European Data Protection Board (EDPB) found that measures by OpenAI to ensure ChatGPT complied with the GDPR were insufficient. 

Human-first approach

AI's primary purpose is to address the needs of human operators. This means the technology must account for multiple human-specific factors, such as psychological impact, user experience, and outcomes for the people who rely on it. When AI fails to take a human-first approach and address the needs and goals of its human users, the technology can become obsolete.

In March 2024, Inflection AI, then a leading AI chatbot company, saw a massive upheaval: two of its three co-founders and most of its staff left for Microsoft's AI division. The company had failed to deliver a well-defined use case in a saturated chatbot market.

How is AI Governance Different From Data Governance?

Components of a Value-chain Activity

A good way of exploring the differences between AI and data governance is to look at the value chain. Each value chain activity has six primary components:

  1. Inputs (e.g. data) to drive decisions for that value-chain element (e.g. fraud detection at a bank)
  2. Models that use the inputs and generate decisions
  3. Outputs in the form of decisions
  4. Various software and hardware systems that aid the business activity
  5. The various business processes involved
  6. Policies that guide, oversee, and override decisions. Regarding AI, the ability to oversee and override is crucial
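The six components above can be sketched as a simple data structure. This is purely a reading aid for the framework in this article; the field names mirror the list and do not correspond to any standard schema or OvalEdge feature.

```python
from dataclasses import dataclass

# A minimal model of the six value-chain components described above.
@dataclass
class ValueChainActivity:
    name: str              # the business activity itself
    inputs: list           # 1. data driving decisions
    models: list           # 2. models that turn inputs into decisions
    outputs: list          # 3. the decisions produced
    systems: list          # 4. supporting software/hardware
    processes: list        # 5. business processes involved
    policies: list         # 6. rules that guide, oversee, and override

# Using the article's fraud-detection example:
fraud = ValueChainActivity(
    name="fraud detection",
    inputs=["transaction history"],
    models=["anomaly-score model"],
    outputs=["flag/clear decision"],
    systems=["payments platform"],
    processes=["manual review queue"],
    policies=["human override on high-value flags"],
)
```

On this picture, data governance covers `inputs` (and, as argued below, `outputs`), while AI governance spans all six fields.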

In this context, data governance is concerned with the input element, the data.

Data can be considered AI-ready if:

  • It is of high quality
  • It is centralized
  • It is classified (for privacy information and biases)
  • Its metadata is well curated

Aside from standard data quality checks, to be AI-ready, data must be carefully curated and ethically governed to ensure it is unbiased, accurate, and representative of a diverse section of society. 
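As a toy illustration of what a basic readiness gate might look like, the sketch below checks column completeness over a tabular dataset represented as dicts. The threshold and field names are illustrative assumptions, not OvalEdge functionality, and real readiness checks would also cover classification, bias, and metadata curation.

```python
# Toy AI-readiness check: gate on per-column completeness.
def completeness(rows, column):
    """Fraction of rows with a non-empty value in `column`."""
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows) if rows else 0.0

def is_ai_ready(rows, required_columns, threshold=0.95):
    """Pass only if every required column is sufficiently complete."""
    return all(completeness(rows, c) >= threshold for c in required_columns)

rows = [
    {"age": 34, "income": 52000},
    {"age": 41, "income": None},
]
print(completeness(rows, "age"))             # 1.0
print(is_ai_ready(rows, ["age", "income"]))  # False: income only 50% complete
```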

Data governance also needs to focus on the output element in AI-driven business activities. For example, the output from a chatbot or the decisions generated by an AI black box in fraud detection must be examined continuously, or at least periodically.

Input/output data governance must be applied at two different stages: during AI model training and after deployment. During training, an organization can assess the quality of data inputs and the rationality and compliance of model outputs. After deployment, divergences from the training baseline can be monitored for both inputs and outputs.
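A minimal sketch of that deployment-stage monitoring, assuming a single numeric feature: compare a live batch against the training distribution and alert on a large shift. The z-score rule here is a deliberately simple stand-in; production drift monitors typically use tests such as PSI or Kolmogorov-Smirnov.

```python
import statistics

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Alert when the live mean drifts several stdevs from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    z = abs(live_mu - mu) / sigma if sigma else float("inf")
    return z > z_threshold

train = [100, 102, 98, 101, 99, 100, 103, 97]
print(drift_alert(train, [100, 101, 99]))   # False: live batch matches training
print(drift_alert(train, [250, 260, 255]))  # True: inputs have shifted sharply
```

The same pattern applies on the output side, e.g. tracking the rate of "fraud" flags against the rate seen during model validation.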

So, data governance encompasses two of the primary components that constitute a value chain activity. AI governance has a much wider remit. It spans all six elements.

AI models must be continuously monitored for consistency, accuracy, and ethical output. Regarding transparency, AI governance ensures that models don't become black boxes: model performance is tracked, cataloged, and reported, helping to expedite issue resolution.

It's important to recognize that AI systems are different from AI models. Take a popular AI company like OpenAI. Here, the company’s GenAI chatbot, ChatGPT, is an AI system that provides outputs based on user interactions. OpenAI's GPT-4 LLM is the AI model that powers these outputs. 

These front-facing systems need careful governance, too, including guidelines and policies that determine, from a moral and operational standpoint, what information an AI system will provide, to whom, and how. They also require design features that support these goals. 

In other words, AI governance centers on governing not just the data but also the tools, policies, and processes required to make AI products and systems secure, ethical, and business-safe. 

AI Governance Implementation Challenges

Companies are eagerly experimenting with AI to revolutionize their operations and gain a competitive edge. Many are exploring its various applications, while others are raising concerns about misinformation and disinformation. To steer AI in the right direction, organizations must first understand the nuances and challenges of implementing AI governance.

Along with ethical guardrails and staff training, implementing AI governance comes with several data challenges:

  • Complexity: AI systems are often complex and difficult to understand.
  • Data quality issues: Ensuring high-quality, unbiased data is challenging.
  • Regulatory compliance: Keeping up with evolving regulations requires a continuous effort.

Overcome AI Governance Challenges With a Data Governance Framework

Organizations must understand that the data they own and manage is fundamental to any pivot toward AI initiatives. Think of well-governed data as the key that unlocks AI governance. Ultimately, by focusing on key areas of data governance, organizations can expect reliable AI outcomes.

  • Data quality: Data governance frameworks improve data accuracy and completeness and enable bias mitigation, ensuring reliable AI model training.
  • Data privacy and security: Measures like access controls and encryption protect sensitive information and ensure privacy compliance.
  • Improving transparency and accountability: Tracking data lineage and taking ownership increases transparency and accountability in AI systems.
  • Policies and processes: Implementing clear policies and procedures for data governance ensures consistent and compliant AI operations.
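To make the transparency and accountability bullet concrete, here is a toy lineage record: for each dataset, note where it came from, who owns it, and how it was transformed. The schema is our own illustration, not a real catalog API; governance platforms maintain lineage graphs far richer than this.

```python
from datetime import datetime, timezone

# A toy lineage log: one dict per dataset event.
lineage_log = []

def record_lineage(dataset, source, owner, transformation):
    """Append a lineage entry so every dataset has a traceable origin and owner."""
    entry = {
        "dataset": dataset,
        "source": source,
        "owner": owner,
        "transformation": transformation,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    lineage_log.append(entry)
    return entry

record_lineage("training_set_v2", "crm_exports", "data-team", "dedup + anonymize")
```

Even this bare-bones record answers the two accountability questions auditors ask first: where did the training data come from, and who is responsible for it?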

How Can OvalEdge Help? 

Data governance is a foundational step within AI governance. To become AI-ready, data must meet several unique requirements, which are achieved through data governance.


OvalEdge is a company focused on AI data readiness. Our data governance solutions can help you start on the journey of AI governance by tackling the crucial first and last stages of a value chain activity. 

Our tools enable you to catch and remedy input data issues at the source, as early as possible, before they infiltrate the data, models, and systems that constitute your AI ecosystem and undermine the effectiveness of your AI output.

Our approach also focuses on output data governance.

 

Book a demo with us to find out:

  1. How OvalEdge can help you make your data ready for AI governance
  2. Why only a dedicated data governance tool like OvalEdge can ensure all of your data is AI-ready
  3. How our specialized team will support your AI implementation efforts