The AI Risk Small and Midsize Boards Are Missing

Most directors remain unaware of how AI systems can degrade over time — driven by the very training data they rely on.

Artificial intelligence is rapidly being integrated into the operational backbone of small and midsize businesses (SMBs). Marketing automation, financial modeling, vendor screening and HR analytics are just a few areas where AI-enabled tools are now standard. For many companies, these systems are embedded in widely adopted platforms or SaaS solutions, making AI a silent but influential presence in daily decision-making.

Yet, amid the enthusiasm and perceived efficiency gains, a critical risk is being overlooked by many private company boards: AI degradation driven by its own training data. Most directors are not yet attuned to the fact that the very systems companies rely on to enhance performance may be quietly eroding in quality over time.

This is not a technical quirk. It is a governance concern with direct implications for strategic oversight, business continuity and fiduciary responsibility.

The Emerging Risk of Synthetic Data Feedback

AI systems, particularly those that generate language, images or code, are typically trained on massive datasets sourced from the Internet. Historically, this training material was authored by people — journalists, researchers, professionals and others who produced content with original thought and diverse context.

But that foundation is shifting. Increasingly, the content being scraped for AI training is itself generated by other AI systems. This recursive process, known as synthetic data feedback, creates a loop where AI models learn not from new human insight, but from previously generated AI output.

Over time, this leads to several compounding effects:

  • Reduced originality and nuance. The models begin to repeat high-frequency patterns, reducing their ability to respond to edge cases or complex inquiries.
  • Increased error propagation. Mistakes embedded in earlier AI content can become normalized and magnified.
  • Loss of depth. The AI’s ability to process rare or context-rich scenarios weakens, even as its output remains syntactically fluent and seemingly authoritative.

Researchers at the University of Oxford explored this phenomenon in a 2023 study titled The Curse of Recursion, in which they demonstrated how recursive training degrades model integrity. The study introduced the concept of model collapse, a process where AI systems lose the diversity and complexity required for high-performance reasoning and begin to perform poorly on tasks that demand subtlety or specificity.
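The mechanism is easier to see in miniature. The sketch below is a deliberately simple illustration of the feedback loop, not any vendor's system: a "model" that merely estimates an average and a spread is refit, generation after generation, on its own synthetic output. (Python, using only the numpy library; the sample size and generation count are arbitrary assumptions.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "human" data with genuine variety (mean 0, spread 1).
    data = rng.normal(loc=0.0, scale=1.0, size=200)

    for generation in range(1, 31):
        # Fit a trivially simple model (estimate the mean and spread), then
        # replace the data with the model's own output, so the next
        # generation learns only from synthetic content.
        mu, sigma = data.mean(), data.std()
        data = rng.normal(loc=mu, scale=sigma, size=200)
        if generation % 10 == 0:
            print(f"generation {generation}: spread = {data.std():.2f}")

Run repeatedly, the measured spread tends to drift and narrow across generations. That shrinkage is the toy analogue of model collapse: the rare and nuanced cases disappear first, even while the output still looks fluent.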

For directors, the implications are far-reaching. AI models that appear stable may in fact be hollowing out beneath the surface. Systems once reliable for fraud detection or risk analysis may be losing efficacy, all while maintaining the appearance of functionality.

The Real-World Impact on Private Companies

This degradation is not confined to theoretical models or fringe use cases. It is occurring now in commercially available tools used by finance teams, legal departments, HR managers and compliance officers. For small and midsize companies, many of which rely heavily on third-party providers and lack in-house AI expertise, the risk is compounded by limited visibility into how their tools are trained and updated.

Many directors are operating under the assumption that AI tools improve over time as they “learn.” In some areas, this remains true. But when training data becomes saturated with synthetic content, the learning process becomes self-referential and ultimately less effective. In a compliance platform, this might mean failure to flag outlier transactions. In an HR analytics tool, it could translate into missed signals on workforce dynamics or the unintentional reinforcement of bias.

Boards that are not actively monitoring this issue may believe they are gaining a technological edge when in fact they are relying on systems that are growing less reliable with each iteration.

The Growing Threat of Intentional Data Poisoning

While synthetic feedback represents an unintentional degradation of model quality, boards must also be aware of a more deliberate threat: data poisoning. This refers to the intentional insertion of false, biased or manipulative content into public datasets with the goal of influencing how AI models behave.

Recent studies have shown how small injections of toxic data can distort model output in meaningful ways. One study from the University of East London reported that targeted poisoning could reduce fraud detection performance by more than 20%.
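The mechanism behind such findings can be illustrated without reference to any particular study's method. The sketch below is a toy example of one basic poisoning technique, label flipping: a share of "fraud" records in the training data are relabeled as "legitimate," and the model is then scored on clean data. The data, model and numbers are all illustrative assumptions, not a real fraud system. (Python, using numpy and scikit-learn.)

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(7)

    def make_transactions(n):
        # Synthetic transactions: roughly 10% fraud, shifted away from
        # ordinary activity so a simple model can normally separate them.
        y = (rng.random(n) < 0.10).astype(int)
        X = rng.normal(size=(n, 2)) + np.outer(y, [1.5, 1.5])
        return X, y

    X_train, y_train = make_transactions(5000)
    X_test, y_test = make_transactions(2000)

    for flip_share in [0.0, 0.25, 0.50]:
        y_poisoned = y_train.copy()
        fraud_rows = np.where(y_poisoned == 1)[0]
        flipped = rng.choice(fraud_rows, size=int(flip_share * len(fraud_rows)),
                             replace=False)
        y_poisoned[flipped] = 0  # poisoned labels now read "legitimate"

        model = LogisticRegression().fit(X_train, y_poisoned)
        caught = recall_score(y_test, model.predict(X_test))
        print(f"{flip_share:.0%} of fraud labels poisoned -> fraud caught: {caught:.0%}")

The exact figures will vary from run to run; the pattern is what matters. As more labels are poisoned, the model typically catches less of the fraud it was bought to detect, while its output looks as confident as ever.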

This has profound implications for SMBs. Consider the risk if a competitor or activist group intentionally floods the Internet with skewed reviews, misleading sentiment indicators or fabricated claims. If those materials are scraped into training datasets, they can influence the behavior of AI models used by your own company or by your vendors and partners.

Companies may unknowingly ingest and act on distorted information embedded in algorithms that appear neutral on the surface. Without governance mechanisms in place to assess the provenance and quality of training data, the business may be exposed to manipulated intelligence that shapes strategic decisions.

Why the Boardroom Must Act

Directors are not expected to be machine learning experts. Just as they do not need to write code to oversee cybersecurity, they do not need to build AI models to govern how those models are used. The board’s role is to ensure material risks are surfaced, evaluated and mitigated. Today, AI model integrity belongs on that list.

For private companies, especially those that do not operate with large technical teams, this is a governance blind spot. The assumption that purchased tools or vendor platforms are fully reliable is no longer defensible. Directors must ensure leadership is asking the right questions and taking proactive steps to evaluate AI-related risk exposure.

Questions Boards Should Be Asking

To address this risk, directors should integrate AI oversight into their broader risk governance frameworks. The following questions can serve as a starting point:

  • Where is AI currently being used in our business? Are there embedded systems we rely on without realizing it?
  • Do our tools incorporate AI-generated content in their training data? If so, how is that content vetted for accuracy and relevance?
  • What controls are in place to monitor system performance and identify signs of degradation or drift? (A simple illustration of one such control appears below.)
  • Who is accountable for AI tool oversight in our organization? Does that responsibility rest with the CFO, chief operating officer (COO) or a designated technology advisor?
  • Are vendors disclosing how they protect against synthetic feedback and data poisoning?

If these questions cannot be answered confidently, the board should initiate a structured review of AI use across the enterprise.
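For the third question in particular, one lightweight control is to agree on a performance baseline when a tool is adopted and then track spot-check results against it over time. The sketch below uses purely hypothetical figures and thresholds; in practice the metric, baseline and review cadence would be set by management with the vendor. (Python, standard library only.)

    # Hypothetical quarterly spot-check results for an AI-assisted tool,
    # e.g., accuracy on a sample of records that humans have re-reviewed.
    baseline_accuracy = 0.92   # agreed at procurement or last review
    tolerance = 0.03           # slippage the company is willing to accept

    quarterly_accuracy = {
        "2024-Q1": 0.93,
        "2024-Q2": 0.91,
        "2024-Q3": 0.88,
        "2024-Q4": 0.86,
    }

    for quarter, accuracy in quarterly_accuracy.items():
        drift = baseline_accuracy - accuracy
        status = "ALERT: review with vendor" if drift > tolerance else "ok"
        print(f"{quarter}: accuracy {accuracy:.2f} (drift {drift:+.2f}) -> {status}")

Nothing here requires a data science team; it requires only that someone owns the numbers and that the board sees them when they slip.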

Governance Actions Boards Can Take Now

AI oversight does not need to be burdensome. With the right structure, SMBs can establish effective governance using existing leadership resources. Consider the following actions.

Designate ownership. Assign responsibility for AI oversight to a senior executive. In many companies, this may be the CFO or COO, particularly if the technology is embedded in financial and operational tools.

Demand transparency from vendors. Require clear disclosure from providers regarding the origin of their training data, how models are maintained and what controls exist to ensure quality.

Establish an annual review. Incorporate AI tool evaluation into the board’s annual audit or operational risk assessment. This review should include system usage, performance benchmarks and alignment with company strategy.

Prioritize verified data sources. Favor solutions that rely on high-integrity internal data or verified third-party datasets. Avoid platforms that depend heavily on unlabeled or open public sources, which are most susceptible to feedback and poisoning risks.

Stay informed. Encourage continuing education on emerging AI governance topics. Directors can benefit from short briefings, webinars or expert sessions that surface new developments and regulatory considerations.

A Window of Opportunity

Synthetic feedback and data poisoning are not speculative concerns. They are current-state risks that affect tools many companies already use. For private boards, this is an opportunity to lead rather than react, putting governance structures in place before a failure occurs.

Companies that address this issue now will be better positioned to build trust with stakeholders, select reliable partners and ensure AI functions as an asset rather than a silent liability. The board’s role is to safeguard the company’s future. That includes safeguarding the systems shaping its decisions.

About the Author

Kate Motonaga

Kate Motonaga, CFE, is audit committee chair of Fleet Science Center, finance committee member of ORCID, and CFO and head of enterprise risk management of Public Library of Science.

