Mend Releases AI Security Governance Framework: Covering Inventory, Risk Tiers, AI Supply Chain Security, and a Maturity Model

There is a pattern playing out in almost every engineering organization right now. A developer installs GitHub Copilot to ship code faster. A data analyst starts querying the new LLM-powered reporting tool. A product team quietly embeds a third-party model into a feature branch. By the time the security team hears about any of it, the AI is already working in production – processing real data, touching real systems, making real decisions.

That gap between how quickly AI enters an organization and how slowly governance reaches it is exactly where the risk resides. According to Mend's new practical framework, 'AI Security Management: An Effective Framework for Security and Development Teams,' many organizations are not equipped to close it. The framework doesn't assume you already have a mature security program built around AI. It assumes you're an AppSec lead, engineering manager, or data scientist trying to figure out where to start – and builds a playbook from there.

The Inventory Problem

The framework begins with the key premise that governance is impossible without visibility ('you cannot govern what you cannot see'). To achieve that visibility, it defines 'AI assets' broadly, covering everything from AI development tools (such as Copilot and Codeium) and third-party APIs (such as OpenAI and Google Gemini) to open source models, AI features in SaaS tools (such as Notion AI), internal models, and autonomous AI agents. To address 'shadow AI' (AI tools adopted without authorization or security review), the framework emphasizes that discovery should be a non-punitive process, so developers feel safe disclosing the tools they already use.

A Risk Classification System That Really Scales

The framework uses a risk classification system to categorize AI deployments instead of treating them all as equally risky. Each AI asset receives a score from 1 to 3 across five dimensions: Data Sensitivity, Decision Authority, System Access, External Exposure, and Supply Chain Origin. The total score determines the governance required:

  • Tier 1 (Low Risk): Scores 5–7, requiring only routine security reviews and light monitoring.
  • Tier 2 (Moderate Risk): Scores 8–11, which trigger deeper reviews, access controls, and quarterly behavioral reviews.
  • Tier 3 (High Risk): Scores 12–15, which warrant a full security assessment, design review, continuous monitoring, and a ready-to-deploy incident response playbook.

It is important to note that the risk level of a model can change significantly (e.g., from Tier 1 to Tier 3) without any change to its underlying code, based on integration changes such as adding write access to a production database or exposing it to external users.
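The scoring logic described above is straightforward to automate. The following is a minimal illustrative sketch – the dimension names and tier thresholds come from the framework, but the code itself is an assumption, not Mend's implementation:

```python
# Illustrative sketch of the framework's 5-dimension, 1-3 scoring model.
# Dimension names and thresholds come from the framework; the code is not Mend's.

DIMENSIONS = (
    "data_sensitivity",
    "decision_authority",
    "system_access",
    "external_exposure",
    "supply_chain_origin",
)

def risk_tier(scores: dict) -> int:
    """Sum the five 1-3 dimension scores and map the total to a tier."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError(f"expected exactly these dimensions: {DIMENSIONS}")
    if any(not 1 <= s <= 3 for s in scores.values()):
        raise ValueError("each dimension is scored 1-3")
    total = sum(scores.values())
    if total <= 7:        # 5-7: routine review, light monitoring
        return 1
    if total <= 11:       # 8-11: deeper review, access controls
        return 2
    return 3              # 12-15: full assessment, continuous monitoring

# A hypothetical internal chatbot reading public docs:
low = {"data_sensitivity": 1, "decision_authority": 1, "system_access": 1,
       "external_exposure": 1, "supply_chain_origin": 2}
# The same model after gaining production write access and external users:
high = dict(low, data_sensitivity=3, system_access=3, external_exposure=3)
```

Note how the second asset jumps from Tier 1 to Tier 3 purely through integration changes, with no change to the model itself.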

Least Privilege Doesn't Stop at Human IAM

The framework emphasizes that most AI security failures stem from poor access control, not flaws in the models themselves. To counter this, it mandates that the principle of least privilege be applied to AI systems just as it would be to human users. This means API keys should be scoped to specific resources, credentials should never be shared between AI and human users, and read-only access should be the default whenever write access is unnecessary.
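Those three rules can be expressed as an automated check. The sketch below is illustrative – the `Grant` shape and rule set are assumptions, not any particular cloud provider's API:

```python
# Illustrative least-privilege check for AI service-account grants.
# The Grant structure and rules are assumptions, not a specific cloud API.
from dataclasses import dataclass

@dataclass
class Grant:
    principal: str               # e.g. a hypothetical "svc-ai-summarizer"
    resource: str                # a specific resource, never "*"
    actions: frozenset           # e.g. {"read"} or {"read", "write"}
    shared_with_humans: bool = False
    write_justification: str = ""

def violations(grant: Grant) -> list:
    """Return the least-privilege rules this grant breaks (empty = OK)."""
    problems = []
    if grant.resource == "*":
        problems.append("keys must be scoped to specific resources")
    if grant.shared_with_humans:
        problems.append("AI and human users must not share credentials")
    if "write" in grant.actions and not grant.write_justification:
        problems.append("default to read-only; write access needs justification")
    return problems
```

A gate like this can run in CI or at provisioning time, rejecting any AI service-account grant that returns a non-empty violation list.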

Output controls are equally important, as AI-generated content can inadvertently leak data by reconstructing or revealing sensitive information. The framework requires output filtering for regulated data patterns (such as SSNs, credit card numbers, and API keys) and insists that AI-generated code be treated as untrusted input, subject to the same security scans (SAST, SCA, and secret scanning) as human-written code.
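An output filter for those patterns can be as simple as a set of labeled regexes. The patterns below are deliberately simplified examples; production DLP needs broader coverage (and, for instance, a Luhn check to cut credit-card false positives):

```python
# Illustrative output filter for AI-generated text. The patterns are
# simplified examples, not production-grade DLP rules.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text
```

Running model output through a filter like this before it reaches users or logs catches the obvious reconstruction leaks, though it is a complement to, not a substitute for, restricting what data the model can see in the first place.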

Your Model Is a Supply Chain

When you use a third-party model, you inherit the security posture of whoever trained it, every dataset it learned from, and every dependency integrated with it. The framework introduces the AI Bill of Materials (AI-BOM) – an extension of the familiar SBOM concept to model artifacts, datasets, fine-tuning inputs, and inference infrastructure. A complete AI-BOM documents the model name, version, and source; references to training data; fine-tuning datasets; all software dependencies required to run the model; the infrastructure components involved; and known vulnerabilities along with their maintenance status. Several emerging regulations – including the EU AI Act and the NIST AI RMF – explicitly reference supply chain documentation requirements, making the AI-BOM useful for compliance regardless of which framework your organization aligns with.
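The fields the framework lists map naturally onto a simple record type. The schema below is an assumption for illustration – real AI-BOMs often extend existing standards such as CycloneDX or SPDX rather than inventing a bespoke format:

```python
# Illustrative AI-BOM record covering the fields the framework lists.
# The schema is an assumption; real AI-BOMs often extend CycloneDX or SPDX.
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    model_name: str
    version: str
    source: str                                     # where the model came from
    training_data_refs: list = field(default_factory=list)
    fine_tuning_datasets: list = field(default_factory=list)
    software_dependencies: list = field(default_factory=list)
    infrastructure: list = field(default_factory=list)
    known_vulnerabilities: list = field(default_factory=list)
    maintenance_status: str = "unknown"

# A hypothetical entry; every value here is a placeholder.
entry = AIBOMEntry(
    model_name="example-summarizer",
    version="1.2.0",
    source="huggingface (placeholder, not a real repo)",
    software_dependencies=["transformers", "torch"],
    maintenance_status="actively maintained",
)
```

Keeping one such entry per AI asset in the inventory gives both security and compliance teams a single place to answer "where did this model come from and what does it depend on?"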

Monitoring Threats Traditional SIEM Can't Catch

Standard SIEM rules, network-based anomaly detection, and endpoint monitoring do not cover the failure modes specific to AI systems: prompt injection, model drift, behavioral manipulation, or jailbreak attempts at scale. The framework describes three distinct layers of monitoring that AI workloads require.

At the model layer, teams should watch for indications of prompt injection in user input, attempts to extract the system prompt or model configuration, and significant shifts in output patterns or confidence scores. At the application integration layer, the important signals are AI output flowing into sensitive sinks – database writes, external API calls, command execution – and high-volume API calls that deviate from the application's baseline. At the infrastructure layer, monitoring should cover unauthorized access to model artifacts or training-data storage, as well as unexpected egress to external AI APIs that are not on the approved list.
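A model-layer monitor can start with simple signal matching. The sketch below is a heuristic illustration only – real prompt-injection detection needs far more than keyword matching, and these phrases are examples, not a complete signature list:

```python
# Illustrative model-layer monitoring heuristic: flag inputs that look like
# prompt-injection or system-prompt-extraction attempts. Keyword matching
# alone is weak; these patterns are examples, not a complete list.
import re

INJECTION_SIGNALS = [
    r"ignore (?:all )?previous instructions",
    r"reveal (?:your )?system prompt",
    r"you are now (?:in )?developer mode",
]

def flag_input(user_input: str) -> list:
    """Return the injection signals matched in the input (empty = nothing flagged)."""
    lowered = user_input.lower()
    return [p for p in INJECTION_SIGNALS if re.search(p, lowered)]
```

Flagged inputs would feed an alerting pipeline alongside the integration-layer and infrastructure-layer signals described above, rather than blocking requests outright.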

A Policy Teams Can Actually Act On

The policy section of the framework describes six key components:

  • Tool Approval: Maintain a list of pre-approved AI tools that teams can adopt without further review.
  • Tiered Review: Use a tiered approval process that stays lightweight for low-risk cases (Tier 1) while requiring deeper review for Tier 2 and Tier 3 assets.
  • Data Management: Establish clear rules on what data may be sent to internal AI versus external AI (third-party APIs or hosted models).
  • Code Security: Require AI-generated code to undergo the same security review as human-written code.
  • Disclosure: Mandate that AI integrations be declared during architecture review and threat modeling.
  • Prohibited Uses: Clearly specify prohibited uses, such as training models on customer-controlled data without consent.
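The first two components – tool approval and tiered review – combine into a single decision gate. This sketch is hypothetical (the tool names and the exact wording of each outcome are invented for illustration), but it shows the shape of the process:

```python
# Illustrative tool-approval gate combining the framework's first two policy
# components. Tool names and outcome wording are hypothetical examples.
APPROVED_TOOLS = {"github-copilot", "internal-llm-gateway"}

def review_path(tool: str, tier: int) -> str:
    """Decide what review a proposed AI tool adoption needs."""
    if tool in APPROVED_TOOLS and tier == 1:
        return "adopt: pre-approved, low risk"
    if tier == 1:
        return "light review: add to approved list if it passes"
    if tier == 2:
        return "deeper review: access controls plus quarterly behavioral review"
    return "full assessment: design review, continuous monitoring, IR playbook"
```

The point of encoding the policy this way is that the common case (an approved tool at Tier 1) costs teams nothing, which is what keeps disclosure honest.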

Governance and Enforcement

An effective policy requires clear ownership. The framework assigns accountability across four roles:

  • AI Security Owner: Maintains the approved AI tool list and handles escalation of high-risk cases.
  • Development Teams: Declare their use of AI tools and submit AI-generated code for security review.
  • Procurement and Legal: Review vendor contracts for adequate data protection terms.
  • Executive Leadership: Signs off on risk acceptance for high-risk (Tier 3) deployments.

Durable enforcement comes from tooling: SAST and SCA scanning in CI/CD pipelines, network controls that block egress to unauthorized AI endpoints, and IAM policies that restrict AI service accounts to only the permissions they need.
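The egress control, for example, reduces to an allowlist check against approved AI endpoints. The hostnames below are examples; in practice this logic lives in a proxy or firewall policy rather than application code:

```python
# Illustrative network-egress check: allow outbound calls only to AI
# endpoints on an approved list. Hostnames are examples; in practice this
# belongs in a proxy or firewall policy, not application code.
from urllib.parse import urlparse

ALLOWED_AI_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}

def egress_allowed(url: str) -> bool:
    """True only if the destination host is on the approved AI endpoint list."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_AI_HOSTS
```

Denied destinations are themselves a useful monitoring signal: repeated blocked calls to an unknown AI API usually mean a shadow AI integration that never went through review.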

Four Stages of Maturity, One Honest Diagnosis

The framework closes with an AI Security Maturity Model organized into four stages – Emerging (Ad Hoc/Awareness), Developing (Defined/Active), Controlling (Managed/Active), and Leading (Advanced/Flexible) – that map directly to the NIST AI RMF, OWASP AIMA, ISO/IEC 42001, and the EU AI Act. Most organizations today sit in Stage 1 or 2, which the framework frames not as a failure but as an accurate reflection of how quickly AI adoption went mainstream.

Each stage transition comes with clear priorities and business outcomes. Moving from Emerging to Developing is primarily a visibility effort: implement an AI-BOM, assign ownership, and build an initial threat model. Moving from Developing to Controlling means automating security into pipelines – prompt hardening, AI testing in CI/CD, policy enforcement – to deliver consistent security without slowing development. Reaching the Leading stage requires continuous validation through automated red teaming, AIWE (AI Weakness Enumeration) scoring, and runtime monitoring. At that point, security stops being a bottleneck and starts enabling the speed of AI adoption.

The full guide, including a self-assessment that benchmarks your organization's AI security maturity against NIST, OWASP, ISO, and EU AI Act controls in under five minutes, is available for download.

