Securing AI Systems

Image: Depositphotos

AI has brought significant advances in automation, decision-making, and content generation, but these benefits carry inherent risks that demand robust security measures. AI security spans data privacy, model integrity, adversarial robustness, and regulatory compliance. This article examines the primary threat vectors targeting AI systems, the key domains requiring protection, and the security controls organizations should put in place to address them.

Threat Vectors

Common attacks targeting AI-integrated systems include:

·         Backdoor and Evasion Attacks: Adversaries embed hidden triggers during training or manipulate inputs to cause the model to produce incorrect or harmful outputs.

·         Poisoning and Tampering: Malicious data introduced into training sets degrades model accuracy or alters behavior in ways that are difficult to detect.

·         Inference and Extraction Attacks: Adversaries probe model outputs to reconstruct private training data or accumulate enough input-output pairs to replicate the model entirely.

·         Model Inversion and Theft: Attackers reverse-engineer outputs to expose sensitive training details or create unauthorized copies of the model for malicious use.

·         Prompt Injection: Crafted inputs manipulate model behavior to bypass guardrails and generate harmful or unauthorized responses.

  • Data Exfiltration and Leakage: Attackers gain restricted access and steal data from the model

·         Insufficient Attack Testing: Models are not rigorously tested against adversarial scenarios remain vulnerable to attack methods that were never anticipated during development.

These threats underscore the need for security embedded throughout the full AI lifecycle - from development and training through deployment and ongoing operation.

Domains Impacted

Securing the Data: The data collection and management phase requires organizations to aggregate large volumes of information while granting access to a broad set of stakeholders including data scientists, engineers, and developers. Concentrating sensitive data in a single environment introduces significant exposure, particularly when intellectual property or PII is involved. Data discovery and classification, encryption across storage and transmission, and rigorous key management are foundational requirements. IAM solutions enforce least-privilege access, ensuring no single stakeholder holds unconstrained access to the platform or its underlying data. Building a security-conscious culture among technical teams is equally important to sustaining these controls over time.

Securing the Model: Model development introduces attack surfaces rarely present in traditional software environments. Organizations frequently build on publicly available pretrained models that were not designed with enterprise security in mind. Continuous vulnerability scanning across the full AI/ML pipeline - including API integrations and plug-in dependencies - is necessary to detect malware, corruption, and configuration weaknesses. Role based access control (RBAC) over models, artifacts, and training datasets limits exposure and reduces the impact of any compromise.

Securing the Usage: During live inference, adversarial actors may attempt prompt injection and other manipulations to bypass built-in guardrails, causing reputational harm or enabling theft of model capabilities. Effective defenses require continuous monitoring of both inputs and outputs, with detection capabilities tuned specifically to AI attack patterns such as model evasion and extraction. Machine learning detection and response (MLDR) solutions can surface and escalate these threats within existing security operations workflows.

Securing the Infrastructure: The infrastructure underpinning AI workloads must be hardened as a foundational security layer. Rigorous network security, access control, encryption, and intrusion detection must be applied across all environments - including distributed and multi-cloud deployments - using controls purpose-built for AI rather than simply repurposed from traditional IT frameworks.

Governance As organizations delegate increasingly consequential decisions to AI, ensuring models remain aligned with their intended purpose is an ongoing responsibility. Outputs must be auditable, factually grounded, and free from material bias. IP indemnification for both base models and fine-tuned derivatives is an important consideration, forming part of a broader commitment to responsible and accountable AI deployment.

The above domains need to be secured from the threat vectors described. The diagram below illustrates where the domains fall in a sample end-to-end enterprise built on AI. 

Figure 1 Securing AI across the Enterprise

Security Controls for GenAI Applications

Some of the integrated controls for a comprehensive GenAI security architecture includes the following:

·         Identity and Access Management (IAM): Governs authentication, RBAC, and SSO across the platform. In GenAI environments this extends to AI agents, which must authenticate and operate within clearly defined, scoped permission boundaries and not just human users.

·         Data Security: Security must be embedded into the data architecture from the outset, limiting exposure at every stage from ingestion through processing and storage, rather than applied as an afterthought.

·         Encryption and Secrets Management: Data must be encrypted at rest and in transit throughout its entire lifecycle Secrets Management is a core requirement as these are the keys to the Data and must be protected at all times. In cloud environments, Customer Managed Encryption Keys (CMEKs) ensure organizations retain direct control rather than delegating that responsibility to a provider.

·         Security Monitoring: Current environments provide controls inherited from traditional IT monitoring which has short comings. It is essential to have access to real-time telemetry across multi-cloud and hybrid environments, with detection rules calibrated to AI-specific patterns including adversarial inputs, model manipulation, and data pipeline anomalies.

·         Logging and Audit: Centralized, tamper-resistant log collection across models, APIs, infrastructure, and user interactions, retained and query able to support incident investigation, regulatory compliance, and operational troubleshooting needs to be implemented.

·         Configuration Management:  It is essential to have active governance of application and infrastructure configurations to prevent drift from approved security baselines and reduce the attack surface introduced by misconfigured components across complex environments.

·         Incident Management: There needs to be structured processes to detect, contain, and recover from events with minimal disruption, explicitly accounting for AI-specific failure modes such as model degradation and data integrity compromise which extends beyond conventional IT outages.

·         Data Activity and Posture Monitoring: It is critical to provide continuous visibility into how data is accessed and used within GenAI systems, enabling detection of unauthorized queries, unusual access patterns, and policy violations that support both security response and compliance obligations.

Securing generative AI is not a problem that can be solved through isolated point solutions. These controls are most effective when operating as a unified, integrated system with each layer reinforcing the others. As enterprise reliance on AI deepens, a piecemeal approach may close immediate gaps but will not deliver the sustained, enterprise-scale security posture these systems will ultimately require.


By Utpal Mangla, John Thomas. and Joel George

Utpal Mangla

Utpal Mangla

Utpal Mangla (MBA, PEng, CMC, ITCP, PMP, ITIL, CSM, FBCS) is a General Manager responsible for Telco Industry & EDGE Clouds in IBM. Prior to that, he ( utpalmangla.com ) was the VP, Senior Partner and Global Leader of TME Industry’s Centre of Competency. In addition, Utpal led the 'Innovation Practice' focusing on AI, 5G EDGE, Hybrid Cloud and Blockchain technologies for clients worldwide. In his role as senior executive in business with P&L responsibility and thought leader in emerging technologies, Utpal’s mission is to fuel growth by building, scaling and implementing differentiated competitive market service solution offerings to meeting business imperatives of our customers. Under Utpal's leadership, IBM recently achieved the mission of scaling to make "Watson AI Impact 1.5 Billion Consumers” and creation of “Industry Blockchain platforms”. Utpal is a Master inventor and is at the forefront in making Hybrid Cloud and 5G/EDGE real for enterprises globally Utpal has been with IBM (and PwC) since 1998. With 20+ years of experience, Utpal is a highly motivated & dynamic leader who thrives in challenging environments. He is reputed for his trust, problem solving and organizational skills. Recipient of numerous client excellence awards, he is recognized as “IBM Top Talent" Utpal is a regular speaker at industry forums, univ and business conferences globally, including MWC, THINK, TMForum, Dreamforce, Cannes, Fierce 5G and CEM Telecoms. With 50+ articles, Utpal contributes to industry blogs, analyst reports and emerging marketplace trends. He has been quoted in Fortune, Bloomberg, GSMA, LF and BusinessWire. Utpal is an active contributor & member of FORBES council, AI Think Tank at Cognitive World, is current chair of ISSIP Strategy Council, member of CompTIA’s IoT Advisory leadership and was on board of ATIS. Utpal is also member of IBM’s Executive Partner Promotion committee, Talent Ecosystem & 5G EDGE Acceleration teams. Utpal is on advisory boards of Penn State Univ and Rochester Institute of Tech. An active STEM volunteer and P-TECH mentor dedicated to ‘Pathways in Technology, Early College”, Utpal supports education outreach initiatives through Univ of Toronto and Prof. Engineers Ontario. Utpal holds Bachelor’s degree in Computer Science Engg from Pune Univ (with highest honours) and MBA from Northwestern Univ’s Kellogg Graduate School of Management. He completed executive studies at Harvard Business School’s strategic leadership, Wharton School’s financial value creation and Stanford Business School's entrepreneurial leadership programs


John Thomas

John Thomas is part of IBM's automation sales team and has prior experience working with a startup that scaled from $0 to $20M annual revenue and successful mid-size companies that were acquired by IBM. He has generated millions in pipeline and collaborated with customers, partners and IBM teams to help organizations modernize their IT environment through infrastructure automation and security solutions.


Joel George

Joel George is a Solutions Engineer at IBM where he helps clients modernize their operations through AI, automation, and security solutions. A Computer Science honors graduate of UT Dallas and National Merit Scholar, he has built his career across Fortune 500 companies with hands-on experience in data engineering, machine learning, cloud infrastructure, and AI security. He brings both a deep technical and client-facing perspective to the opportunities and challenges of AI in the enterprise.