Dear CIO,
The proliferation of Artificial Intelligence (AI), large language models (LLMs), and AI Agents presents transformative opportunities for the enterprise. This evolution also introduces significant data security challenges. This document outlines the importance of Confidential Computing as a foundational technology to enable secure AI innovation while protecting our most valuable data assets.
However, when most CIOs begin exploring Confidential Computing, they tend to focus primarily on hardware attestation while overlooking the broader question of end-to-end protection across the full processing pipeline. That oversight becomes critical in new AI applications that move data between RAG vector stores, foundation LLMs, and accelerator-based processors such as GPUs and TPUs.
Best Regards,
John, Your Enterprise AI Advisor

The Imperative of Confidential Computing in the AI Era
An Overview of Confidential Computing

Traditional encryption methods protect data at rest (e.g., in databases) and in transit (e.g., over a network). However, a critical vulnerability exists when data is in use, that is, when it is being processed or computed. During this phase, data is typically decrypted in memory, exposing it to potential threats from the underlying infrastructure, including the operating system, hypervisor, and even system administrators.
Confidential Computing addresses this gap by protecting data while it is being processed. It utilizes hardware-based Trusted Execution Environments (TEEs), also known as secure enclaves. These TEEs are isolated areas within a CPU that encrypt data in memory and prevent unauthorized access or modification, even from privileged software or administrators. Data is loaded into the enclave and decrypted only within its secure confines for processing; to everything outside the enclave, including the operating system and hypervisor, the memory remains encrypted. Key hardware technologies enabling TEEs include Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV).
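A key piece of that protection is remote attestation: before any secret is released to an enclave, the relying party verifies hardware-signed evidence that approved code is running on genuine hardware. The Python sketch below is a minimal illustration of that gating logic; the report type and checks are hypothetical stand-ins for verifying a real SGX quote or SEV-SNP report against the CPU vendor's certificate chain.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical attestation evidence. A real report (e.g., an SGX quote or
# SEV-SNP report) is a hardware-signed structure; this models only the logic.
@dataclass
class AttestationReport:
    code_measurement: str   # hash of the code loaded into the enclave
    signature_valid: bool   # stands in for verifying the vendor cert chain

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build").hexdigest()

def release_key_to_enclave(report: AttestationReport) -> bytes | None:
    """Release the data-decryption key only to a verified enclave."""
    if not report.signature_valid:
        return None  # evidence is not rooted in the hardware vendor's keys
    if report.code_measurement != EXPECTED_MEASUREMENT:
        return None  # the enclave is running unapproved code
    return b"wrapped-data-key"  # placeholder; real flows wrap the key for the TEE

report = AttestationReport(EXPECTED_MEASUREMENT, signature_valid=True)
assert release_key_to_enclave(report) is not None
```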
Traditional Encryption vs. Confidential Computing:
| Phase | Traditional Encryption | Confidential Computing |
| --- | --- | --- |
| At Rest | ✅ | ✅ |
| In Transit | ✅ | ✅ |
| In Use | ❌ | ✅ |
Confidential computing extends zero-trust principles to the processor level, offering security even from the infrastructure provider.
The Heightened Importance in the Age of AI, LLMs, and Agents 🤖
The rise of AI, LLMs, and sophisticated AI agents has made confidential computing and confidential applications more critical than ever. These technologies get their power from vast amounts of data, sometimes including an organization's most sensitive and proprietary information.
Increased Data Exposure: AI models, particularly LLMs, require access to extensive datasets for training and inference. If this data is sensitive, processing it without adequate protection creates a significant risk.
Value of Proprietary Data: An enterprise's competitive advantage often lies in its unique, proprietary data. Using this data to train custom AI models or inform AI agents can unlock substantial value, but only if it can be done securely.
Complex Processing Pipelines: AI workflows, especially those involving agents, often involve multiple stages of data processing, increasing the potential points of vulnerability if data is not protected throughout its lifecycle.
Regulatory and Compliance Demands: As AI is adopted more broadly, regulatory scrutiny regarding data privacy and security is intensifying. Confidential computing provides a robust mechanism to help meet these compliance obligations when handling sensitive data.
Without the ability to protect data in use, organizations may be forced to limit the scope of their AI initiatives, rely on less valuable, anonymized, or public data (which can degrade AI model quality and reliability), or accept an unacceptably high level of risk. Confidential computing offers a path to innovate with AI using high-quality, sensitive data without these compromises.
End-to-End Data Protection: A Classified Approach
It's helpful to consider a data classification model to manage data security in AI workflows effectively. For simplicity, let's use a color-coded system:
🟦 Blue Data (Public): Information that is publicly available and carries no confidentiality concerns.
🟩 Green Data (Internal/Low Sensitivity): Data intended for internal use that has low sensitivity and would cause minimal impact if inadvertently disclosed.
🟨 Yellow Data (Confidential/Medium Sensitivity): Sensitive data that, if compromised, could cause moderate damage to the organization, its reputation, or its customers. This includes proprietary business information, customer data, and internal strategic plans.
🟥 Red Data (Restricted/High Sensitivity): The organization's most critical data. Unauthorized disclosure could lead to severe financial loss, legal penalties, reputational damage, or individual harm. This includes highly sensitive intellectual property, PII/PHI, financial records, and security credentials.
The Principle: While Blue and Green data may be processed in various environments with standard security controls, Yellow and Red classified data should ideally not leave the organization's controlled, secure environment. However, modern AI, particularly when leveraging cloud platforms or collaborating with external entities, often necessitates processing sensitive data outside these traditional perimeters.
This is where Confidential Computing TEEs become crucial. If Yellow or Red data must be processed externally (e.g., in a public cloud or even a private cloud for AI model training or by a third-party AI agent), doing so within a TEE ensures that the data remains encrypted and protected even while in use. This approach maintains the confidentiality and integrity of the data, as it is isolated from the cloud provider, administrators, and other tenants. Without TEEs, processing Yellow and Red data in such scenarios exposes it to unacceptable risks.
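As a minimal sketch of that principle, the snippet below encodes the color model as a policy check: Blue and Green data may run anywhere with standard controls, while Yellow and Red data may leave the controlled environment only inside an attested TEE. The function and its rules are illustrative, not a product feature.

```python
from enum import Enum

class DataClass(Enum):
    BLUE = 1    # public
    GREEN = 2   # internal / low sensitivity
    YELLOW = 3  # confidential / medium sensitivity
    RED = 4     # restricted / high sensitivity

def may_process(data: DataClass, external: bool, in_tee: bool) -> bool:
    """Illustrative policy: sensitive data leaves the perimeter only in a TEE."""
    if data in (DataClass.BLUE, DataClass.GREEN):
        return True                  # standard security controls suffice
    return (not external) or in_tee  # Yellow/Red: external processing needs a TEE

assert may_process(DataClass.RED, external=True, in_tee=True)
assert not may_process(DataClass.YELLOW, external=True, in_tee=False)
```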
The goal is an end-to-end flow where sensitive data is encrypted at its source, remains encrypted in transit to the processing environment (e.g., a cloud-based TEE), stays encrypted during computation within the TEE, and only encrypted results (or results approved by policy) are delivered.
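The sketch below traces that flow using the open-source `cryptography` package (`pip install cryptography`). The "enclave" here is an ordinary function standing in for attested TEE code, so this illustrates the data path rather than real enclave isolation; in production, the key would be released only after an attestation check like the one shown earlier.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in a real flow, released only to an attested TEE
fernet = Fernet(key)

# 1. Encrypt at the source: the ciphertext is protected in transit and at rest.
ciphertext = fernet.encrypt(b"quarterly revenue: 42M")

def enclave_compute(ciphertext: bytes) -> bytes:
    # 2. Plaintext exists only inside the (simulated) enclave boundary.
    record = fernet.decrypt(ciphertext)
    result = f"records processed: 1 (length={len(record)})"
    # 3. Only the policy-approved result leaves, re-encrypted for the caller.
    return fernet.encrypt(result.encode())

print(fernet.decrypt(enclave_compute(ciphertext)).decode())
```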
Agentic AI Processing and Model Context Protocol (MCP) Challenges
AI agents and agentic AI architectures, including coordination standards such as the Model Context Protocol (MCP) through which models, agents, and tools exchange context and data, introduce further complexities to data protection.
In an agentic system:
An initial agent might access sensitive data.
It may then pass this data, or a derivative of it, to another specialized agent for further processing.
This second agent could, in turn, interact with other agents, data stores, or LLMs.
Each step in this chain presents a potential point of data exposure if not properly secured. If Yellow or Red data is involved, traditional security measures are insufficient because the data needs to be decrypted for each agent to perform its task.
Confidential computing can address these challenges by running each step of the agentic workflow within a TEE. This ensures that:
Data passed between agents can remain encrypted.
Each agent processes the data within its secure enclave, isolated from other agents (unless explicitly and securely shared) and the underlying infrastructure.
The integrity of each agent's computation can be attested, ensuring that only authorized and verified code is processing the sensitive data.
This creates trust throughout the entire agentic workflow, protecting sensitive data even in complex, multi-step, and potentially multi-party AI processes.
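As a minimal sketch of hop-by-hop trust, the snippet below checks each agent's attested code measurement against an allow-list before the encrypted payload is forwarded. The agents, measurements, and allow-list are all hypothetical; a real chain would verify hardware-signed attestation evidence at every hop.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    measurement: str  # attested hash of the agent's enclave code

# Hypothetical allow-list of approved agent builds.
TRUSTED = {"retriever": "abc123", "summarizer": "def456"}

def forward(payload_ciphertext: bytes, chain: list[Agent]) -> bytes:
    """Pass encrypted data along an agent chain, verifying every hop first."""
    for agent in chain:
        if TRUSTED.get(agent.name) != agent.measurement:
            raise PermissionError(f"{agent.name}: attestation mismatch, halting")
        # ...each agent decrypts inside its own TEE, computes, and re-encrypts
        # the output for the next verified hop.
    return payload_ciphertext

chain = [Agent("retriever", "abc123"), Agent("summarizer", "def456")]
forward(b"<ciphertext>", chain)
```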
The Role of Solutions Like Opaque Systems
Platforms like Opaque Systems are designed to address these challenges by operationalizing confidential computing for AI and analytics workloads. Such platforms typically offer capabilities that include:
Integration with TEEs: They leverage hardware-based TEEs (like Intel SGX and AMD SEV, including support for GPUs like NVIDIA H100s with TEEs) to ensure data is encrypted during processing.
End-to-End Encryption: The aim is to keep data encrypted throughout its lifecycle – at rest, in transit, and crucially, during computation within AI and machine learning pipelines. This eliminates the need to decrypt sensitive data in untrusted environments.
Support for AI Workflows: These platforms enable organizations to run analytics, machine learning (ML) model training, and AI agent workflows directly on encrypted, sensitive data. This can be done across data silos without exposing the raw data.
Verifiability and Auditability: A key aspect is providing verifiable trust. This often involves attestation, where the hardware TEE confirms its integrity and the authenticity of the code running within it before data is processed. Additionally, they can generate hardware-backed, tamper-proof audit logs for compliance and policy enforcement.
Policy Enforcement: Data policies can be enforced at the source and during computation, ensuring only policy-approved results are delivered.
Scalability: Solutions in this space often integrate with distributed computing frameworks like Apache Spark and Ray to handle large-scale AI workloads.
Ecosystem Integration: They typically provide APIs and support for common data science tools and languages (e.g., Python, SQL, notebooks) to integrate into existing data and AI ecosystems (e.g., Databricks, Snowflake, various cloud platforms like Azure, AWS, GCP).
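Taken together, those capabilities suggest an attest-submit-receive workflow along the lines of the hypothetical client below. To be clear, this is not Opaque Systems' actual SDK or any real API; it is only a sketch of the interface shape such platforms expose.

```python
class ConfidentialJob:
    """Hypothetical client for a confidential-computing platform (not a real SDK)."""

    def __init__(self, endpoint: str, expected_measurement: str):
        self.endpoint = endpoint                          # TEE-backed service to call
        self.expected_measurement = expected_measurement  # approved code hash

    def run(self, encrypted_dataset: bytes, query: str) -> bytes:
        # 1. Verify the remote TEE's attestation against expected_measurement.
        # 2. Upload encrypted_dataset; it is decrypted only inside the TEE.
        # 3. Receive only the policy-approved, re-encrypted result.
        raise NotImplementedError("illustrative interface only")

job = ConfidentialJob("https://tee.example.internal", "sha256:<approved-build>")
```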
In one real-world example, ServiceNow uses Opaque for confidential Retrieval-Augmented Generation (RAG) and agent workflows on Azure, reportedly achieving significant gains in response times, productivity, and cost savings while securely handling sensitive data. Such platforms open previously inaccessible sensitive data to AI innovation.
By embedding privacy, security, and verifiable compliance directly into AI workflows, these systems allow organizations to accelerate AI adoption while maintaining control over sensitive data, even when processed in the cloud or by third-party AI agents.
As AI becomes increasingly central to our operations and strategy, the underlying security of the data driving these systems must be a priority. Confidential computing, supported by enabling platforms, offers a robust and verifiable method to safeguard our most sensitive data, even while AI, LLMs, and agents are utilizing it. This opens up new pathways for innovation without compromising security or compliance.
You can learn more about Opaque Systems here.
Additionally, I’ll be speaking at the Confidential Computing Summit in San Francisco, and would love to see you there! I will also sign copies of Rebels of Reason. You can find all the event details here.

Attend the workshop I am a part of at the Confidential Computing Summit in San Francisco on June 16th: The New NORMAL: Normalizing AI Enterprise Architecture. Join me and enterprise AI leaders from CrewAI, Langchain, DevOps, Nvidia, and Opaque for real tactics for running AI in your enterprise.

How did we do with this edition of the AI CIO?

Derek B. Johnson explores how the surge in AI-driven software development enables faster, cheaper creation of professional applications but introduces security risks through “vibe coding.”
David DiMolfetta writes on Anne Neuberger’s warning that U.S. critical infrastructure remains highly vulnerable to cyberattacks.
Rod Johnson argues that robust testing must become central to Gen AI development workflows.
Maria Korolov explains how AI enhances business email compromise attacks by improving impersonation tactics and data collection.
Tracy Miranda highlights the evolving role of Certificate Transparency to meet emerging security challenges.
Bond released its “Trends - Artificial Intelligence Report”.
Chris Hughes shares an AI Red Teaming Guide by OWASP AI Exchange and Cloud Security Alliance, outlining evolving red teaming practices for Agentic AI.
Habeeb Furqan reports on the collapse of Builder.ai, revealing that its touted AI-powered no-code platform was largely human-driven, with inflated revenue claims and deceptive branding.
The Artificially Intelligent Enterprise writes on why Claude 4 is a serious alternative to ChatGPT.
AI Tangle reports on Meta’s AI-Advertising tool, Samsung’s Perplexity deal, and calls to shut down xAI’s Colossus.

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with Generative AI. Beyond the AI CIO, the network includes The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals looking to learn how AI works.