Dear CIO,
AI and cybersecurity are converging as we speak, and as organizations rush to adopt AI-driven tools and systems, the pressure is mounting to ensure these technologies are deployed securely and responsibly. Recognizing this, the National Institute of Standards and Technology (NIST) is stepping up to provide much-needed guidance for this evolving landscape. In this newsletter, I will be reviewing NIST’s Cyber AI Profile.
Best Regards,
John, Your Enterprise AI Advisor

Why NIST’s New Cyber AI Profile Deserves Your Attention
Rapidly Advancing Cybersecurity for the AI Era

NIST is currently developing a new Cyber AI Profile, which builds on its widely used Cybersecurity Framework (CSF). Think of this profile as a practical guide to help organizations understand and manage the cybersecurity risks that come with building or using AI. It’s grounded in the familiar language and structure of the CSF but tailored to the unique challenges AI introduces.
This work is a direct response to community feedback. After hosting a workshop in April 2025, NIST heard a clear message from industry leaders that they wanted more transparency and collaboration in how this guidance is created. In response, NIST is shifting away from the traditional "behind closed doors" draft process and instead will engage with the community through regular meetings and open discussions. The goal is to create a Cyber AI Profile that reflects real-world needs and is practical to implement.
In addition to the high-level profile, NIST is also working on more tactical tools: cybersecurity “control overlays” for AI systems. These overlays adapt existing NIST SP 800-53 controls to address AI-specific risks. They are being designed to be modular and lightweight, especially for the many organizations that will be AI users rather than developers. The focus is on helping these organizations implement the right controls without duplicating existing best practices for traditional software.
This work will build on several key NIST resources:
SP 800-53: Core security and privacy controls
SP 800-218A: Secure software development practices for generative AI and dual-use foundation models
NIST AI 100-2e2025: Taxonomy and terminology for adversarial machine learning
Victoria Yan Pillitteri, who leads NIST’s Security Engineering and Risk Management Group, highlighted the importance of making this guidance actionable and grounded in real use cases. NIST is actively seeking input from across the industry to ensure the final products reflect where the most pressing needs are.
What This Means for CIOs
If your organization is adopting or scaling AI, here’s why NIST’s work matters to you:
Stay Informed: Keep tabs on the Cyber AI Profile and related guidance as it evolves. This will likely shape future compliance and best practices across industries.
Get Involved: Encourage your cybersecurity and architecture teams to participate in NIST’s Community of Interest. Having a seat at the table will help ensure your organization's needs are reflected in the final guidance.
Prepare to Adapt: Expect NIST’s work to influence future updates to core frameworks like the Risk Management Framework (RMF) and SP 800-53. Building flexibility into your risk and security programs now will make future adoption smoother.
The bottom line? NIST is helping define what secure AI adoption looks like for the enterprise. By leaning into this work now, CIOs can ensure their organizations are building on a solid and secure foundation.

How did we do with this edition of the AI CIO?

Help Net Security shares a SecurityScorecard report showing that third-party breaches surged to 35.5% of all incidents in 2024.
Sam Sabin exposes how AI chatbots enable scammers to craft fluent, personalized phishing emails that evade traditional detection cues.
JFrog researchers reveal that the PyTorch model dtonala/DeepSeek-R2 on Hugging Face stealthily deploys the XMRig cryptominer.
The Canada DevOps Community of Practice welcomes OPAQUE Systems as a Gold sponsor for the June 2, 2025, DevOps for GenAI Hackathon in Ottawa. You can still register for the event here.
Mastufa Ahmed dives into a Gigamon survey revealing that 91% of organizations are making security tradeoffs in hybrid cloud environments for AI adoption.
LangChain introduces UQLM, a hallucination detection library that leverages uncertainty quantification.
Vivek Katial describes the challenge of evaluating hierarchical multi-label classification in code review theme detection.
The Artificially Intelligent Enterprise covers how AI is helping healthcare.
AI Tangle looks at DeepSeek’s update to R1, xAI’s deal with Telegram, and the NYT’s AI deal with Amazon.

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with generative AI. Our network extends beyond the AI CIO to the Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals who are looking to learn how AI works.