
AI Auditing in Focus

Navigating New Frameworks for 2024-2025

Dear CIO,

In today’s rapidly evolving digital landscape, the audit profession is undergoing a significant transformation as it adapts to the complexities of artificial intelligence. Recent advancements in AI auditing frameworks underscore a critical need for specialized standards and methodologies that keep pace with technological innovation while ensuring accountability, transparency, and ethical compliance. In this edition, we look at a report on the new frameworks that emerged in 2024-2025.

Best Regards,
John, Your Enterprise AI Advisor

Key Developments and Frameworks

This report outlines several groundbreaking initiatives spearheading the evolution of AI audits:

  • NIST’s AI Risk Management Framework (AI RMF):
    In July 2024, NIST released an updated profile explicitly addressing the risks associated with generative AI. This update is part of a broader effort to incorporate trustworthiness across the AI lifecycle—from design to evaluation—and is supported by comprehensive companion resources such as the AI RMF Playbook and Roadmap.

  • Institute of Internal Auditors (IIA) Framework:
    The IIA refreshed its AI Auditing Framework in September 2024, recognizing the unique challenges AI systems present. The framework addresses technical dimensions such as model robustness and continuous risk monitoring, and it emphasizes strategic, organizational, and ethical considerations, ensuring that internal auditors are equipped to evaluate AI technologies comprehensively.

  • Global Standardization and Ethical Considerations:
    International guidelines like ISO/IEC 23053 and ethical frameworks from IEEE are gaining traction. They promote principles such as fairness, transparency, and accountability. These standards are critical as organizations work to build public trust in AI while aligning with emerging regulatory demands.

Integrating Traditional Governance with AI-Specific Practices

With no universally accepted AI audit framework yet, many organizations are adapting established governance models such as COBIT 2019 and COSO ERM to the nuances of AI. These frameworks provide a structured approach to aligning AI initiatives with business objectives, managing inherent risks, and ensuring regulatory compliance. Their lifecycle-based models facilitate continuous monitoring and improvement—a vital requirement given AI’s dynamic nature.

Regulatory Momentum and Continuous Monitoring

The regulatory landscape is catching up to technological advancements. Landmark regulations like the EU AI Act and proactive state-level initiatives in the United States are setting the stage for more rigorous oversight. Additionally, the U.S. GAO’s updates on AI accountability and the emphasis on continuous monitoring practices reinforce the importance of real-time insights into AI performance and security.

Critical Takeaway

The report’s critical insight is clear: as AI systems become more pervasive and influential, traditional audit methodologies must be reimagined. Integrating tailored frameworks, ethical guidelines, and advanced tools for continuous monitoring is essential for mitigating risks, ensuring compliance, and fostering a culture of responsible AI adoption. This multi-disciplinary approach not only enhances the reliability of AI systems but also strengthens the overall integrity of organizations in the face of evolving digital challenges.

Stay tuned as we explore how these AI auditing innovations will shape the future of corporate governance and risk management.

Click Below for the Full Report.

Recent Developments in AI Auditing Frameworks, Standards, and Practices (2024-2025).pdf (413.13 KB, PDF)

Deep Learning
  • Simon Wardley reflects on using Claude 3.7 Sonnet to generate Wardley Maps from books, noting both the AI’s steady progress and its current limitations.

  • Reuven Cohen advocates for using Model Context Protocol (MCP) over plugins, praising its simplicity, versatility, and power as an interface for connecting apps.

  • Eugene Cheah announces the release of the Qwerky-72B and 32B models—large-scale language models that outperform GPT-3.5 Turbo without using transformer attention.

  • Phil Muncaster covers a report from Kela revealing a sharp rise in cybercriminal use of AI in 2024.

  • I share a chart covering multi-objective optimization and Pareto optimality.

  • The NIST Trustworthy and Responsible AI report presents a comprehensive taxonomy and standardized terminology for adversarial machine learning.

  • Pavan Belagatti reviews and compares three top PDF extraction tools for building retrieval-augmented generation (RAG) applications.

  • Duncan MacRae features Thrive’s Frankie Woodhead advocating for neurodivergent inclusion in AI development.

  • Sam Sabin dives into the 11 new autonomous AI agents Microsoft is launching to help cybersecurity teams.

  • Armand Ruiz emphasizes the critical role of well-designed evaluations in developing reliable GenAI applications.

  • Mark Rogge outlines how Styra enables secure, compliant, and reliable use of generative AI in financial and sensitive environments.

  • atla recommends evaluating AI performance across individual metrics separately and introduces a Selene-based cookbook to help teams implement this multi-criteria evaluation approach.

  • Chris Hughes highlights that while AI-driven development is gaining traction, large language models still frequently produce insecure or incorrect code.

  • Keely Quinlan writes on Pennsylvania Governor Josh Shapiro sharing early results from the state’s ChatGPT pilot, highlighting that generative AI significantly boosted employee productivity.

  • Michael Kisilenko warns that OpenAI's latest models are demonstrating emergent deceptive behaviors.

  • If you can make it, join me in Ottawa on June 2nd to learn how to blend the power of DevOps with AI.

  • The Artificially Intelligent Enterprise looks at how Model Context Protocol is creating a universal standard for enterprise AI integration.

  • AI Tangle looks at the concerns & consequences of GPT-4o's image generation.

Regards,

John Willis

Your Enterprise IT Whisperer

Follow me on X 

Follow me on LinkedIn

Dear CIO is part of the AIE Network, a network of over 250,000 business professionals who are learning and thriving with Generative AI. Our network extends beyond the AI CIO to The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals looking to learn how AI works.