
The Rise of the "AI Developer"

How AI is Shaping the New Era of Developers

Dear CIO,

In this edition, we delve into two of the most transformative forces shaping the future of technology: AI development and the evolution of testing frameworks. First, we explore how the role of the "AI developer" is emerging as the next big frontier in software engineering. Second, we look at the National Institute of Standards and Technology's (NIST) recent release of the Dioptra testing framework. Join us as we unpack these developments and their implications for the future of technology.

Best Regards,
John, Your Enterprise AI Advisor

Brought to You By

The AIE Network is a community of over 250,000 business professionals learning and thriving with Generative AI. Beyond the AI CIO, the network includes The Artificially Intelligent Enterprise for AI and business strategy, AI Tangle for a twice-a-week update on AI news, The AI Marketing Advantage, and The AIOS for busy professionals who are looking to learn AI.


Rise of the AI Developer?

AI development is shaping up to be the next big frontier in software engineering, and the industry is buzzing about it. Microsoft has been at the forefront, pushing the idea of an "AI developer" like it's the next must-have role in tech, and with its hefty investment in OpenAI and its AI development tools integrated into Azure and GitHub, it's clear the company is serious about leading this space. But is this a true evolution, or are we just getting sold on the latest buzzword?

Now, it's worth noting that companies like Google and Meta have been in the AI game for years, but Microsoft's making moves to commercialize and scale these tools in a way we haven’t seen before. The launch of GitHub Models, a marketplace for experimenting with AI models, is a significant step in this direction. It’s part of Microsoft’s broader strategy to empower AI developers, and it's setting the stage for how we’ll build and interact with software in the future.
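
As a concrete picture of what "experimenting" on that marketplace looks like, here's a minimal sketch of calling a catalog model through GitHub Models' OpenAI-compatible endpoint. The endpoint URL and model name reflect what GitHub documented at launch and should be treated as assumptions, not a verified recipe.

```python
# Minimal GitHub Models sketch -- endpoint and model name are assumptions
# based on launch-era docs; check the current GitHub Models documentation.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],  # a GitHub personal access token, not an OpenAI key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # one of the catalog models at launch (assumption)
    messages=[{"role": "user", "content": "Summarize what an AI developer does."}],
)
print(response.choices[0].message.content)
```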

But with all these AI models—from OpenAI's GPT to Anthropic's Claude and Meta’s Llama—developers are now facing a new kind of challenge: figuring out which model to use and when. Each has its strengths: GPT dominates in coding and multitasking, Claude excels in logic, and Llama shines in mathematics. This diversity is pushing developers to not only choose the right model but also to be ready to switch between them as projects evolve.
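
One way to tame that choice is to treat model selection as a routing decision rather than a hard-coded dependency. The sketch below is purely illustrative: the model identifiers and the strengths table simply encode the rough tradeoffs described above, not benchmark results.

```python
# Illustrative model router: the catalog encodes the rough strengths
# mentioned above, with made-up model identifiers -- not benchmark data.
from dataclasses import dataclass

@dataclass
class ModelSpec:
    name: str
    strengths: set[str]

CATALOG = [
    ModelSpec("gpt-4o", {"coding", "multitasking"}),
    ModelSpec("claude-3-5-sonnet", {"logic"}),
    ModelSpec("llama-3-70b", {"math"}),
]

def route(task: str) -> ModelSpec:
    """Return the first model whose declared strengths cover the task."""
    for spec in CATALOG:
        if task in spec.strengths:
            return spec
    return CATALOG[0]  # fall back to a general-purpose default

print(route("math").name)    # llama-3-70b
print(route("coding").name)  # gpt-4o
```

Keeping the catalog in data rather than code is what makes switching models mid-project a one-line change instead of a rewrite.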

Many companies are quietly working on AI applications without much fanfare, which underscores the need for solid testing frameworks. I’ve been exploring the idea of the "LLAMR stack"—Language model, orchestration, observability, retrieval-augmented generation, and model management—as a comprehensive approach to AI development. This stack could be the backbone of how we ensure these models are not just functional but reliable and scalable.
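
Since LLAMR is a framing rather than a shipped library, here is one hypothetical way its layers could map to code, with each layer reduced to an interface so implementations can be swapped independently. Every name below is invented for illustration.

```python
# A sketch of the LLAMR layers as plug points. Every class here is
# hypothetical; the point is the seams, not the implementations.
from typing import Protocol

class LanguageModel(Protocol):          # language model layer
    def complete(self, prompt: str) -> str: ...

class Retriever(Protocol):              # retrieval-augmented generation layer
    def fetch(self, query: str) -> list[str]: ...

class Tracer(Protocol):                 # observability layer
    def record(self, event: str, payload: dict) -> None: ...

class Pipeline:
    """Orchestration layer: wires model, retrieval, and observability together."""
    def __init__(self, model: LanguageModel, retriever: Retriever, tracer: Tracer):
        self.model, self.retriever, self.tracer = model, retriever, tracer

    def answer(self, question: str) -> str:
        context = self.retriever.fetch(question)
        self.tracer.record("retrieval", {"hits": len(context)})
        prompt = "\n".join(context) + "\n\nQ: " + question
        reply = self.model.complete(prompt)
        self.tracer.record("completion", {"chars": len(reply)})
        return reply

# Model management -- the remaining layer -- would sit outside the pipeline
# as a registry that decides which LanguageModel implementation gets injected.
```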

NIST Weighs in on AI Model Testing

On the regulatory front, the National Institute of Standards and Technology (NIST) recently made waves with its AI testing framework, Dioptra. It’s still in its early stages, focusing on model building and training rather than inference, but it’s a step in the right direction. NIST is not just writing standards; they’re building tools, which is a big deal in the realm of AI governance.

NIST’s framework arrives at a critical time. With the rush to integrate AI into everything, there’s a risk that proper testing and evaluation might get sidelined—much like how early DevOps efforts sometimes overlooked the importance of comprehensive testing in favor of speed. But as AI continues to transform our world, tools like Dioptra will be essential in ensuring that we do it responsibly.
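
Dioptra's own interfaces are still early, so rather than guess at them, here is a generic sketch of the kind of evaluation gate the paragraph above argues for: a small pinned-down test suite a model change must pass before shipping, the same discipline early DevOps teams sometimes traded away for speed. The classifier stub and the cases are invented for illustration.

```python
# Generic evaluation-gate sketch -- not Dioptra's API. The classifier is a
# stand-in for a real model call; the cases pin down expected behavior.
def classify_sentiment(text: str) -> str:
    """Hypothetical stand-in for a deployed model."""
    return "positive" if "great" in text.lower() else "negative"

EVAL_CASES = [
    ("This release is great", "positive"),
    ("The rollout broke everything", "negative"),
]

def run_evals() -> None:
    failures = [(text, expected, classify_sentiment(text))
                for text, expected in EVAL_CASES
                if classify_sentiment(text) != expected]
    # Fail loudly on any regression, just as a CI gate would.
    assert not failures, f"model regressions: {failures}"
    print(f"all {len(EVAL_CASES)} eval cases passed")

run_evals()
```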

Every day, AI is reshaping the tech landscape, even if it’s not always obvious in the moment. The real impact of what we’re building now might only be clear years down the line, as we look back and realize how much AI has altered the fabric of software development. Just like with the early days of cloud computing or the Agile movement, we're witnessing the birth of something that will likely define the next era of technology.


Deep Learning

  • Zain Kahn writes about the Japanese startup Sakana AI unveiling what could be a groundbreaking scientific development: the world's first AI Scientist.

  • Philipp Schmid writes about a head-to-head comparison of open and closed AI models on coding performance, and the results are surprising.

  • In a thread, Tracy Bannon and Katharina Koerner write about a recent study highlighting a critical concern with AI code assistants like GitHub Copilot: they may lead to the creation of less secure code.

  • LlamaIndex announces a powerful CLI tool that enables users to parse any PDF, regardless of its complexity—including documents with text, tables, and images.

  • Pascal Biese writes about Apple unveiling ToolSandbox, an innovative open-source benchmark designed to evaluate Large Language Models (LLMs) on complex, real-world tasks.

  • Usman Janvekar writes about how the idea of appointing a Chief AI Officer (CAIO) is gaining traction as a way to centralize AI efforts, reduce fragmentation, manage risks, and drive innovation within organizations.

  • Generative AI showcases how LangGraph Studio is revolutionizing the way developers build and refine large language model (LLM) applications.

  • Janakiram MSV writes about Hugging Face acquiring XetHub, a Seattle-based startup focused on file management for AI projects.

  • John Rauser writes about a recent study by Cisco Security that reveals the significant impact of GitHub Copilot on real-world software development.

Regards,

John Willis

Your Enterprise IT Whisperer

Follow me on X 

Follow me on LinkedIn