Infographic: Generative AI in Government IT


From Adoption to Mastery

A Government Professional’s Guide to Generative AI in IT

By Al Crowley and Robert Bruce

Navigating the New Frontier of AI in Government Projects

Generative AI is no longer a futuristic concept; it’s a present-day reality in software development. Major tech companies like Google and Microsoft report that AI is already writing 25% to 30% of their new code, and a recent DORA survey found that 67% of developers see improved performance with AI-assisted tools. For government employees responsible for overseeing complex IT projects, this new technology presents both a monumental opportunity and a significant challenge. How can you ensure that your IT contractors are leveraging AI not just for speed, but for quality, security, and long-term value?

This guide is designed to provide you with a clear framework for understanding and managing the integration of Generative AI in your projects. We will explore the typical journey of AI adoption through four distinct stages, and then detail the essential skills and best practices—the new rules of the road—that distinguish effective, compliant AI use from risky shortcuts.

The Journey of AI Adoption

Understanding where your contractors are on their AI adoption journey is the first step to effective oversight. Adoption typically progresses through the following stages, each with its own characteristics and opportunities.

Stage 1: Air-Gapped

In this initial stage, security and policy are paramount. Developers use public AI tools like ChatGPT on separate devices not connected to government networks to ask general questions, brainstorm solutions, or debug generic code snippets.

What it Looks Like:

A developer might use their personal phone to ask an AI, “What’s the best way to sort an array in Python?” No project-specific or sensitive data is shared.

Your Role:

Acknowledge this as a low-risk starting point. Encourage transparency and ensure teams understand the strict prohibition on sharing any project data.
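As an illustration of how generic such questions are, the answer to the sorting example above involves only standard Python and carries no project context at all:

```python
# The kind of generic, non-sensitive answer a public AI tool returns:
# Python's built-in sorted() returns a new sorted list; list.sort() sorts in place.
numbers = [5, 2, 9, 1]

ascending = sorted(numbers)                 # new list, original unchanged
descending = sorted(numbers, reverse=True)  # new list, largest first

numbers.sort()  # sorts the original list in place
print(ascending, descending, numbers)
```

Nothing in a snippet like this identifies the project, which is what keeps this stage low-risk.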

Stage 2: Cut & Paste

As teams gain confidence and receive initial approvals, they copy non-sensitive code snippets from the project and paste them into an AI chat interface for analysis or refactoring.

What it Looks Like:

A developer copies a non-sensitive function and asks the AI, “Can you find any bugs in this code?” or “Add comments to explain what this function does.”

Your Role:

This stage requires clear data handling guidelines. Work with your security officer to define what constitutes “non-sensitive” code to reduce friction without compromising security.

Stage 3: IDE Integrated

AI tools are integrated directly into the developer’s coding environment. The AI now has access to the broader context of the project—other files, dependencies, and documentation.

What it Looks Like:

An AI assistant provides context-specific suggestions in real-time. A developer can issue commands like, “Write a unit test for this function based on our existing test framework.”

Your Role:

Ensure any leap in productivity does not negatively impact the quality or stability of your systems. Ask which integrated tools are being used and what processes are in place to ensure that experienced team members give a close, critical examination of any AI-generated code.

Stage 4: Autonomous Agents

At the forefront of adoption, autonomous agents perform tasks independently based on triggers or schedules. This is an evolving area, but it has the potential to be a significant force multiplier.

What it Looks Like:

An AI agent automatically reviews every new piece of code, flags potential security vulnerabilities, and suggests corrections before a human reviewer sees it.

Your Role:

Encourage exploration of customized AI assistants for specific, repetitive tasks. This frees up senior developers to focus on more complex strategic challenges.

AI Isn’t a Shortcut, It’s a Skill

To ensure AI delivers real, sustainable value, it must be treated as a professional skill governed by rigorous best practices. As a government stakeholder, you should expect your IT contractors to adhere to the following principles.

The “Trust, but Verify” Mandate

AI can generate code in seconds, but that code is merely a first draft. It must be subjected to the same, if not greater, scrutiny as human-written code.

  • Test-Driven Development (TDD): Insist that teams write comprehensive tests before using AI to generate code. These tests act as a contract and an objective measure of correctness.
  • Automated Code Verification: AI-generated code must not bypass your DevSecOps pipeline. It should be automatically subjected to static analysis, security scanning, and policy checks.
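A minimal sketch of the test-as-contract idea: a human writes the test first, and only then is the AI asked to generate an implementation that makes it pass. The function and its behavior here are purely illustrative, not drawn from any real project.

```python
# Test written by a human BEFORE any AI-generated code exists.
# The function name and rules below are hypothetical examples.

def test_normalize_case_id():
    # The contract: case IDs are uppercased and stripped of whitespace.
    assert normalize_case_id("  abc-123 ") == "ABC-123"
    assert normalize_case_id("xyz-9") == "XYZ-9"

# The code below stands in for the AI-generated draft, which must
# satisfy the pre-existing test before it can be merged.
def normalize_case_id(raw: str) -> str:
    return raw.strip().upper()

test_normalize_case_id()  # run the contract against the draft
```

Because the test predates the generated code, it serves as an objective, reviewable definition of "correct" rather than a rationalization written after the fact.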

Human-in-the-Loop is Non-Negotiable

While AI can augment code review, it cannot replace the critical thinking and deep context of an experienced developer.

  • AI-Augmented Reviews: Use AI to catch common bugs and style issues, allowing human reviewers to focus on more significant architectural and logical issues.
  • Senior Oversight: Senior developers’ experience is more valuable than ever. They must be responsible for reviewing AI-generated code for nuance and long-term consequences.

Mastering the Language of AI: Prompt Engineering

The quality of AI output is directly proportional to the quality of the input. “Prompt engineering” is the skill of crafting precise, context-rich instructions for the AI.

  • From Vague to Valuable: A vague prompt like “fix this code” yields generic results. A skilled prompt that specifies the language, constraints, and desired behavior will produce something far closer to production-ready code.
  • Standardized Prompts: Encourage teams to document and standardize effective prompts. These become shared assets that ensure consistency and quality.
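One lightweight way to standardize prompts is to keep them as versioned templates that every developer fills in the same way. This sketch assumes a simple string template; the template text and field names are illustrative, not a prescribed format:

```python
# A documented, reusable prompt template kept under version control.
# The wording and placeholder names here are illustrative assumptions.
REFACTOR_PROMPT = (
    "You are reviewing {language} code for a government IT system.\n"
    "Refactor the function below to improve readability.\n"
    "Constraints: preserve behavior, keep the public signature, "
    "add a docstring, and flag any security concerns.\n\n"
    "{code}"
)

def build_prompt(language: str, code: str) -> str:
    """Fill the shared template so every developer sends consistent context."""
    return REFACTOR_PROMPT.format(language=language, code=code)

print(build_prompt("Python", "def f(x): return x * 2"))
```

Treating templates like this as shared, reviewable assets is what turns prompt engineering from an individual knack into a team practice.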

Documentation and Auditability

In regulated government environments, all code must be auditable, regardless of its origin.

  • Treat AI Interactions as Artifacts: The prompts used, the AI-generated output, and any human modifications should be stored in your version control system.
  • Secure Your Authority to Operate (ATO): A clear, auditable trail makes it easier to answer auditors and maintain stakeholder trust.
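One simple way to make AI interactions auditable is to record each one as a structured artifact committed alongside the code. This minimal sketch assumes a JSON-per-interaction layout; the directory, field names, and workflow are illustrative, not a mandated format:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_ai_interaction(prompt: str, ai_output: str, final_code: str,
                          reviewer: str, artifact_dir: str = "ai-artifacts") -> Path:
    """Write one AI interaction as a JSON artifact to be committed with the code."""
    Path(artifact_dir).mkdir(exist_ok=True)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,               # exact instruction sent to the AI
        "ai_output": ai_output,         # raw generated draft
        "final_code": final_code,       # human-modified version actually merged
        "human_reviewer": reviewer,     # who verified the change
    }
    path = Path(artifact_dir) / f"{record['timestamp'].replace(':', '-')}.json"
    path.write_text(json.dumps(record, indent=2))
    return path
```

Because each record captures the prompt, the raw output, and the reviewed result, an auditor can trace any merged change back to both its origin and its human verifier.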

Navigating the Pitfalls & Data Insights

While the benefits are compelling, the adoption of AI comes with documented risks that require vigilance.

The Quality vs. Speed Trade-Off

Google’s DORA 2024 report issued a critical warning: teams with high AI adoption sometimes saw overall product quality decline, even as delivery speed improved. This suggests teams are using AI to write code faster but are not investing that saved time in the rigorous human-powered testing needed to ensure quality remains high. Speed without quality is not modernization.

The Risk of Uncritical Acceptance

Developers, particularly junior ones, may accept AI-generated code they don’t fully understand, which can introduce subtle but serious bugs. The emphasis on TDD and senior code review is the primary mitigation for this risk. This chart shows the self-reported impact, which must be balanced with objective quality metrics.

The Path Forward: A Cultural Shift

Integrating AI successfully is not about buying a new tool; it’s a cultural and procedural shift. The goal is not to automate developers’ jobs, but to augment their capabilities. AI should handle the repetitive, boilerplate work, freeing up your most valuable human experts to focus on solving the core mission problems. As a government leader, you can steer this shift by taking these steps.

  1. Identify Your Stage: Assess where your current projects and contractors fit in the four-stage model.
  2. Start Small & Securely: Encourage exploration at the early stages, always prioritizing adherence to security and data handling policies.
  3. Champion Clear Policies and Tools: Advocate for clear guidelines on AI usage and the approval of secure, compliant tools, whether cloud-based or on-premise.
  4. Insist on Best Practices: Make the principles of TDD, rigorous code review, and auditable documentation a contractual expectation.
  5. Focus on Augmentation: Frame the conversation around how AI can make small, expert teams more effective, not how it can replace team members.

© 2025 TCG. All rights reserved.
