Accelerating AI System Validation in a Statewide Digital Transformation 

As part of a statewide digital modernization effort, a large public safety agency implemented a secure AI knowledge assistant to transform access to policies, procedures, and internal documentation. iLAB partnered with the agency to lead the testing, automation, and validation efforts supporting the solution’s rollout.

iLAB’s contributions included designing and executing AI-optimized test plans, improving automation frameworks through descriptive programming, and validating the system’s AI capabilities across chat, document interaction, and policy retrieval components.

This project demonstrates iLAB’s leadership in validation and verification for AI-driven software systems, ensuring secure, compliant, and high-performing deployments.

Client Background

The client oversees one of the nation’s largest correctional and law enforcement systems, managing institutional operations, staff, and regulatory documentation. To modernize its IT infrastructure, the agency launched an initiative to empower personnel with faster, smarter access to policy and procedure content, while maintaining strict compliance with state and federal security standards.

 

Client Challenges

  • Time-Consuming Test Execution – Legacy shared object repositories slowed test cycles and reduced automation reliability.
  • Manual Policy Access – Staff lacked efficient tools to search and query policy and procedural documentation.
  • Inconsistent User Experience – Responses needed to adjust based on user roles (e.g., field officer vs. director).
  • Multilingual Accessibility – Content had to be validated across English, Spanish, and simplified formats.
  • Security & Compliance Requirements – Testing had to ensure compliance with CJIS and state IT standards in a GovCloud environment.

iLAB’s Solution

While the AI solution was developed by another vendor, iLAB was engaged to ensure quality, accuracy, and compliance through advanced testing methodologies.

Key focus areas included:

  • End-to-end testing of an LLM-powered application covering:
    • AI Chat Interface – Conversational Q&A for policy and document queries.
    • Document Interaction – Uploading and querying internal files using natural language.
    • Policy & Procedure Library – AI-enhanced search with role- and language-based responses.
  • AI Test Automation Tool for Software Testing – Leveraging UFT One with descriptive programming to improve automation speed and reliability.
  • Persona and Language Validation – Ensuring contextually accurate responses across roles and languages (see the sketch below).
  • Continuous Feedback Loop – Providing iterative feedback to developers to enhance AI accuracy, including recognition of complex table data.

Each component underwent rigorous validation and verification prior to go-live.
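
To illustrate what persona and language validation can look like in practice, the sketch below shows a minimal Python harness that replays policy questions across role/language combinations and applies basic triage checks. The endpoint URL, request fields, and heuristics are assumptions for illustration only, not the agency's actual interface.

```python
import requests  # assumed HTTP client; the real system's interface is not public

# Hypothetical endpoint for the AI knowledge assistant (illustrative only).
ASSISTANT_URL = "https://assistant.example.gov/api/query"

# Role/language matrix drawn from the case study: responses must adjust by
# persona (e.g., field officer vs. director) and language (English, Spanish).
TEST_MATRIX = [
    {"role": "field_officer", "language": "en",
     "query": "What is the use-of-force reporting procedure?"},
    {"role": "field_officer", "language": "es",
     "query": "¿Cuál es el procedimiento para reportar el uso de la fuerza?"},
    {"role": "director", "language": "en",
     "query": "Summarize policy changes effective this quarter."},
]

def validate_response(case: dict, answer: str) -> list[str]:
    """Return human-readable failures for one persona/language case."""
    failures = []
    if not answer.strip():
        failures.append("empty response")
    # Crude language heuristic for triage; real judgments were made by humans.
    if case["language"] == "es" and not any(ch in answer for ch in "áéíóúñ¿¡"):
        failures.append("response does not look like Spanish")
    return failures

for case in TEST_MATRIX:
    resp = requests.post(ASSISTANT_URL, json=case, timeout=30)
    resp.raise_for_status()
    problems = validate_response(case, resp.json().get("answer", ""))
    print(f"{case['role']}/{case['language']}:",
          "PASS" if not problems else f"FAIL ({'; '.join(problems)})")
```

Automated checks like these can only triage obvious failures; as the engagement's lessons learned note, human reviewers made the final call on contextual accuracy.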

Implementation Approach

  • Descriptive Programming Optimization – Transitioned from shared object repositories to descriptive programming models, reducing test execution time by 80% (see the sketch after this list).
  • Wireframe-Based QA Engagement – Test scripts were initiated from screenshots before full system delivery, enabling early quality involvement.
  • AI Content Validation – Over 265 policy documents were uploaded and queried to ensure accurate multilingual performance.
  • Persona-Based Testing – Verified system behavior across multiple user roles and security levels.
  • Feedback Integration – Test results were fed back to developers to refine model behavior and strengthen reliability.
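
For readers unfamiliar with the first item, descriptive programming identifies UI objects by stating their properties inline at runtime instead of resolving them through a pre-captured shared object repository, which is what makes scripts resilient to repository churn. In UFT One this is written in VBScript; the Python sketch below is a conceptual illustration only, and every name in it is hypothetical.

```python
# Conceptual contrast only; all names here are hypothetical. In UFT One,
# descriptive programming is written in VBScript, along the lines of:
#   Browser("title:=Policy Library").Page("title:=Policy Library") _
#       .WebEdit("name:=search").Set "use of force"

# Shared-repository style: objects are resolved through a map captured ahead
# of time, which must be regenerated whenever the application changes.
shared_repository = {
    "SearchBox": {"type": "WebEdit", "name": "search"},
    "SearchButton": {"type": "WebButton", "name": "go"},
}

def find_by_repository(key: str) -> dict:
    """Look up a pre-captured object description by its stored key."""
    return shared_repository[key]

# Descriptive style: the test states identifying properties inline, so the
# object is resolved at runtime and no stored repository needs maintenance.
def find_by_description(obj_type: str, **properties) -> dict:
    """Build an object description on the fly from the given properties."""
    return {"type": obj_type, **properties}

# Both resolve the same control; only the descriptive form survives
# repository churn without a regeneration step.
assert find_by_repository("SearchBox") == find_by_description("WebEdit", name="search")
```

Removing the repository-maintenance step is consistent with the 80% execution-time reduction cited above, since scripts no longer wait on stale object lookups or repository regeneration.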

Technology Solutions

  • AI Knowledge Assistant – Secure, persona-aware chat and document query system.
  • UFT One – Core automation platform enabling descriptive programming and AI-based object recognition.
  • OpenText ALM – Managed test cases, traceability, and defect tracking.
  • LoadRunner – Simulated performance and load scenarios.
  • GovCloud Infrastructure – Supported secure, CJIS-compliant testing.

AI Governance and Broader Value

This initiative reflects iLAB’s broader approach to responsible AI testing across the software development lifecycle. Practices included:

  • AI-driven test optimization and modularization
  • Automated generation of test descriptions for clarity
  • Smart object recognition for visual validation
  • Excel and macro automation to streamline reporting
  • AI-supported document review for traceability and compliance

Each AI-generated output underwent human validation, and all testing aligned with strict security and privacy requirements — ensuring trustworthy, auditable AI outcomes.
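
As a concrete illustration of that human-validation step, the sketch below models a simple approval gate in Python: AI-generated artifacts are held until a named reviewer signs off, keeping the outcome auditable. The record fields and workflow are assumptions; the case study states only that every AI-generated output was human-validated.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputRecord:
    """One AI-generated artifact (e.g., an auto-written test description)
    awaiting human sign-off. Field names are hypothetical."""
    artifact_id: str
    content: str
    generated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: str | None = None
    approved: bool = False

    def approve(self, reviewer: str) -> None:
        """Record the human reviewer, making the approval auditable."""
        self.reviewed_by = reviewer
        self.approved = True

def release_gate(records: list[AIOutputRecord]) -> list[AIOutputRecord]:
    """Only human-approved artifacts pass the gate; the rest are held back."""
    return [r for r in records if r.approved]

record = AIOutputRecord("TC-001-desc",
                        "Verify policy search returns role-appropriate results.")
record.approve(reviewer="qa.lead")
print([r.artifact_id for r in release_gate([record])])  # ['TC-001-desc']
```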

Results

  • Test Execution Speed – Automation execution time reduced by 80% (from 5 minutes to 1 minute) via descriptive programming.
  • Searchable Document Access – 265+ documents indexed and tested for multi-language AI querying.
  • Tailored User Experience – Persona-specific responses validated and refined across user roles.
  • Early Test Scripting – Enabled testing kickoff ahead of full delivery, reducing project timelines.
  • Compliance and Security – All testing aligned with CJIS and 60GG-2 standards within a GovCloud environment.
  • Lifecycle Traceability – All test artifacts, results, and defects tracked within OpenText ALM.

Lessons Learned

  • AI Needs Human Oversight – Human validation remains essential for accuracy and compliance.
  • Early QA Delivers Results – Early test scripting accelerated timelines without compromising quality.
  • Adaptability is Key – Strategic use of both AI-driven and traditional testing improved efficiency.
  • Feedback Fuels AI Maturity – Test feedback directly improved model precision and understanding.

Future Considerations

  • Predictive defect analytics using AI.
  • Real-time AI-powered test case generation from requirements documents (sketched below).
  • Continuous integration of AI into CI/CD pipelines for dynamic testing.
  • Expansion of AI features to include audit tracking and automated content classification.
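
To make the second item concrete, the sketch below outlines one plausible shape for AI-powered test case generation from a requirement. The call_llm function is a hypothetical stand-in for whichever model endpoint an organization adopts (it returns canned output here so the sketch runs offline), and the prompt and JSON schema are assumptions.

```python
import json

PROMPT_TEMPLATE = """You are a QA analyst. Write test cases for this requirement.
Requirement: {requirement}
Respond as a JSON list of objects with "title", "steps", and "expected" keys."""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model endpoint (cloud or self-hosted).
    Returns canned output here so the sketch runs without network access."""
    return json.dumps([{
        "title": "Policy search returns role-appropriate results",
        "steps": ["Log in as a field officer", "Search for 'use of force'"],
        "expected": "Only documents visible to the field-officer role appear",
    }])

def generate_test_cases(requirement: str) -> list[dict]:
    """Turn one requirement into draft test cases; a human still reviews them."""
    raw = call_llm(PROMPT_TEMPLATE.format(requirement=requirement))
    return json.loads(raw)

for case in generate_test_cases("Search results must respect the user's role."):
    print(case["title"])
```

Generated cases would still pass through the human-validation gate described earlier before being tracked as test artifacts in OpenText ALM.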

Conclusion

This project highlights how a well-governed approach to AI testing and automation can dramatically improve both software quality and operational efficiency. By applying structured methodologies and intelligent tooling, iLAB enabled the agency to deliver a secure, multilingual, and role-aware AI knowledge system—reducing test execution time by 80%, ensuring compliance in a GovCloud environment, and validating end-to-end performance of an LLM-powered application.

As organizations increasingly integrate AI into their workflows, validation and verification for AI-driven software systems have become critical to maintaining trust, accuracy, and compliance. iLAB continues to help clients meet that challenge through AI test automation tools for software testing, continuous QA innovation, and responsible AI governance.

Ready to ensure your next AI initiative meets the highest standards for performance, security, and reliability? 

Learn how our experts can help you design, automate, and validate your AI and generative software applications with confidence.