Codurance recently conducted an internal engineering experiment to explore how modern AI coding tools could accelerate software delivery while maintaining production-grade engineering standards.
Rather than building a trivial prototype, the team chose to rebuild a real application: Codurance’s internal time sheet system.
The objective was not simply to generate code quickly. Instead, the team wanted to answer a practical question facing many technology leaders today:
Can AI meaningfully accelerate software delivery while maintaining the architecture, quality and security standards required in production systems?
The results demonstrated that when AI is combined with strong engineering guardrails, it can significantly accelerate development without sacrificing code quality.
How Codurance helps businesses use AI to safely accelerate delivery
AI coding tools are evolving rapidly, but many engineering leaders remain cautious about introducing them into real delivery environments.
Common concerns include:
Inconsistent code quality
Architectural drift
Security risks
Long-term maintainability
Difficulty integrating AI tools into production delivery workflows
At the same time, the potential productivity gains are difficult to ignore.
Codurance wanted to explore whether AI could genuinely accelerate delivery while still adhering to the engineering discipline expected in production systems.
To test this, Codurance engineers rebuilt the company's internal time sheet application from scratch using AI-assisted development tools.
The goal was to achieve functional parity with the existing system, including:
Calendar-based time entry
Activity and region management
User and administrative controls
Business rules and validation
Reporting and workflow functionality
The original system had previously been estimated at approximately 60 days of development effort for a single engineer.
Rather than removing engineers from the loop, the experiment focused on human-AI collaboration.
Engineers defined the architecture, tooling and development guardrails, while AI agents implemented much of the application functionality.
The experiment was conducted using a modern AI-assisted development workflow built around widely available engineering tools.
The development environment was centred on Cursor, an AI-native IDE that integrates large language models directly into the coding workflow.
Cursor allows developers to interact with a codebase through prompts, orchestrate agent-driven development tasks and automate multi-file changes across the system.
Within this environment, the team used multiple AI coding agents:
Claude Code for reasoning across complex multi-file implementations
OpenAI Codex for autonomous feature generation and code production
Using multiple models allowed the team to leverage different strengths in reasoning, implementation and review.
The application itself was deployed to Amazon Web Services (AWS) using infrastructure-as-code tooling, ensuring the resulting system could be deployed and operated using a standard cloud delivery model.
The stack included:
A TypeScript full-stack web architecture
PostgreSQL for persistence
Automated CI/CD pipelines
Ephemeral development environments
This ensured the resulting application could be validated and operated using the same practices applied to any production system.
Three core principles guided the development process.
The first was fast feedback: the system was designed to provide rapid feedback to both engineers and AI agents.
The architecture incorporated:
End-to-end type safety
Automated testing
Linting and static analysis
Infrastructure as code
Reproducible development environments
These mechanisms ensured that incorrect implementations were detected immediately.
Fast feedback loops proved essential to maintaining code quality while working with AI-generated code.
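End-to-end type safety is the simplest of these mechanisms to illustrate. The sketch below is not the actual time sheet code (the `TimeEntry` type and its fields are assumptions): it shows how sharing one contract type between server and client means an incorrect AI-generated change, such as renaming or retyping a field, fails the type check immediately rather than surfacing at runtime.

```typescript
// Illustrative sketch only: the time sheet system's real types are not
// public, so TimeEntry and its field names are assumptions.

// A single shared contract type used by both server and client.
interface TimeEntry {
  date: string;       // ISO date, e.g. "2024-03-15"
  activityId: string;
  hours: number;
}

// Server-side handler returns the shared type...
function getEntries(): TimeEntry[] {
  return [
    { date: "2024-03-15", activityId: "consulting", hours: 7.5 },
    { date: "2024-03-16", activityId: "consulting", hours: 6 },
  ];
}

// ...and the client consumes the same type, so renaming or retyping a
// field on either side becomes a compile error on the other.
function totalHours(entries: TimeEntry[]): number {
  return entries.reduce((sum, e) => sum + e.hours, 0);
}
```

Because the compiler checks every call site against the shared contract, a whole class of AI mistakes is caught before any test even runs.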
The second was local quality gates. To ensure generated code met engineering standards, the team introduced a local quality-gate pipeline.
Before any feature was considered complete, it had to pass:
Type checking
Linting
Automated tests
Build validation
End-to-end verification
This pipeline ensured that the AI generated working, testable code rather than incomplete prototypes.
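A pipeline like this can be sketched as a small fail-fast runner. The gate commands below are assumptions (typical npm scripts), not the exact pipeline Codurance used; the point is the structure: gates run in a fixed order, and the first failure stops the run.

```typescript
import { execSync } from "node:child_process";

// Sketch of a fail-fast local quality-gate runner. The commands are
// assumed conventional npm scripts, not the team's actual pipeline.
type Gate = { name: string; command: string };

const gates: Gate[] = [
  { name: "type-check", command: "npx tsc --noEmit" },
  { name: "lint",       command: "npx eslint ." },
  { name: "tests",      command: "npm test" },
  { name: "build",      command: "npm run build" },
  { name: "e2e",        command: "npm run test:e2e" },
];

// Run each gate in order; the first failure aborts the run, so an
// AI-generated feature only counts as "done" when every gate passes.
function runGates(
  pipeline: Gate[],
  exec: (cmd: string) => void = (cmd) => execSync(cmd, { stdio: "inherit" })
): string[] {
  const passed: string[] = [];
  for (const gate of pipeline) {
    try {
      exec(gate.command);
    } catch {
      throw new Error(`Quality gate failed: ${gate.name}`);
    }
    passed.push(gate.name);
  }
  return passed;
}
```

Running the gates locally, before commit, gives the AI agent the same pass/fail signal a CI server would, but within seconds rather than minutes.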
The third was incremental, checklist-driven implementation. Rather than asking AI to rebuild the entire system in one step, engineers first extracted a structured list of system features from the original codebase.
This checklist guided the AI through implementation and ensured no functionality was missed.
Features were then implemented incrementally and reviewed by engineers before being committed.
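A checklist of this kind can be represented as simple structured data. The feature names below come from the parity list earlier in this article, but the statuses and helper functions are illustrative assumptions about how such a workflow might be tracked.

```typescript
// Hypothetical shape of the extracted feature checklist; feature names
// are taken from the article, statuses are illustrative.
type FeatureStatus = "pending" | "implemented" | "reviewed";

interface Feature {
  name: string;
  status: FeatureStatus;
}

const checklist: Feature[] = [
  { name: "Calendar-based time entry",          status: "reviewed" },
  { name: "Activity and region management",     status: "implemented" },
  { name: "Reporting and workflow functionality", status: "pending" },
];

// Functional parity is reached only when every feature has been
// implemented by the AI and reviewed by an engineer.
function hasParity(features: Feature[]): boolean {
  return features.every((f) => f.status === "reviewed");
}

// The next item for an AI agent to pick up.
function nextFeature(features: Feature[]): Feature | undefined {
  return features.find((f) => f.status === "pending");
}
```

Keeping the checklist explicit makes progress auditable: the AI works from the same list the engineers review against, so no feature can silently go missing.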
The most striking outcome of the experiment was the combination of delivery speed and maintained engineering quality.
As noted above, the original system had been estimated at approximately 60 developer days of effort.
Using AI-assisted development, a single engineer rebuilt the system to near feature parity in approximately 3 days.
The resulting application included:
A CI/CD pipeline
Automated unit, integration and end-to-end tests
Linting and static analysis
Automated deployments
Ephemeral environments
Full end-to-end type safety
At this stage the application was not yet fully production ready. However, engineers estimated that around one additional day of work would be required to complete the remaining refinements.
In practical terms, this represented an acceleration of more than an order of magnitude (i.e. more than ten times faster) compared with the original delivery estimate.
However, speed alone was not the objective of the experiment.
Maintaining engineering quality was equally important.
To ensure the results were credible, the team evaluated the generated code using the same static analysis and code quality tools typically applied to production systems.
The AI-generated system was compared against a human-written control version of the same application, using standard software quality metrics.
The analysis showed that the AI-assisted implementation matched or outperformed the manually written version across multiple indicators, including:
Security vulnerabilities
Cyclomatic complexity
Technical debt indicators
Test coverage
The use of automated quality gates during development ensured that the generated code met the same standards expected of any Codurance production system.
The experiment demonstrated that AI-assisted development does not have to come at the expense of code quality or security, provided the right architectural and engineering controls are in place.
Several important insights emerged from the experiment:
The architecture, tooling and development guardrails were designed by experienced engineers.
Once those foundations were in place, AI was able to implement functionality extremely quickly.
AI tools perform best in environments with:
Strong type systems
Automated testing
Infrastructure as code
Clear development patterns
Without these foundations, AI adoption can introduce risk rather than reduce it.
Tight feedback loops enabled both engineers and AI agents to iterate rapidly and correct mistakes early.
This significantly reduced the cost of experimentation during development.
AI-assisted development has the potential to significantly accelerate software delivery.
However, the organisations that benefit most will not simply adopt AI tools.
They will invest in the engineering environments that allow AI to operate safely and effectively.
This includes:
Strong architectural governance
Automated testing and quality gates
Infrastructure transparency
High-quality developer experience
When these conditions exist, AI can dramatically increase the pace of delivery without compromising maintainability, security or system integrity.
Codurance works with technology leaders to create engineering environments where AI can safely accelerate delivery.
This includes:
AI Delivery Readiness Assessments: Evaluating whether engineering environments support AI-assisted development.
Platform Modernisation: Improving architecture, infrastructure and developer experience to unlock faster delivery.
AI Delivery Guardrails: Implementing the engineering controls needed to safely adopt AI coding tools.
If you'd like to find out more, get in touch with us today.
Alan Jackson is a Director of Client Solutions at Codurance, with more than 20 years of experience in technology. He works with organisations to shape technology strategies and software delivery approaches that enable meaningful business change, and helps clients explore how AI-enabled engineering practices and AI-powered systems can unlock new opportunities while maintaining the quality, reliability and discipline required for production systems.