Can AI Agents Migrate Your Test Automation Framework Autonomously?

Lessons from Converting Playwright TypeScript to C#

Pavel Marunin
  • Principal Consultant
  • TTC Global
  • Auckland, New Zealand

At TTC Global, we are always looking for ways to push the boundaries of technology in order to deliver better services for our clients. Naturally, this includes exploring how artificial intelligence can drive testing efficiency and unlock innovative solutions.

Recently, we explored whether AI could help migrate our mature Playwright-TypeScript (TS) test automation framework into C#. Doing this manually would be a formidable challenge, given the maturity and sophistication of our TS framework, which includes:

  • Advanced business logic and test data modelling with layered abstractions
  • Hierarchical custom fixtures to manage browser state and improve readability/maintainability
  • Consistent logging powered by class decorators
  • Utilities that fill Playwright gaps (e.g. HTML table handling, a robust getText(), string/date utilities, environment management)
  • Custom reporting to address limitations of Allure
  • Integrated mobile testing
  • …and more
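To give a concrete flavour of the decorator-based logging above, here is a hypothetical sketch (not our actual framework code) of the pattern, shown as a plain method wrapper for brevity; class decorators apply this kind of wrapper to every page-object method automatically:

```typescript
// Hypothetical sketch (names are illustrative): wrap an async step so that
// every call logs its entry, arguments, and completion consistently.
type AsyncStep<T> = (...args: unknown[]) => Promise<T>;

function withStepLogging<T>(name: string, step: AsyncStep<T>): AsyncStep<T> {
  return async (...args: unknown[]): Promise<T> => {
    console.log(`→ ${name}(${args.map((a) => JSON.stringify(a)).join(", ")})`);
    const result = await step(...args);
    console.log(`← ${name} done`);
    return result;
  };
}

// Usage: wrap a step once; every call is then logged automatically.
const login = withStepLogging("login", async (user) => `logged in as ${user}`);
```

Because the wrapping lives in one place, every step in the framework logs in the same format without per-method boilerplate; this is exactly the behaviour that has no direct equivalent in C# and forced a redesign.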

Instead of embarking on a traditional migration effort, we asked a simple question: Could modern AI actually accelerate a complex framework migration?

Proof of Concept: Using Agentic AI Swarms

To find out, we ran a proof of concept (POC) using an agentic AI swarm to tackle the migration. This is still an emerging field, with best practices only beginning to take shape. To create a short feedback loop, we used Claude Code, which supports subagents with simple configuration.

We set up three subagents with distinct roles:

  • Architect: Coordinated the migration, ensured feature parity, and enforced C# best practices
  • Developer: Handled code migration and unit tests while keeping the SDK consistent with TS
  • Tester: Verified unit/UI tests and ensured test data alignment with TS

The setup process was straightforward and took only a few minutes. From there, the agents worked iteratively to convert our Playwright-TS framework into C#, retaining the same SDK, tests, and test data where possible.
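Claude Code defines subagents as markdown files with YAML frontmatter, typically under `.claude/agents/`. A sketch of how the Architect role could be configured (the tools list and prompt text here are illustrative, not our actual configuration):

```markdown
---
name: architect
description: Coordinates the migration, ensures feature parity with the TS
  framework, and enforces C# best practices. Use for planning and review.
tools: Read, Grep, Glob
---

You are the migration architect. Maintain the migration plan, review the
developer agent's output for feature parity with the TypeScript framework,
and enforce idiomatic C# design before any work is marked complete.
```

The Developer and Tester agents follow the same pattern, each with its own description and instructions, which is what gives each agent its own focused context.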

Challenges of Migrating to Playwright-C#

This migration was a particularly complex challenge because Playwright-TS and Playwright-C# lack feature parity. Several features critical to our TS framework would require substantial re-architecture to replace core functionality, as summarised in the table below.

While the AI handled much of the migration autonomously, the task exposed key differences between Playwright-TypeScript and Playwright-C# that required human intervention.

| TS Feature | C# Migration Requirement / Challenge |
|---|---|
| Built-in test runner | Must use an external runner such as NUnit, MSTest, or xUnit |
| Native fixtures | Similar base classes exist, but require custom implementation using standard hook mechanisms |
| Built-in hook model | Must adapt to the equivalent hook mechanism of the chosen runner |
| Centralised configuration (playwright.config.ts) | Configuration is decentralised; must be replaced with .runsettings or appsettings.json plus custom config classes |
| Project abstraction and sharding | Must build custom solutions or accept the limitations |
| TypeScript-specific features (decorators, fixtures) | Cannot be replicated directly in C#; require more complex design patterns |
| Allure customisations, Test Component Report | Require a different solution, as the originals are tightly coupled to Playwright fixtures and TS decorators |
| Custom logging | Requires a full redesign, as the original is based on TS decorators |

These gaps are not arbitrary. They reflect architectural differences between Playwright for TypeScript and Playwright for .NET, which make one-to-one migration impossible. Despite these hurdles, the task is well-suited for AI because our TS framework is already mature, proven in production, and backed by comprehensive unit tests that serve as requirements.

Results: What the AI Produced

After working autonomously for about three days with occasional high-level prompting (and frequent rate-limit interruptions), the AI produced a migrated C# framework. The outcome was impressive:

  • High-level parity: The C# solution mirrored the original structure and SDK.
  • Smart adaptations: Without explicit prompting, the AI made reasonable architectural adjustments, such as:
    • Playwright-TS test runner → NUnit
    • TS fixtures → manual fixture setup in BaseTest.cs
    • playwright.config.ts → appsettings.json + PlaywrightTestConfiguration.cs
    • Faker → Bogus
  • Fallbacks: When no direct migration path existed (e.g., Allure customisations or logging decorators), the AI preserved method stubs and inserted comments for future redesign.
  • Debugging perseverance: The AI fixed compilation issues and most test failures, even wrestling with our notoriously complex Table class until it finally passed unit tests.
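As an example of the configuration adaptation, the centralised playwright.config.ts options were mapped onto a strongly typed configuration class backed by a JSON file. A hypothetical sketch of the kind of appsettings.json fragment involved (the keys are illustrative, not the actual file):

```json
{
  "Playwright": {
    "BrowserName": "chromium",
    "Headless": true,
    "BaseUrl": "https://example.test",
    "DefaultTimeoutMs": 30000
  }
}
```

A companion class (PlaywrightTestConfiguration.cs in the AI's output) then reads this section and exposes the values to the NUnit base test class, replacing the single source of truth that playwright.config.ts provides in TS.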

While significant work remains to design new solutions for the parts where direct migration is impossible (e.g. logging and reporting), and to review, fix, and productionise the migrated code, the AI still completed a large part of the migration autonomously.

Technical Observations

Context management and human in the loop

To test the autonomy of the AI swarm, we let it run with minimal human input, intervening only after rate-limit pauses. After two days, the agents declared the migration complete, but the code didn’t compile, and several critical components had not been migrated. The root cause appeared to be context loss: the AI had generated a large internal context but failed to retain its earlier segments, leading to incomplete output. It took an additional full day of manual prompting and guidance to resolve compilation issues, complete the migration of remaining unit tests, and ensure they passed. This experience highlighted the current limitations of agentic AI in handling large-scale, context-heavy migrations without human oversight.

Despite the struggles, the AI achieved a good outcome in the end with only limited prompting and a high-level task definition. The multi-agent setup, with distinct instructions for each agent, helped the swarm recover from individual agents' hallucinations and keep the migration progressing, so the agentic AI swarm setup definitely has merit.

Based on this experience, however, a better approach would be to break the complex migration into manageable steps, have the AI tackle one step at a time, and only proceed to the next step after the output has passed human validation. This mirrors how human agile teams work in practice: the architect creates an implementation roadmap, each epic on the roadmap is broken down into stories and tasks, the tasks are prioritised, and the engineers work on one task at a time until it meets the acceptance criteria. The same approach can be attempted with AI agents doing the work and a human validating the outputs, keeping the AI focused on one small task at a time and preventing context loss. This is a promising area for further research.

AI hallucinations

During the migration of our complex Table class, the AI swarm struggled with focus and consistency: a classic example of AI hallucination. The agents struggled for about a day, repeatedly hitting rate limits, which severely impacted progress. The primary difficulty lay in optimising the class's performance so that unit tests executed efficiently without timing out, the same core challenge we had had to solve in the original TS framework. A direct like-for-like conversion was possible in this instance, but the AI took countless iterations to implement it, trying tweaks that did not work. In scenarios like this, where the implementation is complex but well understood, manual refactoring (starting from the AI-generated boilerplate and augmented by AI-assisted code completion) would likely have achieved the required outcome faster. Breaking the work into smaller tasks would also have helped the AI keep its focus on one outcome at a time and could have sped up the migration of the Table class.
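To illustrate the kind of optimisation at stake, consider a simplified, hypothetical sketch (names and shapes are illustrative, not our actual Table class): querying cells one at a time incurs a browser round trip per cell, so the performant approach snapshots the table once and answers lookups from memory.

```typescript
// Hypothetical sketch: a table snapshot captured in a single browser call,
// then indexed in memory so repeated cell lookups are cheap.
type TableSnapshot = {
  headers: string[];
  rows: string[][];
};

// Build an index once; subsequent lookups avoid any further browser calls.
function indexByColumn(table: TableSnapshot, keyColumn: string): Map<string, string[]> {
  const col = table.headers.indexOf(keyColumn);
  if (col < 0) throw new Error(`Unknown column: ${keyColumn}`);
  const index = new Map<string, string[]>();
  for (const row of table.rows) {
    index.set(row[col], row);
  }
  return index;
}

// Look up a single cell by row key and column name from the snapshot.
function getCell(
  table: TableSnapshot,
  index: Map<string, string[]>,
  key: string,
  column: string
): string {
  const row = index.get(key);
  if (!row) throw new Error(`No row with key: ${key}`);
  return row[table.headers.indexOf(column)];
}
```

Getting this snapshot-then-index shape right (rather than re-querying the live page on every lookup) is the kind of performance-critical design decision the AI kept circling around.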

Lessons Learned

While impressive, the AI-generated framework was not production-ready. Manual verification, redesign of non-transferable features (logging, Allure, reporting), and rigorous validation would still be required before use.

Some takeaways from this experiment:

  • Human-in-the-loop is essential: Combining AI autonomy with human oversight yields far better results, with humans reviewing architectural decisions and validating output step by step.
  • Breaking down large epics into smaller tasks could help: this would allow the AI to keep the context focused on the outcome, reduce hallucinations, and speed up the implementation.
  • Multi-agent design works: Context separation across specialised agents helped the AI to reduce hallucinations and achieve the result despite many failed attempts along the way.
  • Subscription limitations: Running such a project requires expensive subscriptions. The Pro plan frequently hit rate limits, pausing work for hours. Realistically, the Max Plan or an enterprise-level subscription would be needed for large-scale autonomous projects.

Although the migrated framework wasn’t production-ready, this proof of concept offered valuable insights. It showed how agentic AI swarms can handle complex migration tasks - but also why human expertise remains indispensable. The technology is evolving fast, and future tools will likely make AI-assisted framework migration far more practical. Now, can autonomous AI agents write tests using our existing mature TypeScript framework? This would be another research topic. Watch this space!

Read the Test Lab Blogs

We'll be publishing our R&D under Test Lab! To get insights sent to your inbox or additional information about our Test Lab innovations, submit this interest form.

Next Steps

Reach out to TTC Global to explore our experiments in more depth or discuss how we can help accelerate your innovation goals. Whether you're just starting your AI-augmented quality engineering journey or looking to deepen your capabilities, we’d love to connect!