AI, MCP & Playwright: Your Questions Answered

Expert answers to real Playwright and MCP questions, covering frameworks, AI integration, CI/CD, migration, and scaling test automation

Mei Reyes Tsai
  • GM, Innovation and Technology
  • TTC Global
  • Auckland, NZ

With over 100 registrants and attendees from around the world, our recent Playwright + MCP webinar was truly an experience! 

I had the pleasure of presenting alongside TTC Global Principal Consultants, Allan Munar and Pavel Marunin, as we unpacked how Playwright, strong architectural patterns, and AI-powered MCP workflows are reshaping how teams test at scale. We were also joined by two fantastic client panellists (Jovert Saulon from Bankwest and Caitlin Swain from Metlifecare) who generously shared real-world lessons from their own Playwright journeys. 

Listening to our practitioners reminded me not only how fast the testing landscape is shifting, but how essential strong engineering foundations are for anyone trying to scale Playwright effectively. And while those insights are invaluable, the questions that surfaced during and after the session were even more telling. They painted a clear picture of the hurdles teams face day-to-day when adapting to new tools, frameworks, and AI-driven workflows.

Below, you’ll find our answers, grouped by theme so you can dig into the areas most relevant to your team.

Playwright Framework & Integrations

What strategies are recommended for integrating Playwright with cloud-based CI/CD platforms to ensure compatibility across multiple environments?

Treat your environments and infrastructure as code: your tests should run unchanged in any environment. The most common sources of multi-environment drift are configuration, test data, and tool versions.

We recommend:

  • Using containerisation (e.g., Docker) to standardise environments.
  • Leveraging Playwright’s built-in support for multiple browsers and platforms.
  • Integrating with CI/CD tools like Azure DevOps, GitHub Actions, or Bamboo.
  • Setting up environment variables and configuration files for flexibility.
  • Running tests in parallel and sharding for scalability.

This ensures consistent behaviour across local, CI, and production environments.
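As a minimal sketch of the environment-variable approach (the variable names and values below are illustrative, not part of any standard):

```typescript
// playwright.config.ts: one config, any environment
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,
  // More retries and a fixed worker count on CI; local defaults otherwise.
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 4 : undefined,
  use: {
    // The same suite targets dev, test, or staging via one variable.
    baseURL: process.env.BASE_URL ?? 'http://localhost:3000',
    trace: 'on-first-retry',
  },
});
```

Running `BASE_URL=https://staging.example.com npx playwright test --shard=1/4` then points the same suite at staging while splitting the run across machines.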

How do you manage Playwright browser versions to avoid inconsistencies between local, CI and production environments?

Best practice is to use Playwright’s built-in browser management, which downloads and installs the matching browser builds automatically. Pin the same Playwright version in your package.json and your pipeline configuration, and use containerisation to keep environments consistent; Playwright publishes Docker images tagged per release, which makes pinning the browser set straightforward.

Is FlaUI just in .NET, and is it open source?

FlaUI is an open-source .NET library for automating Windows desktop applications. At TTC Global, we’ve integrated FlaUI into our Playwright Accelerator by creating a custom fixture that:

  • Runs desktop and web tests under one unified runner.
  • Provides reusable methods for common desktop actions.
  • Maintains consistent state and reporting across hybrid workflows.
  • Works seamlessly in CI/CD pipelines with enhanced Allure reporting.

This lets our teams extend Playwright beyond the browser for end-to-end automation without adding complexity.
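The Accelerator’s fixture itself is proprietary, but the underlying pattern is a standard Playwright custom fixture. A rough sketch, assuming a hypothetical FlaUI-based .NET helper project (`DesktopHelper` and `DesktopDriver` are invented names, not TTC’s actual code):

```typescript
import { test as base } from '@playwright/test';
import { spawn, type ChildProcess } from 'node:child_process';

// Hypothetical wrapper that proxies desktop actions to a FlaUI helper.
class DesktopDriver {
  constructor(private proc: ChildProcess) {}
  // Reusable desktop actions (click, type, window handling) would live here.
  async close() {
    this.proc.kill();
  }
}

export const test = base.extend<{ desktop: DesktopDriver }>({
  desktop: async ({}, use) => {
    // Launch the desktop helper alongside the browser-based tests.
    const proc = spawn('dotnet', ['run', '--project', './DesktopHelper']);
    const driver = new DesktopDriver(proc);
    await use(driver); // Desktop and web steps share one test and one report.
    await driver.close(); // Teardown keeps state consistent between tests.
  },
});
```

Because the fixture lives inside the Playwright runner, desktop steps inherit the same lifecycle, parallelism, and reporting as browser steps.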

Does your Playwright Accelerator framework support Native app testing?

Yes! The TTC Global Playwright Accelerator Framework integrates seamlessly with Appium for native mobile app testing. The Playwright runner executes the tests and provides the shared utilities (logging, dates, etc.), pipelines, and consistent reporting, while interaction with the native app (locators, taps, gestures) is handled by Appium.
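A simplified sketch of that split, assuming a local Appium server and the webdriverio client (this shows the general pattern, not TTC’s actual implementation):

```typescript
import { test as base } from '@playwright/test';
import { remote } from 'webdriverio';

type AppiumDriver = Awaited<ReturnType<typeof remote>>;

export const test = base.extend<{ app: AppiumDriver }>({
  app: async ({}, use) => {
    // Playwright owns the runner, reporting, and lifecycle;
    // Appium owns the native interactions.
    const driver = await remote({
      hostname: 'localhost',
      port: 4723,
      capabilities: {
        platformName: 'Android',
        'appium:automationName': 'UiAutomator2',
        'appium:app': '/path/to/app.apk', // illustrative path
      },
    });
    await use(driver);
    await driver.deleteSession();
  },
});

test('native login screen accepts a username', async ({ app }) => {
  await app.$('~username').setValue('demo'); // "~" = accessibility id
});
```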

How does the framework handle payroll applications like Datacom or PeopleSoft, where we may have running batch processes as well as integration with other applications?

Our framework is designed to support complex enterprise applications, including payroll systems like Datacom's Datascape and PeopleSoft, something we have already proven with several of our customers. It can automate both UI and backend processes, handle batch operations, and integrate with other systems using API and database connectors.

For batch processes, we typically use hooks to monitor job status and validate outcomes, while integration points are managed via custom utilities and reporting.

If you were to use Playwright out of the box instead, it all boils down to how you design your tests and how your test framework supports orchestration. There are multiple patterns for handling long-running batch processes, asynchronous outcomes, multi-system chains, and so on; the key is that your framework can coordinate these hybrid flows. One common pattern is to poll the job status rather than sleep for a fixed time, as sketched below.
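A hedged illustration of that polling pattern using Playwright’s built-in `request` fixture and `expect.poll` (the endpoints and statuses are invented for the example, and `baseURL` is assumed to be set in the config):

```typescript
import { test, expect } from '@playwright/test';

test('payroll batch run completes and propagates', async ({ request }) => {
  // Kick off the batch job via an API (hypothetical endpoint).
  const started = await request.post('/api/payroll/batch-runs', {
    data: { period: '2025-06' },
  });
  const { id } = await started.json();

  // Poll the job status with a generous timeout instead of a fixed sleep.
  await expect
    .poll(
      async () => {
        const res = await request.get(`/api/payroll/batch-runs/${id}`);
        return (await res.json()).status;
      },
      { timeout: 15 * 60_000, intervals: [30_000] }, // up to 15 min, every 30 s
    )
    .toBe('COMPLETED');

  // Downstream UI or database validations would follow here.
});
```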

AI, MCP & Emerging Tooling

Are there any pragmatic AI tools today (or emerging soon) that successfully support Playwright automation? How mature are they in real-world use?

Yes, as shown in our demo, tools like GitHub Copilot and OpenAI’s code assistants are increasingly used to support Playwright automation. While they are indeed promising, real-world maturity varies. Most teams use them for code generation, boilerplate, and debugging, but human oversight is still essential for reliability and accuracy.

Real-world use indicates 13–40% productivity gains, depending on the complexity of the test being automated.

Please note: the demo presented is proprietary IP, and the code is not available for public distribution. However, we’re happy to discuss it in more depth or support your needs directly.

Can MCP handle hybrid (UI + backend) E2E with multisystems?

While MCP can support hybrid E2E testing across multiple systems, MCP doesn’t execute the tests itself. Instead, it connects your AI model to the tools that execute the tests.

Playwright MCP supports capturing browser network requests, and you can add as many MCP servers as you need (e.g., a REST MCP server, a custom internal MCP server, etc.).
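For reference, most MCP clients register servers through a small JSON config along these lines; the Playwright entry uses the official `@playwright/mcp` package, while `internal-rest` is a placeholder for a custom server of your own:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "internal-rest": {
      "command": "node",
      "args": ["./mcp/internal-rest-server.js"]
    }
  }
}
```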

Can we use the MCP to migrate tests into Playwright?

Short answer: yes, MCP can facilitate migration by connecting AI tools to your systems and helping automate test generation and conversion.

However, we do not recommend a like-for-like migration. Migration is the best opportunity to clean up technical debt and improve architecture, instead of copying legacy problems into a new framework.

If we already have a mature Playwright framework with developed tests, how does the MCP add new tests without breaking or disturbing existing structure?

MCP can generate new tests and integrate with existing frameworks by reusing page objects, fixtures, and coding standards, provided your framework is modular and the agents you created follow best practices.

In our demo, we showed exactly this: a mature Playwright framework with existing tests, where the generated code reused existing page objects and fixtures wherever possible.

Human review is always recommended to make sure integration is seamless.

Can you train a model so that it learns your framework over time so it responds faster next time?

You can fine-tune AI models or use prompt engineering to improve their familiarity with your framework. However, most current tools rely on prompt context rather than persistent memory, so repeated use and well-crafted prompts help improve response quality.

What effort is required to maintain a prompting framework as new models come out?

Maintaining a prompting framework requires periodic updates to prompts and instructions as new models are released. This ensures compatibility and leverages improvements in model capabilities.

A test automation architect should treat this as continuous improvement. Keep in mind that prompt-tweaking is an ongoing process regardless of the model used.

How is AI hallucination handled, if any?

Hallucinations are managed by keeping humans in the loop: reviewing and validating AI-generated code, using clear and specific prompts, and version-controlling instructions to ensure repeatability and accuracy.

This must be embedded into governance; only accept the changes you want and reject any hallucinated or incorrect output.

How do you manage data privacy with 3rd-party tools such as Copilot?

Data privacy is managed by restricting sensitive data exposure, using secure APIs, and following organisational policies for third-party integrations. Always review privacy agreements and configure tools to limit data sharing.

GitHub Copilot’s Trust Center provides detailed information.

We follow strict security compliance (with or without AI):

  • No secrets in our repo
  • Test accounts cannot access production
  • Test data only
     

Migration & Case Studies

Is there a case study where you successfully migrated an entire library of Tosca automated scripts into Playwright (for SAP)?

Absolutely! We have customers currently migrating to Playwright from other automation tools, both open source and proprietary. Please contact us for a detailed case study or to discuss your specific requirements; we’re happy to chat.

Conclusion

The enthusiasm around Playwright, MCP, and AI-assisted automation is only growing, and it’s exciting to see so many organisations thinking strategically about how to scale their automation well.

If this Q&A sparks ideas or you’d like to explore your organisation’s Playwright uplift, my team and I would love to talk. Feel free to contact me directly to set up a time to chat.

And if you missed the webinar, you can watch the full session here.