
The Human Touch in Testing AI Systems: Stories from the Field

Discover how TTC Global applied AI testing expertise to help fruit growers access accurate, relevant information faster through an AI-powered chat tool.

Raj Natalia
  • Senior Consultant
  • TTC Global
  • Auckland, New Zealand

My Journey into Testing AI for Fruit Growers  

After 22 years of testing all things telco, it was time to move on to something a little further outside my comfort zone. When TTC Global talked to me about my next project engagement, I didn’t realise it would entail telling people I would be testing ‘AI for fruit growers’. AI and horticulture aren’t the most obvious pairing, but this project proved just how valuable technology can be in the growing community.

TTC Global has an established AI practice, including an AI-Augmented Quality Engineering platform, as well as expertise in evaluating and testing AI readiness, AI-powered testing tools, and AI systems. This was an opportunity to apply our testing expertise to a sector that is vital to New Zealand and the wider world, and one that is evolving rapidly.

 

From Orchards to AI Algorithms 

Fruit growing has always required skill, dedication, and adaptability. But in recent years, it has also demanded the ability to navigate a growing number of compliance standards, technical references, and industry updates. For growers, finding the right information can be time-consuming and, at times, overwhelming.

That’s where TTC Global was engaged. Our client had built an AI-powered chat tool into their existing website, designed to provide growers with quick, accurate, and relevant answers. The idea was to reduce the frustration of searching through large amounts of documentation and instead give growers confidence that the answers were at their fingertips.

 

Testing AI vs. Traditional Software 

At TTC Global, we’ve tested many types of systems for our clients, but testing AI is a very different experience.  Traditional systems are predictable; the same input always produces the same output.  I think you know where I’m going with this. 

AI, however, is non-deterministic, which means you can ask the same question twice and receive slightly different answers. This can be quite a challenge, to put it mildly. 

My role was to ensure that regardless of these (subtle) variations, the tool consistently delivered information that was accurate, relevant, and trustworthy.  
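In practice, that meant moving away from exact-match assertions. One common approach, shown here as a minimal, illustrative sketch rather than our actual test code, is to assert on the properties every acceptable answer must have, such as the key facts and a citation, rather than on the exact wording. The `ask` helper and the expected facts below are hypothetical stand-ins, not the tool’s real API or test data.

```python
# Minimal sketch of a property-based check for a non-deterministic chat tool.
# "ask", REQUIRED_FACTS and FORBIDDEN are illustrative assumptions, not the
# project's actual implementation.
import re

REQUIRED_FACTS = ["withholding period", "days"]   # facts every valid answer must mention
FORBIDDEN = ["i'm not sure", "as an ai"]          # phrasing we don't want surfaced to growers

def ask(question: str) -> str:
    """Placeholder for the real chat client call."""
    raise NotImplementedError

def test_answer_is_consistent_in_substance():
    question = "What is the withholding period for this spray?"
    for _ in range(3):                            # ask the same question several times
        answer = ask(question).lower()
        # Assert on substance, not exact wording, because responses vary run to run.
        assert all(fact in answer for fact in REQUIRED_FACTS)
        assert not any(bad in answer for bad in FORBIDDEN)
        # Every answer should point back to at least one source document.
        assert re.search(r"\[\d+\]|https?://", answer)
```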

The AI tool required testing at multiple levels: 

  • User Access Management – ensuring only the right people could access the tool. 

  • AI Chat Interface – validating that growers could ask questions naturally and still receive meaningful results. 

  • Document Management – confirming the AI was referencing and summarizing the correct material. Citations linked each answer back to the underlying documentation (see the sketch after this list).

  • Model Lifecycle Management – making sure the AI stayed accurate and up to date. 

  • Usage Reporting & Analytics – tracking how the tool was being used to guide improvements. 
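To give a flavour of the document-management checks, here is a hedged sketch of a citation test: every citation an answer returns should resolve to a document that actually exists in the knowledge base. The helper functions are hypothetical placeholders, not the tool’s real API.

```python
# Illustrative check for the document-management layer: every citation an answer
# returns should resolve to a document that exists in the published knowledge base.
# Both helpers below are hypothetical placeholders.
def get_answer_with_citations(question: str) -> tuple[str, list[str]]:
    """Placeholder: returns (answer_text, list_of_cited_document_ids)."""
    raise NotImplementedError

def knowledge_base_ids() -> set[str]:
    """Placeholder: returns the set of document IDs currently published."""
    raise NotImplementedError

def test_citations_resolve_to_real_documents():
    answer, citations = get_answer_with_citations("Which standard covers residue testing?")
    assert citations, "an answer about compliance should cite at least one source"
    missing = [c for c in citations if c not in knowledge_base_ids()]
    assert not missing, f"citations point at unknown documents: {missing}"
```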

 

Lessons Learned 

Testing the tool provided some important insights: 

  • AI can sometimes produce confident but incorrect information, which makes rigorous validation essential (this is why we include a disclaimer). 

  • Data quality is critical, as the AI tool can only be as reliable as the information it draws from. 

  • Growers phrase their questions in many ways, so testing had to reflect a wide range of real-world scenarios (a sketch of this appears after the list). 

  • Context matters. Without it, the AI can misinterpret questions or provide overly generic answers. Let’s face it, we don’t want to ask a question about grapes and have the AI respond with answers about haemorrhoids! (Just checking if I still have your attention 😉) 
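As an example of the phrasing-variation point above, a test can ask the same underlying question several ways and check that each variant surfaces the same key information. The wording and expected terms below are illustrative only, and `ask` is the same hypothetical client as in the earlier sketch.

```python
# Sketch of phrasing-variation coverage: the same intent asked several ways
# should surface the same key information. Questions and expected terms are
# illustrative examples, not the project's real test data.
PHRASINGS = [
    "When can I harvest after spraying?",
    "How long after spraying is it safe to pick the fruit?",
    "spray withholding period before picking?",   # terse, keyword-style query
]
EXPECTED_TERMS = ["withholding", "period"]

def ask(question: str) -> str:
    """Placeholder for the real chat client (same hypothetical helper as above)."""
    raise NotImplementedError

def test_paraphrases_surface_the_same_facts():
    for question in PHRASINGS:
        answer = ask(question).lower()
        assert all(term in answer for term in EXPECTED_TERMS), (
            f"phrasing {question!r} did not surface the expected information"
        )
```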

 

Ultimately, testing AI is not just about checking functionality; it’s about building trust. That trust rests on confidence that the AI is learning about the environment you work in and providing contextual, accurate information.

 

The Human Touch 

A highlight of this project was seeing growers test the tool during User Acceptance Testing. Their feedback confirmed that the AI tool wasn’t just functional; it genuinely improved their ability to access important information. That was a rewarding moment, because it showed the practical impact of the testing we carried out.

The tool is now in active use by thousands of growers and agricultural staff each month, handling nearly 200,000 sessions and providing quick, accurate answers from a knowledge base of more than 60,000 documents. Surveys show that over 90% of users find the tool helpful, with most reporting they can resolve compliance or technical questions in under a minute—saving them valuable time in the field. 

 

TTC Global’s role was to ensure both the back end and front end of the tool were robust, reliable, and user-friendly. By working closely with data scientists, we helped shape an AI tool that the grower community could use with confidence. 

 

Looking Ahead 

While focused on agriculture, this successful project covered many of the essential elements of testing AI systems that apply to any industry: accounting for non-deterministic behaviour, covering edge cases, and building user trust.

For the application’s end users, the outcome is simple: less time searching through documents and more time focusing on what they do best, producing world-class, export-quality fruit.

For TTC Global, it’s another example of how our expertise ensures that even emerging technologies are reliable, usable, and deliver real value.