A Developer's Guide to Starting in Test Automation

I'm a developer getting into automated testing: where do I start?

Nate

This post offers five tips for developers getting started with testing in a DevOps and Agile world.

1) Learn the basics of QA / QC / QE

The ISTQB Foundation Level certification is a great starting point. I haven't met a single person who does QA exactly the way ISTQB describes, but wherever you choose to deviate, you should be able to explain why. In the first instance, it's important to get to grips with industry terms and wrap your head around how it all fits together.

I also found the TPI (Test Process Improvement) model and a TPI audit extremely useful. The model describes an idealised world that may be very different from your real one, but letting it address you, letting it challenge you, and taking it seriously is a worthwhile exercise.

Read as much as you can. There are some books that most QA gurus have read and recommend; get conversant with them:

  • Michael Feathers – Working Effectively with Legacy Code is one I go back to over and over again.
  • Lisa Crispin and Janet Gregory – Agile Testing: A Practical Guide for Testers and Agile Teams
  • Katrina Clokie – A Practical Guide to Testing in DevOps
  • Cem Kaner, James Bach, and Bret Pettichord – Lessons Learned in Software Testing: A Context-Driven Approach

2) Develop a common vocabulary for talking about testing

What you want to be able to say is "we should test this to level C", where level C is a specific, concrete set of practices. If five different people looked at the tests around a feature and were asked what level of testing had been done, they should all score it the same way. If things are not clear, there is a high probability of miscommunication. "Did we test it?" is not a yes/no question, and imprecise language can lead to a lot of negative experiences.
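
To make this concrete, here is a rough sketch in Python of what scoreable levels could look like. The level names and required practices are hypothetical examples, not a standard; the point is that each level is an explicit checklist rather than a feeling.

    # Each level is an explicit checklist; higher levels include the lower ones.
    # The level names and required practices are hypothetical examples.
    TEST_LEVELS = {
        "A": {"unit tests for the happy path"},
        "B": {"unit tests for the happy path",
              "unit tests for error cases",
              "integration test for the main workflow"},
        "C": {"unit tests for the happy path",
              "unit tests for error cases",
              "integration test for the main workflow",
              "end-to-end test in a staging environment"},
    }

    def level_achieved(practices_done):
        """Return the highest level whose whole checklist is satisfied."""
        achieved = None
        for level in ("A", "B", "C"):
            if TEST_LEVELS[level] <= practices_done:  # subset check
                achieved = level
        return achieved

    done = {"unit tests for the happy path",
            "unit tests for error cases",
            "integration test for the main workflow"}
    print(level_achieved(done))  # prints "B"

Because the checklist is explicit, any reviewer who looks at the same tests will score them the same way.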

3) Perform a risk analysis before writing test plans

QA/QC practices are about managing risk, so a risk assessment should guide your actions. The simplest risk analysis I use is to list all the interactions a user or another program could have with your feature. Then give each interaction a score from 1 to 5 for how frequently it will happen, and a second 'damage' score from 1 to 5 for how catastrophic it would be if it didn't work. Your risk score is the frequency score multiplied by the damage score. This gives you a quick, rough breakdown of the most important pathways to test. A less formal method of risk analysis is to ask your team: "Imagine this release goes live and something happens that forces us to roll it back. What would it be?" or "You know that knot in your stomach when it goes live? What are you afraid of?"
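
As a minimal sketch of that scoring in Python (the interactions and their 1-5 scores are hypothetical examples):

    # Risk = frequency x damage, both scored 1-5 as described above.
    interactions = [
        # (interaction, frequency, damage)
        ("log in",               5, 5),
        ("search the catalogue", 5, 3),
        ("export report to PDF", 2, 2),
        ("change account email", 1, 4),
    ]

    scored = sorted(
        ((name, freq * damage) for name, freq, damage in interactions),
        key=lambda pair: pair[1],
        reverse=True,
    )

    for name, risk in scored:
        print(f"{risk:>2}  {name}")
    # Highest scores come first: these are the pathways to test most thoroughly.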

4) Ask your customers

No, seriously: they are usually happy to tell you what they think the most important parts of your product are, or where they wish you had tested more. Good anonymous user data, instrumentation, and analytics are a passive way to ask your users in a systematic fashion. When reviewing these reports, pay special attention to the parts that surprise you. The answers that surprise you are often the areas you wouldn't have thought to test thoroughly but should have. This instrumentation should feed back into your risk analysis and may change priorities for the next sprint.
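
As a sketch of that feedback loop in Python (the events and counts are hypothetical; in practice they would come from your analytics tool), raw usage counts can be mapped back onto the 1-5 frequency scale from tip 3:

    # Map raw analytics counts onto the 1-5 frequency scale from tip 3.
    # The events and counts are hypothetical examples.
    event_counts = {
        "log in": 120_000,
        "search the catalogue": 45_000,
        "export report to PDF": 300,
        "change account email": 90,
    }

    def frequency_score(count, max_count):
        """Bucket a raw count into the 1-5 frequency scale.

        Proportional bucketing; a log scale may suit heavily
        skewed real-world data better.
        """
        return max(1, round(5 * count / max_count))

    peak = max(event_counts.values())
    for event, count in event_counts.items():
        print(event, "->", frequency_score(count, peak))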

5) After a release, review the defects you missed

The questions you need to ask about defects following a release are:

  • Where did they come from?
  • What patterns have emerged?
  • What could you do differently next time to help address this?

The goal here isn't a blame session, but to get serious about iterative improvement. Defect clustering analysis is a really helpful practice: defects tend to concentrate in a small number of components, and those clusters tell you where to focus attention in the next cycle.
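
As a minimal sketch of defect clustering in Python (the defect records are hypothetical; in practice they would come from your issue tracker):

    # Count which components the last release's defects landed in.
    from collections import Counter

    defects = [
        {"id": 101, "component": "checkout"},
        {"id": 102, "component": "checkout"},
        {"id": 103, "component": "search"},
        {"id": 104, "component": "checkout"},
        {"id": 105, "component": "auth"},
    ]

    clusters = Counter(d["component"] for d in defects)
    for component, count in clusters.most_common():
        print(f"{count}  {component}")
    # Components that attracted defects once tend to attract them again,
    # so weight them more heavily in the next risk analysis.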

If you need help making sure you are testing the right things, contact a specialist testing consulting firm.