Some time ago, I wandered the iBeta halls in search of Test Engineers, specifically those who work in our Automation, Performance, and Security verticals. I wanted them to write a bit about how they do what they do for the blog.
Unfortunately, this was not as easy as I’d hoped. For starters, the typical Engineer (with a capital ‘E’) doesn’t even speak the same language as everyone else! They also didn’t seem to care for the trivialities of blogs or social media sites like Twitter and Facebook, so plying them with the promise of making them Internet Famous didn’t really work… Fortunately, there is always pizza – the universal food group – and I was eventually able to nab a piece of content from our very own Joshua Kitchen. Here’s what he had to say about iBeta’s automation engagement flow.
The Five Primary Phases of Automation Engagement Flow
There are five primary phases of an automation engagement flow: infrastructure, scripting, batching, integration, and maintenance. Though there’s overlap from phase to phase and any particular phase doesn’t necessarily need to be “complete” to move to the next one, this is generally how automation progresses.
Infrastructure
Infrastructure is all about laying the groundwork to enable a successful automation effort. Handing automation tools to individual testers yields minor gains, but making fundamental changes to development culture requires laying out infrastructure.
Infrastructure can be placed in the following categories:
- Tool assets: Tool assets are generally the first place organizations start in automation. We use tools that meet the general anticipated needs, then hand them out to the test staff. The other areas are a little more involved because they require some additional, non-obvious investments of money and time before returns are realized.
- Personnel assets: Personnel assets are assigned to do the heavy lifting in automation projects; it is desirable to have everyone in the test group familiar with and accustomed to using the tool(s), but a dedicated position is strongly recommended. This gives the group the capability to handle more complicated scripting tasks and helps drive a central focus.
- Physical/software assets: Physical and software assets are all about sandboxes for test tool development and final “production” test tool environments. Generally speaking, servers running VMs or Remote Desktops are a better investment than workstations for prolonged user interface automation; UI automation strategies usually end up running one session per instance, so multiple instances speed up execution time, as sketched below.
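To make that last point concrete, here is a minimal sketch of fanning suite runs out to several execution hosts so each host keeps to its single UI session. The host names, the ssh transport, and the `run-ui-suite` command are all stand-ins for whatever your environment actually uses.

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

# Hypothetical VM / Remote Desktop hosts; substitute whatever your lab actually runs.
EXECUTION_HOSTS = ["vm-ui-01", "vm-ui-02", "vm-ui-03"]

def run_suite_on_host(host: str) -> int:
    """Kick off the UI suite on one host and return its exit code."""
    # Placeholder transport and command; in practice this might be ssh, a remote
    # agent, or the test tool's own distribution mechanism.
    result = subprocess.run(["ssh", host, "run-ui-suite"], capture_output=True)
    return result.returncode

if __name__ == "__main__":
    # One worker per host: each host still runs a single UI session at a time.
    with ThreadPoolExecutor(max_workers=len(EXECUTION_HOSTS)) as pool:
        exit_codes = list(pool.map(run_suite_on_host, EXECUTION_HOSTS))
    print(dict(zip(EXECUTION_HOSTS, exit_codes)))
```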
Scripting
It is relatively easy to create disposable single-use scripts that mirror the steps of a manual test case. However, a little central guidance on deciding where and how to automate saves a lot of time in the long run by eliminating poor automation candidates and promoting coding standards.
Formal scripting begins when the infrastructure is partially defined, up, and running.
Scripting is broken down into the following, most of which have little to do with the actual act of creating code:
- Use Case Selection
- Steps/Path
- Path Variations
- Data Variations
- Recording/Coding
- Tool/Tool Language
- Version management
- Error Handling
- Reporting
We start by identifying manual tests as “candidates” for automation. A good automation candidate is one that is either seldom changed (e.g. core regression) or sees a great deal of short-term use between edits (e.g. data-driven). Identify the basic path, profile the necessary variations and data requirements, then move to coding. Code to a standard or template, keep the scripts under version control, add error handling to support robust unattended execution, and add detailed reporting as needed to eliminate false positives and improve error localization.
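As a rough illustration of that template idea (not a prescription for any particular tool), here is the general shape in Python: coded steps, error handling so an unattended run fails gracefully, and logging detailed enough to localize the failure. The `open_login_page` helper is a hypothetical placeholder for whatever actually drives your application.

```python
import logging
import sys

log = logging.getLogger("smoke.login")
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def open_login_page():
    """Hypothetical placeholder; a real script would drive the UI tool here."""
    raise NotImplementedError("wire this to your automation tool")

def test_login_basic_path(user="qa_user"):
    try:
        log.info("step 1: open login page")
        page = open_login_page()
        log.info("step 2: sign in as %s", user)
        page.sign_in(user)
        log.info("PASS: login basic path")
        return True
    except AssertionError as exc:
        log.error("FAIL at verification: %s", exc)   # localizes the failed check
    except Exception:
        log.exception("FAIL during execution")       # full trace for triage
    return False

if __name__ == "__main__":
    sys.exit(0 if test_login_basic_path() else 1)
```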
Batching
A batch consolidates a collection of scripts, interfaces with their error handling, and outputs concise reports. Batches are initially executed in single instances (nightly test execution, for example). Eventually, they run multi-instance to complete comprehensive automated suites quickly. Batching breaks down into the following:
- Base
- Structure/Multi-Batching
- Error Handling
- Reporting
Starting with single batches, we refine batch execution to distribute it over multiple hosts, add batch-level error handling for unattended execution, and roll the appropriate level of reporting up into an easily readable report file or dashboard.
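A minimal sketch of such a batch runner follows, assuming each script is a standalone executable that exits 0 on pass (as in the template above); the script names, timeout, and report format are illustrative only.

```python
import subprocess
import sys
from datetime import datetime
from pathlib import Path

# Illustrative script names; a real batch would pull these from the suite definition.
SCRIPTS = ["smoke_login.py", "smoke_checkout.py", "regression_orders.py"]

def run_batch(report_path: str = "batch_report.txt") -> bool:
    results = []
    for script in SCRIPTS:
        try:
            proc = subprocess.run([sys.executable, script],
                                  capture_output=True, text=True, timeout=600)
            status = "PASS" if proc.returncode == 0 else "FAIL"
        except subprocess.TimeoutExpired:
            status = "TIMEOUT"            # batch-level error handling: keep going
        except OSError as exc:
            status = f"ERROR ({exc})"
        results.append((script, status))

    # Concise, easily readable report for the whole batch.
    lines = [f"Batch run {datetime.now():%Y-%m-%d %H:%M}"]
    lines += [f"{status:8} {script}" for script, status in results]
    Path(report_path).write_text("\n".join(lines) + "\n")
    return all(status == "PASS" for _, status in results)

if __name__ == "__main__":
    run_batch()
```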
Integration
The ultimate goal of integration is compatibility with the build tools: once integrated, the test suites execute automatically when new code is deployed, providing real-time feedback on the overall health of a build.
- First, integrate execution hosts with a build tool
- Next, integrate the batch with the build tool for activated execution
- Then integrate the batch with the build tool for automatic execution
- Finally, refine batch/build tool automatic execution
Fundamentally: add execution hosts to the build tool, add the batched test scripts as an independent build, then integrate the batched scripts and execution hosts with the proper build(s). If applicable, parse the scripts for related build content to refine the integration and reduce the number of redundant tests.
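In practice, the hook a build tool needs can be as small as a wrapper that runs the batch and reports health through its exit code. A sketch, assuming a hypothetical `batch_runner` module holding the runner from the previous section:

```python
import sys
from batch_runner import run_batch   # hypothetical module from the sketch above

if __name__ == "__main__":
    passed = run_batch(report_path="batch_report.txt")
    # The build tool reads the exit code: 0 = healthy build, nonzero = failed suite.
    sys.exit(0 if passed else 1)
```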
Maintenance
Everything requires maintenance; not accounting for maintenance continues to be one of the major pitfalls of test automation. Scripts often require editing due to application changes, software expansion, and the discontinuation of unnecessary elements. Plan accordingly when developing an automation engagement flow.
When conducting maintenance, you’re refactoring all the previous phases: adding refinements, editing or obsoleting existing test scripts, and adding new ones to accommodate changes to the application.
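One common tactic for keeping that maintenance burden down (a general practice, not something specific to the flow above) is to centralize the pieces most likely to change, UI locators for example, in one shared module, so an application change means one edit rather than a pass through every script. A minimal sketch, with purely illustrative names:

```python
# locators.py (hypothetical): the only file that changes when the login page changes.
LOGIN = {
    "username": "id=username",
    "password": "id=password",
    "submit":   "css=button[type='submit']",
}

# Individual scripts import the shared map instead of hard-coding locators, e.g.:
#   from locators import LOGIN
#   page.type(LOGIN["username"], user)
```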