4. REDUCING SCOPE – Test Cycle Strategies

In the service plan costing model, we set the number of test cases at 20,000. The first post in this series was an introduction to ways of reducing that number. This, the 4th in the series, follows SDLC Strategies and Test Iteration Strategies, and is part of ST3PP's Best Practice Series aimed at developing better Tools, People and Process, looking at ways to improve QA efficiency in Canada in order to remain competitive at our higher wages.

Test Cycle

The easiest way to reduce scope is to drop something out of testing: decrease the percentage coverage, don't write test plans, test only simple scenarios or don't create reports. Rather than reducing coverage, let's look at how we can reduce the effort of running tests while maintaining quality.

1. Reuse

Reuse comes from the idea that if you create a test plan, execute it for a given environment, analyse the results and create a report, there should be no benefit in repeating the exact same test. On the other hand, if there is a code or environment change, the affected test (and only that test) should not need to be created again, only executed again, with the results analysed and reported.

In our service plan costing model, we used 500 services, 10 functions per service and 4 tests per function, making up the 20,000 test cases. Testing, however, should begin long before all the services are developed. Let's say 20% of the services are developed: the first test iteration would test only 100 services, not 500, so 100 x 10 x 4, or 4,000 tests. If you have great developers, perhaps only 10% of those test cases show issues that development needs to address. Is there a benefit to running the other 90% of successful test cases again in the next release? Probably not, IF you are certain nothing has changed. When the next code drop happens, you can test just the 100 new services and the 10 changed ones. For simplicity, I never built this into the service plan costing model, in part because the impact depends on whether you are working in short AGILE sprints or longer waterfall-based code drops. For accuracy though, these are good measurements and KPIs to track and build into your own model.
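
To make the arithmetic concrete, here is a minimal sketch (in Python; the 20%-developed and 10%-failure figures are the same assumptions as above) of how reuse shrinks each iteration:

```python
FUNCTIONS_PER_SERVICE = 10
TESTS_PER_FUNCTION = 4

def tests_for(services):
    """Test cases needed to fully cover a given number of services."""
    return services * FUNCTIONS_PER_SERVICE * TESTS_PER_FUNCTION

print(tests_for(500))       # full model: 20,000 test cases
print(tests_for(100))       # iteration 1, 20% of services: 4,000 tests

# Iteration 2 with reuse: 100 newly developed services plus the
# ~10 services (10%) that failed and were fixed -- not all 200.
print(tests_for(100 + 10))  # 4,400 tests instead of 8,000
```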

An organization's structure and process are not always rigid enough to know for certain that a developer did not touch something, and the risk then is missing an undocumented change. Enter automation.

2. Automation

If there is one thing I hear constantly about automation, it is that it can take longer to automate and maintain scripts than it does to test manually. Automation is not the cure for everything, and for it to be beneficial it requires changes to your process and approach. Creating the test plan, data sources and success criteria usually takes longer with automation than with manual testing, but running the test and generating a report should be far quicker and "automated". This means that for a single test iteration or cycle, automation can take longer than manual testing. On the other hand, once created, a good automation tool will allow the test to be executed on different iterations and environments with little or no change, dramatically decreasing the effort. Not all tools are perfect, and having to redevelop test cases for the smallest change in code, or write new test scripts for performance or load, can quickly negate the benefits. This is not a fault of automation, but of the tools or process.
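
As a rough way to reason about that trade-off, here is a minimal break-even sketch (Python; the effort figures are illustrative assumptions, not measurements):

```python
# Illustrative per-test-case effort in hours -- assumptions only.
MANUAL_CREATE, MANUAL_RUN = 1.0, 0.5   # cheap to write, costly to re-run
AUTO_CREATE, AUTO_RUN = 2.0, 0.05      # costly to script, cheap to re-run

def cumulative_effort(create, run, cycles):
    """Total effort for one test case executed over a number of cycles."""
    return create + run * cycles

for cycles in (1, 2, 4, 8):
    manual = cumulative_effort(MANUAL_CREATE, MANUAL_RUN, cycles)
    auto = cumulative_effort(AUTO_CREATE, AUTO_RUN, cycles)
    print(f"{cycles} cycle(s): manual {manual:.2f}h, automated {auto:.2f}h")

# With these assumptions, automation costs more for the first two
# cycles and wins from roughly the third cycle onward.
```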

Time taken to create test cases aside, there are other issues automation addresses:

  • Regression testing (have you tried it without automation?): how exactly can you be sure you are not missing an undocumented change?
  • How do you plan on continuously testing a service without automation?
  • Need to run through a thousand variables? How long will it take to execute those tests manually?
  • How long does it take to execute an automated test, once developed, versus a manual one?
  • What does executing (not creating) an automated test cost in man-hours?
  • What about testing after launch in the SDLC? Will you do that manually?
  • Do automation QA testers really cost more?

To be fair to automation, one needs to compare its impact on the entire SDLC, utilizing automation within a suitably redesigned process. If the effort is then not worth the reward, it's a lesson learnt; not every project will be suited to automation.

Conclusion

A final note on documentation: test case development and reporting documentation are often the biggest time consumers in testing, and time equals cost. Documentation needs to balance detail with effort. Ironically, I spent a chunk of time this morning reading through a 37-page test plan, yet on looking to implement the first test case, I found a required parameter not mentioned anywhere in those 37 pages. I then looked at a different test plan, and it was not there either, yet I found it in seconds in that tool's automation test script.

Fundamentally, we are looking for ways to save time by focussing on the areas of highest impact. It's not about slavishly following a set process, but about defining the process to be more effective, and hence increasing our value and software quality.

Reducing Scope – the Amount of Software Testing

In our example used at TASSQ, and in our More Detailed Look at Service Plan Costing, we used a fixed number of test cases (20,000). At the TASSQ event, many immediately wanted to discuss ways to reduce that number. This is just as important an aspect as streamlining the test process once the scope is set, but due to time constraints I decided to leave it until now. So what kinds of things can be done to reduce the sheer scope of testing needed?

This is too long for a single post, so over the next few weeks I will build out each area. At a high level, there are 4 key areas to focus on:

  1. SDLC Strategies
  2. Test Iteration Strategies
  3. Test Cycle Strategies
  4. Maintenance Strategies

1. SDLC Strategies

What is your corporate mandate? Is this a free internet service that is best effort, or do errors carry potentially huge financial risks? How long is this software expected to be in use (until the next release?), and how mission-critical is it to your business? The way we approach testing should reflect our business needs.

The second aspect is more architectural. I have already published a post on API Versioning Strategies. How the service and the client are designed and managed in the SDLC can greatly impact the amount of testing needed.

2. Test Iteration Strategies

This looks at ways to reduce the number of tests, or the effort required, in each test iteration. Can you share the same test case between Functional and Performance Testing? How can you ensure that development actually fixed the issues from the last release? Do you really need to retest code that was not changed?

Strategies here vary a great deal depending on whether you are using AGILE, Waterfall or some other development methodology.

3. Test Cycle Strategies

This area looks at ways to reduce the number and complexity of individual tests. A lot of this has to do with the desired percentage coverage, but automation, data sources and regression are all aspects.

4. Maintenance Strategies

Far too often we focus on getting the software into production, yet it is generally accepted that testing during the maintenance cycle can account for well over half of testing costs. This is about automation, regression and continual testing strategies that can reduce maintenance testing costs or coverage.

I can't poll the audience here, but as usual, we share and learn. So if you have thoughts or suggestions on the subject of reducing the amount of testing required, please let me know.

7. SOAPSonar – Baseline and Regression Testing

We have all had the experience where "someone" decided to make a slight "tweak" to some code and then promptly forgot to mention it to anyone else, or at least to the right someone else. That slight tweak causes some change, expected or not, that leaves other parties spending hours trying to trace the cause.

One of the key benefits of automation is the ability to identify any changes to XML by doing an XML diff, comparing one version (the Baseline) to another (the Regression Test). With a web services API, we are interested in the Requests and Responses, to ensure that they are not different, or rather that only expected differences are there. We need the flexibility to ignore some of the parameters when they are expected to be different each time. Take, for instance, a file reference number, or a service that returns the time: we may want to check that the fields for Hour, Minute, Day, Month, Year etc. remain unchanged, but accept changes to the values in those fields, or limit these to certain parameters. Establishing what to check against the baseline, and what not to, is an important part of regression testing.
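
Conceptually, the comparison works something like this minimal sketch (standard-library Python; the element names and the exclusion mechanism are illustrative assumptions, not SOAPSonar's internals):

```python
import xml.etree.ElementTree as ET

def xml_diff(baseline, candidate, exclude=(), path=""):
    """Compare two XML trees, skipping any paths expected to change."""
    p = f"{path}/{baseline.tag}"
    if p in exclude:
        return []                      # expected to differ: ignore it
    if candidate is None or baseline.tag != candidate.tag:
        return [f"node mismatch at {p}"]
    diffs = []
    if (baseline.text or "").strip() != (candidate.text or "").strip():
        diffs.append(f"value changed at {p}")
    for b, c in zip(baseline, candidate):   # sketch: assumes same child order
        diffs += xml_diff(b, c, exclude, p)
    return diffs

base = ET.fromstring("<AddResponse><AddResult>6</AddResult></AddResponse>")
new = ET.fromstring("<AddResponse><AddResult>12</AddResult></AddResponse>")

print(xml_diff(base, new))   # ['value changed at /AddResponse/AddResult']
print(xml_diff(base, new, exclude=("/AddResponse/AddResult",)))   # []
```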

Here is a very simple baseline and regression test, using the SOAP Calculator service.

1. Run SOAPSonar (with Admin Rights). Paste

http://www.html2xml.nl/Services/Calculator/Version1/Calculator.asmx?wsdl

into the Capture WSDL bar. Select Capture WSDL.

2. Let's use the Add service. Select Add_1 and enter a=3 and b=3. Commit, then Send. Hopefully your response was 6 (if not, perhaps I could suggest some other services?). Rename it Baseline.

[Screenshot: Project View]
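
If you want to sanity-check the same call outside the tool, a few lines will do it (a sketch using the third-party zeep library; it assumes the public WSDL above is still reachable and that the operation and parameter names match the WSDL):

```python
from zeep import Client  # pip install zeep

WSDL = "http://www.html2xml.nl/Services/Calculator/Version1/Calculator.asmx?wsdl"

client = Client(WSDL)
result = client.service.Add(a=3, b=3)  # operation/parameter names assumed
print(result)                          # expected: 6
```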

3. Now let's select Run View and drag Baseline into the DefaultGroup, then select the Generate New Regression Baseline Response Set icon.

[Screenshot: Generate]

4. Select XML Diff of Entire Baseline Response Document. This option matches both nodes and values. Select OK (and Commit and Send if you need to).

[Screenshot: XML Diff]

5. After the test is run, you will see the Test Suite Regression Baseline Editor. This is where you can select what you wish to watch or ignore. A base rule is generated automatically: if you select Index 1, you should have 1 rule, XPath Match. If you select XPath Match, you should see all the nodes graphically laid out for you. At the bottom you have 3 tabs: Baseline Criteria, Baseline Request (Captured Request) and Baseline Response (Captured Response). For now, let's not change anything and just select OK.
[Screenshot: Baseline Editor]

6. Let's go back to Project View and change our b value to 9. The response should now be 12; Commit and Send to check. Then select Run View and change the Success Criteria to Regression Baseline Rules (see cursor). Commit and Send. This time, did your Success Criteria Evaluation fail? It should have, as it was expecting 6 as a response and not 12. Analyse the results in Report View.

[Screenshot: Run Baseline]

7. If you now select the failed test case and then the Success Criteria Evaluation tab, you will see that Regression Baseline XML Node and Value Match failed, and that it was the AddResult value.

[Screenshot: Failed]

8. Select Generate Report, then [HTML] Baseline Regression XML Diff Report, and generate the report. Then view the report and select Response Diff for Index 1: 1 change is found, and you can clearly see it marked in red.

[Screenshot: Report]

9. Now let's ignore the response value, but maintain regression for the rest of the test case. Select Run View, then Edit Current Baseline Settings.

[Screenshot: Ignore]

10. You should be back in the Test Suite Regression Baseline Editor. Select Index 1, then your rule, and right-click on AddResult in the visual tree. Select Exclude Fragment Array. It should now show in red as excluded. Select OK, then Commit and Send. Your regression test should now pass, as everything but that value is still the same.

[Screenshot: Change]

Conclusion

Automation of regression testing is far more than running an XML diff. It involves selecting which aspects are expected to change and which are not. By eliminating expected changes, any failures in future regression tests can receive the focus they deserve. Once automated, this can be run hourly, daily, weekly or as needed, consuming little to no human interaction. Many of our customers maintain a baseline and a consistent regression test on 3rd-party code: any service their systems rely on whose development cycle they have no visibility into is tested continually through an automated process, to ensure they are aware of any changes to the code.

Questions, Thoughts?

Continuous Testing in Agile

Along with performance testing, there were 2 other themes that continually came up in conversations during STAR Canada.

  1. How should QA integrate in an Agile environment
  2. The need for “Continuous Testing”.

While there are thousands of articles about Continuous Testing, and hundreds of thousands on Agile, there seems to be little on both together, perhaps due to some apparent conflicts.

Let's look at theoretical QA in an agile environment. Say your organization's Sprints are 2 weeks in length, with each scrum having 8-10 members for manageability. Due to project time constraints, there are 5 scrums working concurrently, each focussed on a different component of your application. Which test cycles are done as part of the sprint, and which are done outside it, or by cross-functional teams?

Agile Testing Levels

It was pointed out that, although common, doing only unit and integration testing on your Sprint's code and then jumping to acceptance testing of that sprint is not Agile. Agile should in fact have all test stages built into the Sprint. Many companies skip test cycles like load, integration and security testing of the end-to-end system, as there simply is not time in each Sprint.

An alternate approach is to create independent teams outside of the Agile development, whose role is to test integration, load, security and systems in integrated environments. Defects identified are then fed back into the scrum meetings and allocated to a particular Sprint. This is also not really Agile, falling into some kind of hybrid, and the challenge is that issues are often identified after sprints are finished, so it is not really continuous testing either.

A second approach is to create cross-functional roles, where the scrum masters and one or more members of each sprint are allocated to systems-level testing and possibly fixes. These cross-functional teams would, near the end of each sprint, break out of their old scrum into the new role. The challenge with this approach is that on shorter sprints, and with large systems, they can end up spending more time in the cross-functional role than in their own scrum.

Continuous Testing

Continuous Testing is somewhat the same as baseline and regression testing, but need not test only against a baseline. It's about continually testing while developing, through the entire SDLC. The benefit is that issues can be identified far earlier (the Shift Left approach), resulting in lower costs to address them. At first glance, Agile environments seem to favour continuous testing, but does that include regression, integration and systems testing across Sprints? If each test case takes 9 minutes to complete, 1 tester can run only 53 test cases in an 8-hour day, or 533 tests in a 2-week Sprint. That is simply not enough coverage to test all systems and run other tests continuously; the result is partial or low test coverage.
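
The arithmetic behind those numbers (a quick sketch; the 8-hour day and 10 working days per 2-week Sprint are assumptions):

```python
MINUTES_PER_TEST = 9
MINUTES_PER_DAY = 8 * 60   # one tester, one 8-hour working day
DAYS_PER_SPRINT = 10       # working days in a 2-week Sprint

tests_per_day = MINUTES_PER_DAY / MINUTES_PER_TEST
print(round(tests_per_day))                    # ~53 tests per day
print(round(tests_per_day * DAYS_PER_SPRINT))  # ~533 tests per Sprint
```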

Enter Automation

If, as part of each Sprint, a fully developed set of test cases is created by each scrum in the same application (e.g. SOAPSonar), covering their development efforts, the incremental work to roll these up into test cases for integration, load etc. would be minimal. Each sprint then shares a set of integration, performance, load, regression and other tests that they simply run as part of their sprint. Being automated, these can even run after hours. The result is continuous testing at both the systems level and the Sprint level, without the heavy resource requirements of manual testing. Issues, be they system-wide or sprint-level, can then be addressed in Sprint.
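
As a rough illustration of that roll-up (plain Python unittest; the suite names and test bodies are hypothetical stand-ins for each scrum's exported test assets):

```python
import unittest

# Each scrum contributes its own automated cases; these stubs stand in
# for suites exported from whatever test tool the scrums share.
class SprintACases(unittest.TestCase):
    def test_add_service(self):
        self.assertEqual(3 + 3, 6)           # placeholder for a service call

class SprintBCases(unittest.TestCase):
    def test_reference_format(self):
        self.assertTrue("REF-001".startswith("REF-"))

def shared_regression_suite():
    """Roll the per-sprint cases up into one shared suite."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for cases in (SprintACases, SprintBCases):
        suite.addTests(loader.loadTestsFromTestCase(cases))
    return suite

if __name__ == "__main__":
    # Run nightly from a scheduler or CI job, so results are waiting
    # for every scrum the next morning.
    unittest.TextTestRunner(verbosity=2).run(shared_regression_suite())
```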

Conclusion

The concern with this is the same as with any automation project: will the development of the automation scripts not take more time than the resulting benefit? This is a tool selection question: finding the right tool for your team, one that minimizes the time taken to develop and maintain test cases, from functional through load and regression to acceptance testing.

Would you like to weigh in with your thoughts or comments on the subject?