
7. SOAPSonar – Baseline and Regression Testing

We have all had the experience where “someone” decided to make a slight “tweak” to some code and then promptly forgot to mention it to anyone else, or at least to the right someone else. That slight tweak causes some change, expected or not, that leaves other parties spending hours trying to trace the cause.

One of the key benefits of automation is the ability to identify any changes to XML by doing an XML diff, comparing one version (the baseline) to another (the regression test). With a web services API, we are interested in the requests and responses: we want to ensure they have not changed, or rather that only expected differences are present. We need the flexibility to ignore certain parameters that are expected to be different each time. Take, for instance, a file reference number, or a service that returns the time. We may want to check that the fields for hour, minute, day, month, year and so on remain unchanged, while accepting changes to the values in those fields, or limiting the check to certain parameters. Establishing what to check against the baseline, and what not to, is an important part of regression testing.
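
To make the idea concrete outside the tool, here is a minimal Python sketch of what such a selective XML diff amounts to: walk a baseline response and a new response together, report node and value differences, and skip a list of elements that are expected to change. The element and file names are just placeholders for illustration; SOAPSonar handles all of this for you through the baseline rules below.

    import xml.etree.ElementTree as ET

    # Elements expected to differ on every run (placeholder names).
    IGNORE = {"FileReference", "TimeStamp"}

    def xml_diff(baseline, candidate, path=""):
        """Compare two XML trees, skipping excluded elements."""
        diffs = []
        tag = baseline.tag.split("}")[-1]        # drop the namespace for readability
        here = path + "/" + tag
        if tag in IGNORE:
            return diffs                         # excluded fragment: structure and value ignored
        if candidate is None or candidate.tag != baseline.tag:
            return ["%s: node missing or renamed" % here]
        if (baseline.text or "").strip() != (candidate.text or "").strip():
            diffs.append("%s: value %r -> %r" % (here, baseline.text, candidate.text))
        b_kids, c_kids = list(baseline), list(candidate)
        if len(b_kids) != len(c_kids):
            diffs.append("%s: child count %d -> %d" % (here, len(b_kids), len(c_kids)))
        for b, c in zip(b_kids, c_kids):
            diffs.extend(xml_diff(b, c, here))
        return diffs

    # Placeholder file names for a captured baseline and a fresh response.
    baseline = ET.parse("baseline_response.xml").getroot()
    latest = ET.parse("latest_response.xml").getroot()
    for d in xml_diff(baseline, latest):
        print(d)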

Here is a very simple baseline and regression test, using the SOAP Calculate service.

1. Run SOAPSonar (with Admin Rights). Paste

http://www.html2xml.nl/Services/Calculator/Version1/Calculator.asmx?wsdl

into the Capture WSDL bar. Select Capture WSDL.

2. Let's use the Add service. Select Add_1 and enter a=3 and b=3. Commit, then Send. Hopefully your response was 6. If not, perhaps I could suggest some other services? Rename the test case Baseline.

2 Project View
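
(For the curious, Commit and Send in step 2 amounts to posting a small SOAP envelope to the service. Here is a rough Python sketch of that request; the tempuri.org namespace and SOAPAction header are assumptions based on typical .asmx samples, so check them against the captured WSDL if you try it.)

    import requests

    URL = "http://www.html2xml.nl/Services/Calculator/Version1/Calculator.asmx"

    # Envelope for Add(a=3, b=3). The tempuri.org namespace is an assumption
    # taken from typical .asmx sample services -- verify it against the WSDL.
    envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <Add xmlns="http://tempuri.org/">
          <a>3</a>
          <b>3</b>
        </Add>
      </soap:Body>
    </soap:Envelope>"""

    resp = requests.post(
        URL,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": '"http://tempuri.org/Add"'},
        timeout=10,
    )
    print(resp.status_code)
    print(resp.text)  # expect an AddResult of 6 in the body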

3. Now let's select Run View and drag Baseline into the DefaultGroup. Then select the Generate New Regression Baseline Response Set icon.

3 Generate

4. Select XML Diff of Entire Baseline Response Document. This option matches both nodes and values. Select OK (Commit and Send first if you need to).

4 XML diff

5. After the test is run, you will see the Test Suite Regression Baseline Editor. This is where you select what you wish to watch or ignore. A base rule is generated automatically. If you select Index 1, you should have one rule, XPath Match. If you select XPath Match, you should see all the nodes graphically laid out for you. At the bottom there are three tabs: Baseline Criteria, Baseline Request (the captured request) and Baseline Response (the captured response). For now, let's not change anything and just select OK.
5 Baseline editor

6. Let's go back to Project View and change b= to 9. The response should now be 12. Commit and Send to check. Then select Run View and change the Success Criteria to Regression Baseline Rules (see cursor). Commit and Send. This time, did your Success Criteria Evaluation fail? It should, as it was expecting 6 as a response and not 12. Analyse the results in Report View.

6. Run baseline

7. If you now select that failed test case and then select the Success Criteria Evaluation tab, you will see that Regression Baseline XML Node and Value Match failed, and that it was the AddResult value.

7. Failed

8. Select Generate Report, then [HTML] Baseline Regression XML Diff Report, and generate the report. Then view the report and select Response Diff for Index 1. One change is found, and you can clearly see it marked in red.

8 report

9. Now let's ignore the response value but maintain regression for the rest of the test case. Select Run View, then Edit Current Baseline Settings.

10 ignore

10. You should be back in the Test Suite Regression Baseline Editor. Select Index 1, then your rule, and right-click on AddResult in the visual tree. Select Exclude Fragment Array. It should now show in red as excluded. OK, Commit and Send. Your regression test should now pass, as everything but that value is still the same.

9 change

Conclusion

Automation of regression testing is far more than running an XML diff. It involves selecting which aspects are expected to change and which are not. By eliminating expected changes, any failures in future regression tests can receive the focus they deserve. Once automated, the test can be run hourly, daily, weekly or as needed, consuming little to no human interaction. Many of our customers maintain a baseline and a consistent regression test on 3rd party code: any service their systems rely on whose development cycle they have no visibility into. Continuous testing through an automated process ensures they are aware of any changes to that code.

Questions, Thoughts?

Performance and Load Testing

A second theme of interest that came up repeatedly at the STAR Conference last week was performance and load testing. Many of those raising the question had mobile applications or some form of mash-up, or worked in Agile environments where both performance and functionality were important.

In the SOA or API world, when I refer to performance, I am referring to the time taken from a single functional service request to its response: the performance of the service as part of the API or web service itself. In the diagram below, it would be the time from when the request leaves the client to when a response is received, including the additional API and identity requests that happen behind API 1. These I refer to as enablers. API 2 has a DB and its own identity system, and API 3 is on an Enterprise Service Bus with multiple enablers on the bus. Each API may have a number of services associated with it, and each of these may require different enablers or complete different functions, and so will have different performance characteristics. Granular performance information is therefore important for troubleshooting.

Load testing is the performance of a group of services at a given load, modelled using expected behaviour. If function 1 in API 1 is expected to be accessed 5 times as often as function 1 of API 2, then the model needs to load function 1 in API 1 at 5x the rate of function 1 in API 2. Load testing can either be throttled to evaluate response times at a planned TPS, or simply increased until errors start occurring, to understand the maximum TPS possible.
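
To illustrate the distinction, here is a minimal Python sketch using two hypothetical endpoints: each request is timed individually (performance), while the loop weights API 1 five to one over API 2 and throttles toward a planned TPS (the load model).

    import random
    import time
    import requests

    # Hypothetical endpoints standing in for "function 1" of API 1 and API 2.
    ENDPOINTS = [
        ("api1_function1", "http://example.com/api1/function1", 5),  # weight 5
        ("api2_function1", "http://example.com/api2/function1", 1),  # weight 1
    ]
    TARGET_TPS = 10        # planned transactions per second
    DURATION = 60          # seconds

    names, urls, weights = zip(*ENDPOINTS)
    timings = {name: [] for name in names}

    finish = time.time() + DURATION
    while time.time() < finish:
        i = random.choices(range(len(urls)), weights=weights)[0]   # 5:1 load model
        start = time.perf_counter()
        try:
            requests.get(urls[i], timeout=5)
        except requests.RequestException:
            pass                                    # a real run would count errors too
        timings[names[i]].append(time.perf_counter() - start)      # granular, per-service timing
        time.sleep(1.0 / TARGET_TPS)                # crude throttle toward the planned TPS

    for name in names:
        samples = timings[name]
        if samples:
            print("%s: %d calls, avg %.3fs" % (name, len(samples), sum(samples) / len(samples)))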

User experience performance is the perceived performance via a given client: here we add the performance of the client itself to that of the network, API and enablers. It does not, however, capture device or client diversity. Caching, partial screen refreshes and a variety of client tweaks may hide some perceived performance issues. That said, unless the API performance is known, a poorly performing client can be difficult to identify.

Performance

The most common performance issues that tend to come up are problems not with the APIs themselves, but with the enablers: some back-end database, identity system or ESB that has some other process running on it at a given time (e.g. a backup), has a network issue or requires tuning. Often these issues are due to changes in the environment, or occur only at a given time. A single load or performance test run a few days before final acceptance often fails to identify them, or the issues surface in production at some later date.

I previously wrote a long multi-part series about performance troubleshooting for mobile APIs and I have no intention of repeating it. The constant surprise, however, when I show a shared test case being used for both functional and performance testing is why I wanted to add some clarification. Usually I get a blank stare during a demo for a few minutes before a sudden understanding. So many QA testers have been trained to think of different tools and teams for functional and load testing that the concept of an integrated tool can be difficult to grasp at first, requiring some adjustment in thinking.

After the adjustment occurs, I consistently get the same 2 questions:

  1. “Does that mean you can define performance as part of the success criteria?” Yes, each test case for each service in each API can have a minimum or maximum response time configured in its success criteria. Say you set that value to 1 second, along with any other criteria for success. Any time that test is later run, including during load testing, the test case will fail if the response takes longer. There is no need to create new test scripts, data sources, variables and so on for load testing in a separate tool. If it's a new team, just give them the test case to run (see the sketch after this list).
  2. “Does that mean you can do continual or regression testing on a production system and identify any changes in functionality AND performance at the same time?” Yes. If the value is set to a 1-second response in the success criteria and you configure an automated regression or functional test every hour, day, week or whatever, then if performance or functionality changes at any point, the test case will fail because the response differs from what was expected previously. There is no need to run 2 separate applications to continually test a service for changes in functionality and performance.
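
Here is a rough sketch, in plain Python against a hypothetical endpoint, of what folding performance into the success criteria means: one shared test case that passes only if both the value and the response time are right, so the same case can be reused unchanged by a functional run, an hourly regression run or a load run.

    import time
    import requests

    MAX_RESPONSE_SECONDS = 1.0   # performance folded into the success criteria

    def run_test_case(url, expected_fragment):
        """One shared test case: passes only if the value AND the response time are right."""
        start = time.perf_counter()
        resp = requests.get(url, timeout=5)
        elapsed = time.perf_counter() - start
        passed = (resp.status_code == 200
                  and expected_fragment in resp.text
                  and elapsed <= MAX_RESPONSE_SECONDS)
        return passed, elapsed

    # Hypothetical service and expected value; the same call can sit inside an
    # hourly regression loop or a load generator without any rework.
    ok, took = run_test_case("http://example.com/api1/function1", "42")
    print("PASS" if ok else "FAIL", "in %.3fs" % took)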

At this point I usually point out the benefit of physically distributed load agents vs. just virtual users. The ability to trigger a central test from multiple locations in your network and compare response times allows the simulation not only of the server but also of the network. Larger companies often break network performance tuning out into another team and don't consider it an “application issue”. I believe any performance issue is functionally important. Smaller companies, and senior executives, are however quick to see the benefits of consolidating this into a single tool and report.

Conclusion

Regardless of whether your performance/load team is a separate group or part of your role, sharing a test case, and actually building performance into the success criteria in the same tool, can offer huge savings in time and help identify performance issues earlier in the development cycle and during maintenance. Why not try it yourself? Here are two tutorials, on Load Testing and on Geographically Distributed Load Testing.

 

5. SOAPSonar – Defining Success Criteria


Just because a response code is in the 200 range, or not in the 400 range, does not mean that the test met the business requirements a tester is required to test for. (For a list of status codes, look here.) In the diagram this is represented by the Success Criteria or Baseline arrow vs. the Outcome.

SOAPSonar Test Cycle

Perhaps your test case requires a specific code, value, response time or some other form of validation. Responding with a fax number instead of a phone number, or with the wrong person's number, is still a defect. For this reason, SOAPSonar offers a variety of configuration options to define what is indeed a successful test case and what is not. Let's start again with the SOAP example we used in Tutorial 4, and use the same .csv data sources for calculate and maps.

1. Launch SOAPSonar and open the test case from Tutorial 4. If you did not do that tutorial, now is a good time. Check that you have both automation data sources under Configuration, Data Sources by going in and checking the columns. Check also that you have our SOAP calculate service and the JSON Google Maps service.

1 check ADS

2. Let's use the Subtract_1 service. Select it, then in a= right-click and select [ADS] Automation Data Source, Quick Select, Calculate, Input a. Then in b= select [ADS], Input b. Commit.

2. Subtract

3. Let's run this in Run View. Select Run View, delete any existing test cases and drag Subtract_1 under the DefaultGroup. Commit and Run Suite. How many test cases did you run and how many passed? I had all 10 pass.

3 Run suite

4. Now let's go back to Project View and define some additional success criteria. Select Subtract_1; next to the Input Data tab is a Success Criteria tab. Select the Success Criteria tab and Add Criteria Rule.

4. Add success

5. Let's first add a rule for response time. Performance is, after all, a functional requirement and so should be part of functional testing. Let's just say 1 second as the maximum value.

5 Timing

6. Now let's compare the result against column 4 of our .csv. Add Criteria Rule, Document, XPath Match. Then select your new rule and refresh if you need to. Look for the SubtractResult parameter and right-click on it. Select Compare Element Value. Notice the Criteria Rules tab changes.

6. XPath

7. Select the Criteria Rule tab, then set the match function to Exact Match. Then choose Dynamic Criteria, Independent Column Variables, Calculate.csv, Subtract Result column from our [ADS]. OK, Commit.

7 Dynamic

8. Switch to Run View and let's run this test again. Commit, Run Suite. This time 9 passed and 1 failed. If you check the .csv file, line 3's subtract answer column is wrong: the result is 10, yet the expected value is 5. Without defining success criteria, this would have been missed. Performance-wise I had no failures and responses were good.

8 Run
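
Outside the tool, steps 2 through 8 boil down to something like this Python sketch: read each row of the data source, call Subtract, pull SubtractResult out of the response, and compare it to the expected column while timing the call. The column names and the tempuri.org namespace are assumptions; the real .csv and WSDL are authoritative.

    import csv
    import time
    import xml.etree.ElementTree as ET
    import requests

    URL = "http://www.html2xml.nl/Services/Calculator/Version1/Calculator.asmx"
    ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <Subtract xmlns="http://tempuri.org/"><a>{a}</a><b>{b}</b></Subtract>
      </soap:Body>
    </soap:Envelope>"""

    with open("calculate.csv", newline="") as f:          # column names below are assumptions
        for row in csv.DictReader(f):
            start = time.perf_counter()
            resp = requests.post(
                URL,
                data=ENVELOPE.format(a=row["a"], b=row["b"]).encode("utf-8"),
                headers={"Content-Type": "text/xml; charset=utf-8",
                         "SOAPAction": '"http://tempuri.org/Subtract"'},
                timeout=5,
            )
            elapsed = time.perf_counter() - start
            # Find SubtractResult regardless of namespace prefix.
            result = next(el.text for el in ET.fromstring(resp.text).iter()
                          if el.tag.endswith("SubtractResult"))
            ok = result == row["expected"] and elapsed <= 1.0   # exact match plus the 1 s ceiling
            print("%s - %s = %s  %s" % (row["a"], row["b"], result, "PASS" if ok else "FAIL"))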

9. Now let's see if we can do this with JSON. Select the Google Maps test, then in Run View clear the DefaultGroup, drag Google Maps over, and Commit and Run Suite to make sure it is working. All 6 of mine passed.

9 Maps

10. Back in Project View, select the Success Criteria tab and Add Criteria Rule, Timing, for a maximum response of 1 second. Then Add Criteria Rule, Document, XPath Match. Select it and look for the distance value. Right-click on it and select Compare Element Value.

10 Google

11. Select the Criteria Rules tab, set the Match Function to Exact Match and select the Choose Dynamic Criteria icon, Independent Column Variables, your googlemaps.csv, Meters column. OK, Commit.

11 Exact

12. Run the Default Suite. This time 2 of the 6 test cases failed for me. Although the time was close, in both cases it was because my .csv had a different value. We will look into Report View in the next tutorial.

12 Report

Conclusion

Being able to mix performance and various header and message requirements into a multi-rule set that defines success criteria allows automation to reflect business requirements. This helps ensure that you are not just testing status codes, but validating the full response and catching incorrect functionality. Taking the time to define each test case with the right success criteria up front ensures that your baseline, performance and other system tests are more accurate.

The arrow from the enablers to the data sources in the diagram at the top of the page indicates the ability to use direct SQL or other calls to the enablers and compare their values with those found in the response, allowing the success criteria to include validating that the service is selecting the right value from the enabler.
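
As a rough illustration of that kind of rule, the sketch below uses sqlite3 as a stand-in for whatever database sits behind the service; the table, column and endpoint names are hypothetical. The success criterion becomes: the API returned the same value the enabler actually holds.

    import sqlite3
    import requests

    # Hypothetical: the enabler's database, and the service that should expose the same value.
    conn = sqlite3.connect("enabler.db")
    db_value = conn.execute(
        "SELECT phone_number FROM customers WHERE customer_id = ?", (1001,)
    ).fetchone()[0]

    resp = requests.get("http://example.com/api/customers/1001", timeout=5)
    api_value = resp.json().get("phone_number")

    # The test passes only if the service returned the value the enabler actually holds.
    print("PASS" if api_value == db_value else
          "FAIL: API returned %r, database holds %r" % (api_value, db_value))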

Comments?

Continuous Testing in Agile

Along with performance testing, there were 2 other themes that continually came up in conversations during STAR Canada.

  1. How should QA integrate in an Agile environment?
  2. The need for “Continuous Testing”.

While there are thousands of articles about Continuous Testing, and hundreds of thousands on Agile, there seems to be little on both, perhaps due to some apparent conflicts.

Let's look at a theoretical QA organization in an Agile environment. Say your organization's Sprints are 2 weeks in length, with each scrum having 8-10 members for manageability. Due to project time constraints, there are 5 scrums working concurrently, each focused on a different component of your application's development. Which test cycles are done as part of the Sprint, and which are done outside it or by cross-functional teams?

Agile Testing Levels

It was pointed out that although it is common to do only unit testing and integration testing on your Sprint's code and then jump to acceptance testing of that Sprint, this is not Agile. Agile should in fact have all test stages built into the Sprint. Many companies skip test cycles like load, integration and security testing of the end-to-end system because there simply is not time in each Sprint.

An alternative approach is to create independent teams outside of the Agile development. Their role is to test integration, load, security and the overall system in integrated environments. Defects identified are then fed back into the scrum meetings and allocated to a particular Sprint. This also is not really Agile, falling into some kind of hybrid. The challenge here is that issues are often identified after Sprints are finished, so it is not really continuous testing either.

Another approach is to create cross-functional roles, where the scrum masters and one or more members of each Sprint are allocated to doing systems-level testing and possibly fixes. These cross-functional teams break out of their old scrum into the new role near the end of each Sprint. The challenge with this approach is that on shorter Sprints, and with large systems, they can end up spending more time in the cross-functional role than in their own scrum.

Continuous Testing

Continuous Testing is somewhat like baseline and regression testing, but it need not only test against a baseline. It is about continually testing while developing, through the entire SDLC. The benefit is that issues can be identified far earlier (the Shift Left approach), resulting in lower costs to address them. Agile environments at first glance seem to favour continuous testing, but does that include regression, integration and systems testing across Sprints? If each test case takes 9 minutes to complete, 1 tester can only run 53 test cases in a day, or 533 tests in a Sprint. That is simply not enough to cover all systems and other tests continuously. The result is partial or low test coverage.
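
The arithmetic behind that coverage claim, assuming an 8-hour testing day and 10 working days per Sprint:

    minutes_per_test = 9
    per_day = (8 * 60) // minutes_per_test            # 53 test cases in an 8-hour day
    per_sprint = (8 * 60 * 10) // minutes_per_test    # 533 over a 2-week (10 working day) Sprint
    print(per_day, per_sprint)                        # 53 533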

Enter Automation

If, as part of each Sprint, a fully developed set of test cases is created by each team in the same application (e.g. SOAPSonar) covering their development efforts, the incremental work to roll these up into test cases for integration, load and so on would be minimal. Each Sprint then shares a set of integration, performance, load, regression and other tests that they simply run as part of their Sprint. Being automated, these can even run after hours. The result is continuous testing at both the systems level and the Sprint level, without the heavy resource requirements of manual testing. Issues, be they system-wide or Sprint-level, can then be addressed in the Sprint.

Conclusion

The concern with this is the same as with any automation project: “Will the development of the automation scripts not take more time than the resulting benefit?” This is a tool selection question: finding the right tool for your team, one that minimizes the time taken developing and maintaining the various test cases, from functional through load and regression to acceptance testing.

Would you like to weigh in with your thoughts or comments on the subject?

3. SOAPSonar – Run View with Chaining

If you have not already, please do the Starting with SOAP and REST tutorial first.

Automation lets you test business flow scenarios, so let's model a very simple business flow. Take an example: the company uses a 3rd party REST API for calculating the distance between cities, and a second, SOAP API for calculations. The client application makes use of both. The test requirement is that users can calculate the time it would take to ride a bicycle between Toronto and Montreal at an average speed of 20 km/h. To do this, we set up our test cases in Project View and run them in Run View. But to automate this, we need the response from one service to become the input to the next. This is what we at ST3PP refer to as chaining services for automation.
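
Stripped of the tooling, the chain we are about to build looks roughly like this Python sketch: take the distance in meters from the Google Maps directions response, divide by 1000 to get kilometers, then divide by 20 km/h to get hours. (The tutorial hands the two divisions to the SOAP calculator service instead of doing them locally; plain division keeps the sketch short.)

    import requests

    # Step 1: the same REST call the tutorial uses for the cycling route Toronto -> Montreal.
    # (The public endpoint may now require an API key.)
    resp = requests.get(
        "http://maps.googleapis.com/maps/api/directions/json",
        params={"origin": "Toronto", "destination": "Montreal",
                "sensor": "false", "avoid": "highways", "mode": "bicycling"},
        timeout=10,
    )
    leg = resp.json()["routes"][0]["legs"][0]
    meters = leg["distance"]["value"]     # e.g. 621476 in the tutorial's run

    # Step 2: chain the response into the next calculations
    # (the tutorial routes these through the SOAP Divide service instead).
    km = meters / 1000.0
    hours = km / 20.0                     # 20 km/h average speed
    print("%.0f km, about %.1f hours at 20 km/h" % (km, hours))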

SOAPSonar Test Cycle

TUTORIAL

1. Open SOAPSonar, and let's first add the REST distance service. In this case I am going to use Google Maps. File, New, Test Group. Then right-click on Tests in the Project Tree and select New REST Test. It's always good to give names to things, so right-click on New Test_1 and name it Distance.

1 Distance

2. Paste the following into the URI field.

http://maps.googleapis.com/maps/api/directions/json?origin=Toronto&destination=Montreal&sensor=false&avoid=highways&mode=bicycling

Make sure Method is GET and then Commit and Send.

2 Send

3. In the Response section we are interested in the distance. Unless Montreal and Toronto are floating apart, you should get 621 km at 1 day 8 hours. But at what speed? Select the Runtime Variables tab, then scroll down until you get to the distance value field of 621476 (meters). Select this value, right-click and choose Add Variable Reference. Name it Distance. OK, Commit.

3 add vairable

4. In Tutorial 2 we used a SOAP calculate function; let's add that. Since SOAP comes with a WSDL, just paste

http://www.html2xml.nl/Services/Calculator/Version1/Calculator.asmx?wsdl

into the Capture WSDL bar.

4. capture calc

5. Now right-click on Divide_1 and Clone it, twice, renaming the first clone KM and the second Hours.

5 Clone

6. Select KM. We need to enter the response from Distance (in meters) into a=, and b= should be 1000; the result will later feed Hours. In a=, right-click, [RV] Runtime Variable, New Custom Group, Distance. Set b= to 1000. Then Commit and Send.

6 KM

7. In the Response tab, select Runtime Variables, then Divide Result, and right-click. Add Variable Reference and name it Distance_KM. Notice the tab now shows Runtime Variables (1)?

7 RV km

8. Lastly, we need to divide that response by 20 km/h. Select Hours in the project tree; in a= right-click and select [RV] Runtime Variable, Calculator.asmx, the Distance_KM variable. Then set b= to 20. This gives the result we have been asked to model.

8 Hours

9. Switch to Run View and drag the Hours test case under the DefaultGroup. (We should name that after our scenario.) Notice that Distance and KM are known prerequisites and so get added automatically.

9 Run view

10. Make sure we use the Test Case Success Criteria, as we have not captured a baseline, and let's Log Fail Only. Make sure you are not publishing this to an HP Quality Center Project. Commit and Run Suite.

10. Run

Conclusion

You are automatically placed into the Real Run Monitor, but we will talk about Report View and logs in the next post. Save your project as Tut3.

You have just automated a test scenario of 3 unit tests across both REST and SOAP APIs that reside on different systems, at locations external to your network. Yes, you could have done this faster manually the first time, but we will look at ways to baseline and monitor it, as well as use this example for more locations or different speeds.

Comments? Please share any thoughts or provide feedback so others can learn.