
3. Reducing Scope – Test Iteration Strategies

Following on from SDLC strategies, the second area to focus on when attempting to reduce the total amount of testing is reducing the number of Test Iterations or levels.

Testing Levels

Every organization has its own steps in its SDLC and organizational structure. This often depends on the defined roles and functions in the organization and its development approach. A large waterfall-based organization may have dedicated security, identity and load testing teams, while a smaller agile organization may have these functions fall under a single team or person. In every organization there is a struggle between finding the most efficient process and budget, personal agendas and inertia. “It's always been that way” and “it's done by a different team” do not imply the process is optimized.

Unit Testing

With JSON services having no WSDL or Schema, many organizations have had to find new ways for Development and Testing to work together to ensure the correct coverage. Developers and testers may work off the same service description document, but how accurate is the document and how well does it match the developed services? What if a developer added or missed some functionality when coding? How does Testing even know what services are there to test? Group development approaches like Agile or Extreme Programming may help, yet they require testers to have a fair understanding of JSON.

A second challenge is how you gate, or ensure that developers have done at least some level of due diligence, before passing the code drop over to Testing. If the defect density is too high and the code needs to be heavily reworked, the entire test cycle is wasted. If previously found issues are not suitably addressed, the retesting is again a waste. Release management can take a significant amount of time and resources. To prevent this, various organizations have tried different approaches, from formal sign-offs to change documentation, release management and change management tools. Another approach is to have one team develop the basic unit tests and use those tests as the gating process. Testing then takes these unit tests and expands them to include far more intensive coverage. With SOAP it is most often the testers who develop the unit test, but with JSON it is now sometimes the developers who develop the unit test together with the service. Sharing tools and test cases only makes sense, as the developers require some way to test the code they are developing in any case. Handing it to Testing to expand is good practice, ensuring that “fresh eyes” are used. But what if a developer adds code that is not in the documentation and does not hand a test case over either? Is this risk acceptable?
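
As a rough illustration of what such a developer-supplied gating test might look like, here is a minimal sketch in Python. The endpoint, query parameter and response field are hypothetical placeholders, not taken from any real service; substitute the values from your own service description document.

```python
# Minimal sketch of a developer-supplied unit test for a JSON service.
# The endpoint, query parameter and response field are hypothetical placeholders;
# substitute the values from your own service description document.
import unittest

import requests

BASE_URI = "http://example.com/tempconvert/FahrenheitToCelsius/JSON"  # hypothetical


class TemperatureServiceUnitTest(unittest.TestCase):
    def test_known_conversion(self):
        # Basic functional check the developer hands over with the code drop.
        resp = requests.get(BASE_URI, params={"nFahrenheit": "80"}, timeout=5)
        self.assertEqual(resp.status_code, 200)
        # JSON has no explicit fault format, so assert on an expected field.
        self.assertIn("Celsius", resp.json())


if __name__ == "__main__":
    unittest.main()
```

Testing can then take this same test and extend it with boundary values, negative scenarios and data sources, rather than starting from scratch.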

Whatever the approach, the objective remains the same: from release 0.1 to GA, how can one reduce the number of test cases? The concept also remains the same – shift as much testing as possible left, earlier in the SDLC, right down to testing while developing, and make sure that “code drops” have a certain amount of maturity.

Integration Testing

Integration testing was traditionally done by a different team from unit testing. As per the post on Continuous Testing in Agile, if you are using an automation tool, the test cases should be the same. Integration testing should just expand the test cases developed in unit testing to include things like encryption, cookies, identity, performance of each service (and its enablers), etc. These individual test cases are then linked for automation in what we call chaining. Integration testing can then often be done concurrently with unit testing, much earlier in the SDLC.
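
To make the chaining idea concrete, here is a minimal sketch of two chained calls, where the response of the first service feeds the request of the second. The endpoints and field names (orders, orderId, status) are invented for illustration.

```python
# Sketch of "chaining": the response of one service call feeds the request of
# the next. The endpoints and field names (orders, orderId, status) are invented.
import requests

BASE = "http://example.com/api"  # hypothetical service under test


def create_order():
    resp = requests.post(f"{BASE}/orders", json={"item": "widget", "qty": 2}, timeout=5)
    resp.raise_for_status()
    return resp.json()["orderId"]  # assumed response field


def get_order_status(order_id):
    resp = requests.get(f"{BASE}/orders/{order_id}/status", timeout=5)
    resp.raise_for_status()
    return resp.json()["status"]  # assumed response field


if __name__ == "__main__":
    order_id = create_order()          # unit test of the create service
    print(get_order_status(order_id))  # chained call reusing the first response
```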

Using the right integrated automation tool – one that supports encryption, identity, performance and as many of your integration tests as possible in one place – eliminates distinct test cycles for each of the integration tests. SOAPSonar's ability to do functional, security, performance and load testing using the same test case is the number one reason customers say they selected it. That said, the performance of an individual service is something I recommend be part of the unit test and a functional requirement.

Consolidating integration tests into as few as possible and shifting as much as possible into the functional unit test cycle can greatly reduce the number of iterations. The impact of removing a single test iteration can clearly be seen in the service plan costing model. This is not about increasing risk by skipping systems tests.

Systems Testing

Systems testing requires a certain amount of code to be developed and tested, and the environment to be built. Moving it too early in the SDLC can actually add to the testing needs. On the other hand, if the integration and unit tests are all automated and completed successfully, systems testing becomes more about the environment and less about the code. Finding code quality issues at this point can be very expensive and troublesome to fix in time.

Again, if all the unit and integration tests are automated in the same test cases, and the tool offers the ability to do systems tests (e.g. geographically dispersed load testing), the effort required during this iteration can be greatly reduced. A note on performance vs. load testing: too many times I hear that performance testing is left until Systems Testing, and days before release it is suddenly discovered that there are performance issues. How a service performs should be tested far earlier. At this stage, load testing should be establishing that the environment (servers, network and other infrastructure) is suitable, not the performance of an individual service or enabler.

Using Virtualization or Simulated Services to stand in for services that are unavailable or cannot be load tested can be very useful in enabling earlier systems testing. Simulated services are also a key troubleshooting tool for eliminating environmental factors like network, cloud, data integrity and servers.

Acceptance Testing

Acceptance testing is where you suddenly discover whether all the work was correctly focused. Have your developers and testers spent all this time developing and testing the right requirements? NIST lists business requirements as the number one issue in software development. Often the users and sponsors are not technically minded, and extracting the requirements can be very difficult. Bringing acceptance testing in earlier to review what is being done can greatly decrease the time and effort spent delivering incorrect requirements.

In web services, acceptance testing is usually heavily weighted towards client-side usability and visuals – the need to see a working example and not just a JPEG. This is another benefit of simulated services. By creating a basic simulated service earlier in the SDLC, the GUI team can begin work. This GUI can then be used together with the simulated services for earlier acceptance testing and for comparison against the final services built.

Regression

Regression testing between releases is an important way to minimize the testing scope. If a regression test of a service shows no change between release 0.4 and 0.5, that service was not changed since the last test cycle and need not be tested again. Let's say a regression test shows 30% of the code had no changes; using the service plan costing model, how much testing could that save? That said, if the tests are automated, there may be little benefit in not “pushing that button” and retesting them anyway.
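
As a sketch of the idea, comparing a release against a stored baseline can be as simple as the following. The baseline file name and endpoint list are placeholders; a tool like SOAPSonar does this with full diff reporting rather than a straight equality check.

```python
# Sketch: flag which services are unchanged between releases by comparing current
# responses against a saved baseline. File name and endpoints are illustrative.
import json

import requests


def snapshot(endpoints):
    return {uri: requests.get(uri, timeout=5).json() for uri in endpoints}


def unchanged_services(baseline_file, endpoints):
    with open(baseline_file) as f:
        baseline = json.load(f)
    current = snapshot(endpoints)
    return [uri for uri in endpoints if baseline.get(uri) == current.get(uri)]


if __name__ == "__main__":
    endpoints = ["http://example.com/api/customers", "http://example.com/api/orders"]
    print(unchanged_services("baseline_release_0_4.json", endpoints))
```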

Where regression testing really shines is post launch. The maintenance phase of software often costs far more than the development phase, yet if the automation test cases developed during the development cycle can also be used for regression testing during the maintenance cycle, the effort is greatly reduced. This “continual testing” post production, and even API monitoring, is a rapidly growing area of focus for many organizations as more complex, meshed applications are developed.

A note here: so often I hear that security or some other team won't allow automation. However, a single systems test iteration that does not support automation can render the automation useless in the production environment and prevent any regression testing using the previously built test cases.

Conclusion

There is a lot about automation in this post. Every day I hear of benefits and issues with automation. Many organizations try to apply automation to their old process, find little benefit, and become discouraged. Alternately, a tester involved in load, security or some other single test iteration seeks a tool to do just that iteration – sometimes just for control of their environment – breaking the larger benefit by doing so. The key to successful automation is an end-to-end approach from the start of the SDLC onwards. You may choose not to automate the whole process, or to use other tools, but the design and application of automation needs to be done with the entire cycle in mind.

The next post will be on ways to reduce the number of unit tests.

 

SOAPSonar – Testing SOAP, REST or JSON Services

“What is the difference between testing a SOAP service and a JSON/REST or other service using SOAPSonar?” After trying to answer this question verbally three times in the last week, I thought it a good idea to show it in a post.

  • SOAP – “Simple Object Access Protocol” usually uses XML and has a WSDL. It also has an explicit error format: SOAP Fault messages. It tends to be heavier weight, and services are often far larger.
  • REST – Representational State Transfer is a software architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements within a distributed hypermedia system, of which JSON is one data format.
  • JSON – JavaScript Object Notation uses readable text (not XML) to transmit data objects consisting of attribute–value pairs. JSON does not use WSDL (the similar WADL is unpopular, still in draft and seldom used), but usually relies on a service description document. JSON Schema is also currently seldom used. JSON has no explicit error format. This makes JSON lightweight and ideal for mobile applications. A side-by-side sketch of the two request styles follows this list.
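
To make the difference tangible, the sketch below shows the same temperature-conversion request expressed as a SOAP envelope and as a JSON/REST call. The namespace, host and element names are placeholders, not taken from a real WSDL or service description document.

```python
# The same temperature-conversion request as a SOAP envelope and as a JSON/REST
# call. Namespace, host and element names are placeholders, not taken from a
# real WSDL or service description document.
soap_request = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <FahrenheitToCelsius xmlns="http://example.com/tempconvert">
      <Fahrenheit>80</Fahrenheit>
    </FahrenheitToCelsius>
  </soap:Body>
</soap:Envelope>"""

# The JSON/REST equivalent carries the operation and data in the URI and query
# string: no envelope, no WSDL, and no explicit fault structure.
rest_request = "GET http://example.com/tempconvert/FahrenheitToCelsius/JSON?nFahrenheit=80"

print(len(soap_request), "bytes of SOAP vs", len(rest_request), "bytes of REST")
```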

So what does that really mean for someone using SOAPSonar?

The Difference

With a SOAP Service

You can use the Capture WSDL bar and enter the URI with ?wsdl appended to discover all the available services. Try it now with

http://www.w3schools.com/webservices/tempconvert.asmx?wsdl.

Notice that TempConvert and two services are automatically populated. When you select FahrenheitToCelsius_1, notice the Body is populated with its fields in SOAPSonar. If you enter a value, commit and send, you get your XML response.

1. WSDL
SOAPSonar offers a way to view the XML request and request headers, using the tab labelled XML. The same is possible in JSON, though the headers tend to be lighter weight.

2. XML

You can also go to Documents and View Schema, which most likely does not exist for a JSON service.

3. View Schema
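
Outside SOAPSonar, the same WSDL-driven discovery can be sketched with the Python zeep library, which reads the WSDL and exposes the operations it finds. The parameter name Fahrenheit is an assumption based on the generated request body; confirm it against the WSDL before relying on it.

```python
# Sketch of WSDL-driven discovery outside SOAPSonar, using the zeep library.
# zeep downloads the WSDL, lists the operations it describes, and lets you call
# them. The parameter name "Fahrenheit" is an assumption; confirm it in the WSDL.
from zeep import Client

client = Client("http://www.w3schools.com/webservices/tempconvert.asmx?wsdl")

client.wsdl.dump()  # prints the services and operations found in the WSDL
print(client.service.FahrenheitToCelsius(Fahrenheit="80"))
```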

With a JSON Service

There is no WSDL that can be captured, and chances are there is no schema. This means it is not possible for a tester to automatically discover services in the same way. In SOAPSonar, we start by selecting File, New Test Group and then naming the test group. We can then add a new test by right-clicking, selecting New JSON Test (or the more generic New REST Test), and naming each one.

5 New JSON

We then need a URI, the query parameters and the Method. Let's use

http://webservices.daehosting.com/services/TemperatureConversions.wso/FahrenheitToCelcius/JSON

as the URI, ?nFahrenheit=decimal as the parameter to send, and GET as the method. Then, to send 80 as the value, we replace decimal with 80. How do I know this? I read the service description document and example. The REST view in SOAPSonar would be as below. Notice that the body of the request is frequently empty.

6 REST View
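
For comparison, the same call can be made programmatically. The sketch below uses the Python requests library against the example URI above; since there is no explicit fault format, both the status code and the body need to be inspected.

```python
# The same call made outside the tool with the requests library, using the
# example URI from the service description above.
import requests

uri = ("http://webservices.daehosting.com/services/"
       "TemperatureConversions.wso/FahrenheitToCelcius/JSON")
resp = requests.get(uri, params={"nFahrenheit": "80"}, timeout=10)

print(resp.status_code)  # JSON services have no explicit fault format,
print(resp.text)         # so inspect both the status code and the body.
```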

The JSON view is a single query line in the URI, plus the Method. There is no WSDL to view, and although incorrect queries will error, the error description is limited.

7 JSON

So how then do testers know what to test? It's usually one of four ways:
  1. The tester reviews the JSON code, looks for all URIs, Methods and attribute–value pairs, and reverse-engineers test cases. This takes significant JSON knowledge.
  2. The tester relies on the service description document, which should define all attribute–value pairs, Methods, URIs and query strings. This requires good documentation.
  3. The developer and/or tester (Agile facilitates this) create and define the unit tests together. This unit test is then used to validate the basic functionality of each function by both developer and tester. The tester then adds an ADS, chains functions, and tests negative scenarios, load, and all additional aspects of the function to get the desired coverage.
  4. You embrace the yet-to-be-finalized JSON Schema standard (tough given its level of maturity) – see the sketch after this list.
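
If you do go the JSON Schema route (option 4 above), a response can be validated against a schema directly. The sketch below uses the Python jsonschema library; the schema and the sample response body are invented for illustration.

```python
# Validating a JSON response against a JSON Schema (option 4). The schema and
# the sample response body are invented for illustration.
from jsonschema import ValidationError, validate

schema = {
    "type": "object",
    "properties": {"Celsius": {"type": "string"}},
    "required": ["Celsius"],
}

response_body = {"Celsius": "26.67"}  # stand-in for a parsed JSON response

try:
    validate(instance=response_body, schema=schema)
    print("response matches schema")
except ValidationError as err:
    print("schema violation:", err.message)
```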

With JSON services, defining success criteria is also extremely valuable, due to the lack of an explicit error format. It is also far easier for a developer to make minor changes to the code, as there is no schema to update, which makes regression testing important.
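
A hedged sketch of what explicit success criteria can look like in code: assert not just on the HTTP status, but on an expected field and value. The endpoint, field name and value below are assumptions for illustration only.

```python
# Explicit success criteria for a JSON service: assert on an expected field and
# value, not just on HTTP 200. Endpoint, field name and value are assumptions.
import requests


def assert_success(resp, field, expected=None):
    assert resp.status_code == 200, f"unexpected HTTP status {resp.status_code}"
    body = resp.json()
    assert field in body, f"missing field {field!r}"
    if expected is not None:
        assert body[field] == expected, f"unexpected value {body[field]!r}"


if __name__ == "__main__":
    resp = requests.get("http://example.com/tempconvert/FahrenheitToCelsius/JSON",
                        params={"nFahrenheit": "80"}, timeout=5)
    assert_success(resp, "Celsius", "26.67")  # hypothetical expected result
```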

The Same

So now that we have covered the differences, the rest is much the same. Lightweight JSON services tend to be much smaller, and the structure is very easy to understand. Be it SOAP or REST, SOAPSonar (and CloudPort) will identify all the variables and display them in the same manner.

Here is what that SOAP service looks like in graphical view for both request and response in the Runtime Variables tab. Any of these variables can now be used for chaining, automation data sources, success criteria, regression and a variety of testing options, using the right-click option.

4 Variable reference

Here is what the JSON service looks like for the graphical view for both request and response in the Runtime Variables tab. Any of these variables can also be used in the same way as SOAP, for chaining, automation data sources, success criteria, regression and a variety of testing options, using the right-click option.

8. JSON Variables

If you wish to use a variable with SOAP, you right-click and add it in the field.

9 Query

If you wish to use a variable with JSON, you right-click and add it in the URI (or body occasionally) in the same way.

10 JSON Query

Conclusion

Yes, there are differences between testing SOAP and REST services in SOAPSonar. The lightweight nature of JSON that makes it attractive also requires closer ties to development and more rigorous documentation in order to ensure that every service is “discovered” and tested. This means Testing and Development need a clearly defined process, demarcation, deliverables and co-operation between developers and testers.

I hope this helps those QA professionals that are now testing JSON rather than SOAP services to adapt to the changes quicker. Questions, comments?

10. SOAPSonar – Distributing Load Testing Geographically

Physically distributing the location of load test clients has two benefits. Firstly, it overcomes the resource limitations of a single network segment and workstation. Secondly, it allows for testing and understanding the impact of network and location on load and performance.

Yes, you could run around, call different people and press the button at the same time, but integrating the test results can then be very difficult. Triggering a load test from a single central instance, across multiple physical machines, and centralizing the results generates a single drill-down report.

In my previous tutorial we load tested using 3 virtual clients on only one physical machine running SOAPSonar. This tutorial carries on where that one ended, so please do Tutorial 9 first if you have not. We will now distribute the same test across multiple physical machines, or “agents”.

1. Check to make sure you are still running the CloudPort Runtime and that the Performance Test Tutorial is loaded. This will be the service we load test against. Confirm the IP address and URI.

1 runtime

2. Launch SOAPSonar, go back into Project view and run a quick Send Request to Server to make sure it is all still working. This confirms your [ADS] is in place, your runtime is up, and the URI is right.

2 SOAPSonar

3. Now we need to download our physical agent client software. Select Agents in the top menu (next to Help), then Download SOAPSonar Agent Installer. Your browser should launch and you should be able to download the latest agent by selecting it. It's important to keep your SOAPSonar release and the agent on the same release. Install the agent on your own machine, or on another if you prefer.

5. Agent port

4. Run the agent software after installation and select File, Preferences.

4 agent preferences

5. Confirm your port and select Log Individual Agent Run Events. You should now see CloudPort and the Agent in your taskbar.

3 Download

5.1 taskbar

6. Now we need to tell SOAPSonar that we have an agent available. In SOAPSonar, select Configure, Agents.

6 Agents

7. Select the icon to add a New Agent. Give it a name so you remember where it is (like Montreal, Vancouver, Halifax, London or, in my case, James Bond). Then enter the IP address of the agent (in my case it is local, so 127.0.0.1) and confirm the same port we checked in step 5 above. Select OK. We now have an agent to use along with our local instance in load tests. The idea is not to have it on the same machine for real load tests, and preferably on a different network segment, but this is just a tutorial on how.

7 james

8. Now switch to Run View; we should still have the same DefaultGroup and Group Performance Settings from the previous tutorial. Select Performance Loading Agents. Select the Import Default Agent Definitions icon and your agent should be shown. Activate it by toggling the red dot to green. Commit settings to save your agent.

8 activate

9. Now all we have to do is allocate virtual users to each agent. You have both your local SOAPSonar instance (the Local Agent) and the new one we created. Select Group Performance Settings and change the Virtual Clients to 4. Then, right next to that, select the icon for Agent Thread Allocation.

9 Add Virtual

10. Let's give 2 virtual clients to each of our physical agents. Confirm the duration is 3 seconds, then Commit and Run Suite.

10 alocate

11. You should now see the Agent Initialization Screen. Once the agent is initialized, select Start Test. If your agent does not initialize, check the IP address and Port and ensure you can ping the agent.

11 Agent initialize

12. In the Real-Time Monitor, you can now view performance by physical agent.

12 Real time

13. In Report View, you can now show performance for one physical agent, one virtual client, or aggregated. This allows you to compare performance from one physical location to another.

13 Report

Conclusion

Distributed agents are part of the Server Edition of SOAPSonar, along with an expanded number of virtual users. Physical load agents allow performance testing to scale by distributing the agents and their resources. They also allow for testing of the network infrastructure as well as application performance, using the same test suite we use for functional, regression and performance testing, saving time and remaining easy to automate.

This is the end of the introductory series of tutorials. If you are doing a trial and just looking for a high-level understanding of how SOAPSonar can help you, you should be on your way. From time to time I will post new tutorials on new features, different options, greater challenges, and features not yet covered.

In the meantime, let us know how you enjoyed these: comment privately in the form below, or publicly by starting a discussion at the bottom of the page.


Comments or suggestions are always welcome.

 

2. Reducing Scope – SDLC Strategies

I started this series with a high-level overview on Reducing Scope. The first group I listed is SDLC Strategies. The entire SDLC is often not under the scope of IT, but under business or product management. The challenge can be explaining the impact of certain strategies on the SDLC to non-IT-focused people. Often the gap between IT and business can be bridged with a little explanation of how certain choices in product strategy affect SDLC costs, be those choices architectural or simply strategic. It does, however, take a certain organizational culture to do this effectively.

“Corporate Mandate”

Ask anyone in QA, development, information security, etc. what their corporate mandate is, and you usually get the answer “zero incidents” or “100% defect free”. Not only is this not possible, it is often not really what management expects. The risk associated with the application should determine the level of effort and amount of testing required. A free web service of upcoming events, a free blog such as this, or an information service may not need the same testing effort as an on-line payments, financial calculation or personal information service. In the first example, you may need to do only a single test for functional compliance; for the second, perhaps you decide to do a boundary test at the upper and lower limits and then perhaps a negative scenario as well. You may also have additional testing in security and integration and far more regression or continuous testing needs. This is why defining some criteria around percentage coverage can be helpful. In a low-risk application, perhaps the business owner will accept 30% coverage. In a high-risk application, perhaps the business owner will express the need to have 100% or greater coverage.

A second aspect is how long this application and release are expected to be in production. If new releases are made quarterly, or the application has a short life-cycle, perhaps a lower level of testing is warranted. If, however, you are developing an API that is to be used corporately by multiple applications and clients for the next 5-10 years, you may want to test that API more thoroughly, as issues could replicate across your organization, and all of the clients may not even exist yet. More on this subject under Architecture.

Lastly, I wanted to mention the aspect of customer expectation. If I am using a Google Labs application for free, my expectation as a customer who finds a defect may be very different from finding one in my tax return application. A young start-up showing a proof of concept may find its customers more tolerant of a defect than a mature banking application; a free blog more tolerant of my poor proofreading than a paid subscription.

Architecture

I have written much about the need for web services to be both Robust and Sustainable. In the data economy, mobile and the very structure of web services have changed. Much of what falls under architecture today is strategy around embracing device or client diversity: developing, testing and maintaining the life-cycle of the API (web service) independently of that of the current or future clients. I have written before on the costs and management of that API-client relationship.

Many of our customers now use some form of intermediary to create alternate versions of a structured API – in new sizes, structures, versions, languages or identity formats – creating multiple variations of the API that rely on the same back-ends, and using these intermediaries to support device or client diversity.

Mediation of API to support new Clients

With new APIs and code come increased testing requirements and potential quality issues. Looking at the green check marks, do you need to test at each of these locations? If a problem is identified, how does one troubleshoot it? How can developers or testers know the service is working when a client may not even exist yet? The points behind the XML conversion system need to be thoroughly tested, kept unchanged, and carefully placed under continuous regression testing before moving left in the diagram above, yet customer experience is seen as only the left-most check box. Understanding exactly how the message structure is altered as it traverses between the client and the end server can be difficult, as the rules for XML conversion can be scripted in an entirely different language created by the vendor of the XML conversion system.

Conclusion

Although it may not fall within the tester's realm to decide on corporate strategy, it is the tester's obligation to use their knowledge of testing to explain the impact that one strategy has on quality and testing needs versus another. What makes this more challenging is that changing architectures brings the need for new skills and often a higher defect density, making previous KPIs and processes redundant and making any estimation of the amount of testing required much more complicated.

As always, Comments?

Sustainable API

Sustainable APIs are robust APIs that can be recombined or reused for the foreseeable future. It's about embracing device diversity.

Life-cycle Management

APIs are the way your customers will interact with your business, so it's important to treat an API like any other customer-facing initiative: plan and manage its development life-cycle while clearly communicating the expected support and availability time frames. Customers are unlikely to launch into the development of a new application that Consumes your API if the API is likely to change too frequently. At the same time, emerging security, identity and other standards require certain updates to keep current. A great example is Twitter retiring API 1.0 in favour of API 1.1: Twitter was forced to extend the retirement date by three months, yet still received heavy criticism from its developer community. Maintaining multiple versions and supporting the last two or three, much like software, is essential to give those further along the value chain time to redevelop their Consumer nodes.

Selecting well-supported and well-defined industry standards ensures both the broadest coverage and the ability to automate the conversion or reformatting of the API as new standards emerge. Developing a sustainable API requires stricter documentation and standards adherence; strict standards adherence enables device and delivery mechanism (SaaS, Cloud, Mobile, etc.) independence.

The most common concern I have heard about Open API or Open Government initiatives is the lack of predictable stability, which reduces the value of the API data sets. Client applications may now, or in the future, require SLAs (Service Level Agreements) to offer their customers in turn. Up-time, response time, security, redundancy and disaster recovery are key to acceptance and adoption of an API. Small changes in response formats can create a huge amount of manual effort: injecting a new field, or missing one in a response, can break the chain and cause data corruption for multiple Clients. Testing under distributed load, and tracking production traffic in order to keep abreast of performance and network bottlenecks, is highly recommended.

It is good practice to do regression testing when developing any application or API: regression testing between instances in development, pre-production and production. Because API-to-Client relationships are less formal, running a regression test on production to identify any changes that might otherwise go unnoticed is essential. Tools like SOAPSonar allow for baselining and scheduled regression testing, automating the comparison of results against previous test cases.
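
As a sketch of what a scheduled production regression check can look like, the snippet below compares the set of fields in today's response against a stored baseline, so that an added or removed field is flagged before it breaks downstream consumers. The baseline file and endpoint are placeholders.

```python
# Field-level regression check against production: compare the fields in today's
# response to a stored baseline so an added or removed field is caught early.
# The baseline file and endpoint are placeholders.
import json

import requests


def field_diff(baseline_file, uri):
    with open(baseline_file) as f:
        baseline_fields = set(json.load(f))
    current_fields = set(requests.get(uri, timeout=10).json())
    return current_fields - baseline_fields, baseline_fields - current_fields


if __name__ == "__main__":
    added, removed = field_diff("baseline_response.json",
                                "http://example.com/api/customer/address")
    print("fields added:", added, "fields removed:", removed)
```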

Componentized

Another key aspect of sustainable APIs is keeping them as smaller building blocks. Much like Lego, providing smaller componentized APIs allows Clients to consume only the information they need. This is particularly important if these APIs get extended over mobile or poor-quality networks.

Early adopters of SOA tended to develop wide APIs: APIs that responded with hundreds of fields, like the entire customer record. This drove up CPU and network utilization when a query only required the customer's five address fields. Mobile networks, lower-powered mobile devices and the Data Economy have shifted the focus to reusable or stackable component-based APIs that are requested concurrently or sequentially as needed. This increases performance, reduces server and network loads, and better supports a wider variety of possible future applications. Componentizing APIs increases the number of APIs, but reduces the need to create overlapping APIs and versions.

When a new B2B or other partner requires access to only a partial subset of an API, organizations are left needing to do one of three things:

  1. Create a new API for the new partner.
  2. Expose unused components for consumption by the partner, who then discards the parts not required.
  3. Mediate: strip out the unwanted or private fields in the API using filters or a message-body parsing device like an API gateway to convert the message body and create a new virtual API with only the required fields (sketched below).
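
Option 3 in miniature: the sketch below strips a wide response down to the fields a specific partner is allowed to see. The field names are invented, and in practice this logic usually lives in an API gateway or similar intermediary rather than in application code.

```python
# Option 3 in miniature: strip a wide response down to the fields one partner is
# allowed to see. Field names are invented; in practice this usually lives in an
# API gateway rather than in application code.
ALLOWED_FIELDS = {"name", "city", "postalCode"}


def mediate(wide_response: dict) -> dict:
    """Return a 'virtual API' payload containing only the permitted fields."""
    return {k: v for k, v in wide_response.items() if k in ALLOWED_FIELDS}


if __name__ == "__main__":
    wide = {"name": "Acme", "city": "Montreal", "postalCode": "H3B", "creditLimit": 50000}
    print(mediate(wide))  # creditLimit is dropped before reaching the partner
```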

Another benefit of componentized APIs is in development, QA and life-cycle management. Complex wide APIs with many lines of code cost considerably more to develop, troubleshoot, test and adapt. Hidden quality issues often end up becoming “common law features”: the bug exists for so long and affects so many systems that user support would be needed if it were ever to be fixed.

With the rapid development and definition of new IoT, Cloud, SaaS, identity and message formats to support the Data Economy, more and more organizations are redeveloping their SOAP-based services into REST, using OAuth- or SAML-based identity – redeveloping the same data sets on new endpoints in order to support these new standards. By componentizing, identity or other parts can be altered or changed without a full redevelopment.

Conclusion

In the Data Economy, Nodes may Consume information from multiple sources, refine it or apply some logic to it, then Provide the result in turn for other Clients to consume. As the Data Economy matures, I expect these value chains to grow in length, number and complexity. Sustainable APIs are those that have the flexibility and staying power to define and maintain their place in these emerging value chains.

9. SOAPSonar – Performance

One of the benefits of SOAPSonar is that performance and load testing use the same automation tests developed for functional testing. You just need to switch the mode from QA to Performance on the top right. All data sources, Projects, Test Cases and Test Suites remain unchanged. This allows performance and load tests to be built into functional requirements and regression testing, without creating a new series of test cases in a separate tool.

In order to do performance and load testing without launching an accidental denial-of-service attack, we need to use a CloudPort runtime. It's free, it's local and it integrates well with SOAPSonar. It's also a useful tool to have, and one we will use for a number of future tutorials. So let's go ahead and download and install it.

1. Download and install the CloudPort Runtime Player. You have to accept export restrictions, but you are not asked for any personal information.

2. Next, download the runtime project, ST3PP Performance, and unzip it to a location you can find again. You should have 3 files: Tutorial v1, Tutorial v2 and a short CSV. We use v2 in future tutorials.

3. Launch the CloudPort Runtime Player and select Run Simulation, then find the Tutorial v1 you downloaded and unzipped in step 2. Start the Simulation Player and accept port 8888 (it is a good idea to test availability).

3 run player

4. You should now have a JSON simulation running on your machine to test against. If you look under Performance Test Tutorial, next to the icon of the networked globe, you will see the URI should be http://127.0.0.1:8888/st3pp/ and the list of simulated services running, starting with soapsonar. Let's not change anything else here yet; just copy the URI http://127.0.0.1:8888/st3pp/.

4. Simulation

5. Leave the runtime running, launch SOAPSonar, and let's create a basic JSON test case. You can do this by selecting Testing and then Launch SOAPSonar Testing Client from within the CloudPort runtime, or you can just run SOAPSonar as you usually do. Select File, New, Test Group, then right-click on Tests in the Project Tree and select New JSON Test. Give it a name like Performance.

5. New Test

6. There is a small CSV file called performance.csv in the zip file you downloaded in step 2. Let's add that as an [ADS]. In the Project Tree under Configuration, select Data Sources. Add an Automation Data Source, then select File Data Source. Give it an alias, find the performance.csv you extracted, ensure the Data Variables point to the Request column, then select OK.

6 ADS

7. Back at our test case “Performance”, paste the URI copied from CloudPort, http://127.0.0.1:8888/st3pp/, into the URI field. Then add the query: in this case it is ? followed by (via right-click) the [ADS] and our Request column. Set the method to GET (although it does not matter in this runtime). Commit and Send Current Request to Server. Did you get a response that is not an error? So far this has all been covered in earlier tutorials. Your response should start with “Delivering”: “SOAPSonar”.

7. Project

8. Select Run View and drag our test case over to the DefaultGroup. Check that the top right-hand corner is in QA mode and that Success Criteria is Test Case Success and not Regression. Then Commit and Run. All 6 test cases from the .csv should run and pass. Did they? Select Report View, and notice that the 2nd test case, CloudPort, took over 500 ms and the 4th test case, Tools, took over 1000 ms. This is because the runtime has some latency added for these two cases; these individual services perform slower even when not under load. So far you have been doing functional testing. If we wanted to add a success criterion that fails any service over a certain response time, now would be a good time (a rough sketch follows below).

8 Slow
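
If you want to see the same criterion expressed outside the tool, here is a rough sketch that times a single request and fails it over 500 ms. The query value cloudport is an assumption about the Request column in performance.csv; substitute a value from your own [ADS].

```python
# Rough sketch of a response-time criterion outside the tool: time one request
# and fail it over 500 ms. The query value "cloudport" is an assumption about
# the Request column in performance.csv; substitute a value from your own [ADS].
import requests

resp = requests.get("http://127.0.0.1:8888/st3pp/?cloudport", timeout=10)
elapsed_ms = resp.elapsed.total_seconds() * 1000
assert elapsed_ms < 500, f"response took {elapsed_ms:.0f} ms, over the 500 ms criterion"
print(f"passed in {elapsed_ms:.0f} ms")
```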

9. Let's go back to Run View and change SOAPSonar from QA Mode to Performance Mode in the top right corner. Notice that the Suite Settings changed? If we select Run Performance Testing in Synchronous Mode, each test group is run sequentially and each test case's performance statistics are isolated and measured individually. Asynchronous mode runs all your test groups at the same time to replicate different traffic patterns. We only have one test in one group, so let's leave it on Synchronous. Let's also leave the rest as is for now. Be careful with logging, as it can affect your machine's load and hence performance.

9 Synchronous

10. Select the next tab, Group Performance Settings. Here we establish the number of virtual clients and the length and extent of the load. Select just 3 virtual clients and set it to Duration and 3 seconds. Leave Throttle unchecked so we can see how many TPS we can hit with 3 virtual clients. We have made no changes to the [ADS], functional test, regression or success criteria. Commit and Send.

2014-010 Virtual

11. This time your Real-Time Monitor looks different. In Report View, you see a consolidated report; when you select it, you see a breakdown per virtual client. Each virtual client can export a file for further processing, or you can generate a report. How many TPS did you hit? We highly recommend using the 90% Res Time column as a reference, ignoring the 10% of responses that are extra long or short (a quick way to compute this yourself is sketched below).

11 Report
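
For reference, the 90% Res Time figure can be reproduced from exported response times: sort the samples and take the value below which 90% of responses fall. The numbers below are made up for illustration.

```python
# Reproducing the "90% Res Time" figure from exported response times: sort the
# samples and take the value below which 90% of responses fall. Numbers invented.
def percentile(samples, pct):
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]


response_times_ms = [112, 95, 101, 98, 1450, 103, 99, 97, 110, 105]
print("90th percentile:", percentile(response_times_ms, 90), "ms")  # the 1450 ms outlier is ignored
```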

Conclusion

Doing load and performance testing as early in the development cycle as possible can be critical in finding the time to address any concerns. Using the same test case and simply switching to Performance mode, rather than developing a new set of test cases in a different tool, enables far greater coverage in less time.

In our next tutorial we will use both virtual and physically distributed load agents in a performance and load test.

Take a minute to give me some private feedback in the form below. This will be mailed to me and not published.


Otherwise please post any public comments below.