
10. SOAPSonar – Distributing Load Testing Geographically

Physically distributing the location of load test clients has two benefits. First, it overcomes the resource limitations of a single network segment and workstation. Second, it allows you to test and understand the impact of network conditions and location on load and performance.

Yes, you could run around, call different people and press the button at the same time, but integrating the test results afterwards can be very difficult. Triggering a load test from a single central instance, across multiple physical machines, and centralizing the results generates a single drill-down report.

In my previous tutorial we load tested using 3 virtual clients on a single physical machine running SOAPSonar. This tutorial carries on where that one ended, so please do Tutorial 9 first if you have not. We will now distribute the same test across multiple physical machines, or “agents”.

1. Check to make sure you are still running the CloudPort Runtime and that the Performance Test Tutorial is loaded. This is the service we will load test against. Confirm the IP address and URI.

2. Launch SOAPSonar, go back into Project view, and run a quick Send Current Request to Server to make sure it is all still working. This confirms your [ADS] is in place, your runtime is up, and the URI is right.


3. Now we need to download the Physical Agent client software. Select Agents in the top menu (next to Help), then Download SOAPSonar Agent Installer. Your browser should launch and you should be able to download the latest agent by selecting it. It's important to keep your SOAPSonar release and the agent on the same release. Install the agent on your own machine, or on another if you prefer.


4. Run the agent software after installation and select File, Preferences.


5. Confirm your port and select Log Individual Agent Run Events. You should now see both CloudPort and the Agent in your taskbar.



6. Now we need to tell SOAPSonar that we have an agent available. In SOAPSonar select Configure, Agents.


7. Select the icon to add a New Agent. Give it a name so you remember where it is (like Montreal, Vancouver, Halifax, London, or in my case, James Bond). Then enter the IP address of the agent (in my case it's local, so 127.0.0.1) and confirm the same port we checked in step 5 above. Select OK. We now have an agent to use along with our local instance in load tests. The idea is not to have it on the same machine for real load tests, and preferably to have it on a different network segment, but this is just a tutorial on how.


8. Now switch to Run View; we should still have the same DefaultGroup and Group Performance Settings from the previous tutorial. Select Performance Loading Agents. Select the Import Default Agent Definitions icon and your agent should be shown. Activate it by toggling the red dot to green. Commit settings to save your agent.


9. Now all we have to do is allocate virtual users to each agent. You have both your local SOAPSonar instance (the Local Agent) and the new one we created. Select Group Performance Settings and change the Virtual Clients to 4. Then, right next to that, select the icon for Agent Thread Allocation.


10. Let's give 2 virtual clients to each of our physical agents. Confirm the duration is 3 seconds, then Commit and Run Suite.


11. You should now see the Agent Initialization Screen. Once the agent is initialized, select Start Test. If your agent does not initialize, check the IP address and Port and ensure you can ping the agent.
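If the agent pings but still will not initialize, a quick TCP check tells you whether anything is actually listening on the agent's port. A minimal sketch in Python, assuming the host and port you configured in step 7 (9001 below is only a placeholder, not a known default):

```python
# Minimal reachability check for a load agent. The host and port are the
# ones configured in step 7; 9001 is a placeholder, not a known default.
import socket

def agent_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the agent succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host, port = "127.0.0.1", 9001  # substitute your agent's IP and port
    print("agent reachable" if agent_reachable(host, port)
          else "nothing listening: check IP, port and firewall")
```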


12. In the Real-Time Monitor, you can now view performance by physical agent.


13. In Report View, you can now show performance for one physical agent, one virtual client, or aggregated. This allows you to compare performance from one physical location to another.


Conclusion

Distributed agents are part of the Server Edition of SOAPSonar, along with an expanded number of virtual users. Physical load agents allow performance testing to scale by distributing the agents and their resources. They also allow for testing of network infrastructure as well as application performance. Reusing the same test suite for functional, regression and performance testing saves time and is easily automated.

This is the end of the introductory series of tutorials. If you are doing a trial and just looking for a high-level understanding of how SOAPSonar can help you, you should be on your way. From time to time I will post new tutorials on new features, different options, greater challenges, and features not yet covered.

In the meantime, let us know how you enjoyed these: privately via the contact form, or publicly by starting a discussion at the bottom of the page.


Comments or suggestions are always welcome.

 

2. Reducing Scope – SDLC Strategies

I started this series with a high-level overview on Reducing Scope. The first group I listed is SDLC Strategies. The entire SDLC is often not under the scope of IT, but under business or product management. The challenge can be explaining the impact of certain strategies on the SDLC to non-IT-focused people. Often the gap between IT and business can be bridged with a little explanation of how certain choices in product strategy can affect SDLC costs, be these choices architectural or simply strategic. It does, however, take a certain organizational culture to do this effectively.

“Corporate Mandate”

Ask anyone in QA, development, information security etc. what their corporate mandate is, and you usually get the answer “Zero Incidents” or “100% defect free”. Not only is this not possible, it is often not really something management expects. The risk associated with the application should determine the level of effort and amount of testing required. A free web service of upcoming events, a free blog such as this, or an information service may not need the same testing effort as an online payments, financial calculation or personal information service. In the first example, you may need to do only a single test for functional compliance; in the second, perhaps you decide to do a boundary test at the upper and lower limits, and then perhaps a negative scenario as well. You may also have additional testing in security and integration, and far more regression or continuous testing needs. This is why defining some criteria for percentage coverage can be helpful. In a low-risk application, perhaps the business owner will accept 30% coverage. In a high-risk application, perhaps the business owner will express the need to have 100% or greater coverage.

A second aspect is how long this application and release are expected to be in production. If new releases are made quarterly, or the application has a short life-cycle, perhaps a lower level of testing is warranted. If, however, you are developing an API that is to be used corporately by multiple applications and clients for the next 5-10 years, you may want to test that API more thoroughly, as issues could replicate across your organization, and all clients may not yet even exist. More on this subject under Architecture.

Lastly, I wanted to mention the aspect of customer expectation. If I am using a Google Labs application for free, my expectations as a customer who finds a defect may be very different from finding one in my tax return application. A young start-up showing a proof of concept may find its customers more tolerant of a defect than a mature banking application, and a free blog more tolerant of my poor proofreading than a paid subscription.

Architecture

I have written much about the need for web services to be both Robust and Sustainable. In the Data Economy, mobile and the very structure of web services have changed. Much of what falls under architecture today is strategy around embracing device or client diversity: developing, testing and maintaining the life-cycle of the API (web service) independently of that of current or future clients. I have written before on the costs and management of that API-client relationship.

Many of our customers now use some form of intermediary to create alternate versions of structured API, varying in size, structure, version, language or identity format: multiple variations of an API that rely on the same back-ends. These intermediaries are used to support device or client diversity.

Mediation of API to support new Clients

With new API and code come increased testing requirements and potential quality issues. Looking at the green check marks, do you need to test at each of these locations? If a problem is identified, how does one troubleshoot it? How can developers or testers know the service is working when a client may not even exist yet? The services behind the XML conversion system need to be unchanged, thoroughly tested and carefully placed under continuous regression testing before moving left in the diagram above, yet customer experience is seen at only the left-most check box. Understanding exactly how the message structure is altered as it traverses between the client and the end server can be difficult, as the rules for XML conversion may be scripted in an entirely different language created by the vendor of the XML conversion system.
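To make that concrete, here is a hypothetical sketch in Python of what such a conversion layer does. The XML, field names and JSON shape are invented for illustration; the point is that a test behind the converter sees one structure and a test in front of it sees another:

```python
# Hypothetical mediation step: the back-end responds in XML, the client
# consumes JSON with different field names. All names here are invented.
import json
import xml.etree.ElementTree as ET

BACKEND_XML = """
<Customer>
  <CustomerId>42</CustomerId>
  <FullName>Ada Lovelace</FullName>
</Customer>
"""

def mediate(xml_payload: str) -> str:
    """Reshape the back-end XML into the JSON the client expects."""
    root = ET.fromstring(xml_payload)
    return json.dumps({
        "id": int(root.findtext("CustomerId")),
        "name": root.findtext("FullName"),
    })

# Tests behind the converter must assert on the XML; tests in front of it
# on the JSON. Neither alone proves the conversion rules are right.
print(mediate(BACKEND_XML))  # {"id": 42, "name": "Ada Lovelace"}
```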

Conclusion

Although it may not fall to the tester to decide on corporate strategy, it is the tester's obligation to use their knowledge of testing to explain the impact one strategy has on quality and testing needs versus another. What makes this more challenging is that changing architectures bring the need for new skills and often higher defect density, making previous KPIs and processes redundant and making any estimation of the amount of testing required much more complicated.

As always, Comments?

Sustainable API

Sustainable API are Robust API that can be recombined or reused for the foreseeable future. It's about embracing device diversity.

Life-cycle Management

APIs are the way your customers will interact with your business, so it's important to treat an API like any other customer-facing initiative: plan and manage its development life-cycle while clearly communicating the expected support and availability time frames. Customers are unlikely to launch into the development of a new application that Consumes your API if the API is likely to change too frequently. At the same time, emerging security, identity and other standards require certain updates to keep current. A great example is Twitter retiring API 1.0 in favour of API 1.1: forced to extend the retirement date by 3 months, Twitter still received heavy criticism from its developer community. Maintaining multiple versions and supporting the last 2 or 3, much like software, is essential to give those further along the value chain time to redevelop their Consumer nodes.

Selecting well-supported, well-defined industry standards ensures both the broadest coverage and the ability to automate the conversion or format of the API as new standards emerge. Developing sustainable API requires stricter documentation and standards adherence. Strict standards adherence enables device and delivery mechanism (SaaS, cloud, mobile, etc.) independence.

The most common concern I have heard about Open API or Open Government is the lack of predictable stability, which reduces the value of the API data sets. Client applications may now, or in the future, require SLAs (Service Level Agreements) to offer their customers in turn. Up-time, response time, security, redundancy and disaster recovery are key to the acceptance and adoption of an API. Small changes in response formats can create a huge amount of manual effort. Injecting a new field, or missing one in a response, can break the chain and cause data corruption for multiple Clients. Testing under distributed load, and tracking production traffic in order to keep abreast of performance and network bottlenecks, is highly recommended.

It is good practice to do regression testing when developing any application or API: regression testing between instances in development, pre-production and production. Since API and API-Client relationships are less formal, running a regression test on production to identify any changes that might otherwise go unnoticed is essential. Tools like SOAPSonar allow for baselining and scheduled regression testing, automating the comparison of results to previous test cases.
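The baselining principle is simple enough to sketch: record a response once, then diff the structure of every later response against it. A minimal stdlib sketch, assuming a JSON endpoint; the URL is a placeholder, and SOAPSonar automates this far more thoroughly:

```python
# Minimal baseline regression check for a JSON API. The URL is a
# placeholder; a real check would also compare value types and nesting.
import json
import urllib.request
from pathlib import Path

URL = "http://127.0.0.1:8888/st3pp/?soapsonar"  # placeholder endpoint
BASELINE = Path("baseline.json")

def fetch(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

current = fetch(URL)
if not BASELINE.exists():
    BASELINE.write_text(json.dumps(current, indent=2, sort_keys=True))
    print("baseline recorded")
else:
    baseline = json.loads(BASELINE.read_text())
    added = current.keys() - baseline.keys()
    missing = baseline.keys() - current.keys()
    if added or missing:
        print(f"schema drift: added={sorted(added)} missing={sorted(missing)}")
    else:
        print("response structure matches baseline")
```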

Componentized

Another key aspect of Sustainable API is keeping them as smaller building blocks. Much like Lego, providing smaller, componentized API allows Clients to consume only the information they need. This is particularly important if these API are extended over mobile or poor-quality networks.

Early adopters of SOA tended to develop Wide API: API that responded with hundreds of fields, like the entire customer record. This drove up CPU and network utilization even if a query only required the customer's 5 address fields. Mobile networks, lower-powered mobile devices and the Data Economy have shifted the focus to reusable, stackable, component-based API that are requested concurrently or sequentially as needed. This increases performance, reduces server and network loads, and better supports a wider variety of possible future applications. Componentizing API increases the number of API, but reduces the need to create overlapping API and versions.

When a new B2B or other partner requires access to only a partial subset of an API, organizations are left needing to do one of three things:

  1. Create a new API for the new partner.
  2. Expose unused components for Consumption by the partner, who then discards the parts not required.
  3. Mediate, stripping out the unwanted or private fields using filters or a message-body parsing device like an API gateway to convert the message body and create a new virtual API with only the required fields (see the sketch below).
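Option 3 is easy to picture in code. Below is a hypothetical whitelist filter of the kind an API gateway applies; the record and field names are invented:

```python
# Hypothetical gateway filter for option 3: expose a virtual API carrying
# only the fields this partner is entitled to. All field names are invented.
FULL_RECORD = {
    "id": 42,
    "name": "Ada Lovelace",
    "address": "12 St James Square",
    "credit_limit": 10000,    # private: must never reach this partner
    "internal_notes": "VIP",  # private: must never reach this partner
}

PARTNER_FIELDS = {"id", "name", "address"}  # the partner's contract

def virtual_api_response(record: dict, allowed: set) -> dict:
    """Whitelist filter: copy only the allowed fields into the response."""
    return {k: v for k, v in record.items() if k in allowed}

print(virtual_api_response(FULL_RECORD, PARTNER_FIELDS))
# {'id': 42, 'name': 'Ada Lovelace', 'address': '12 St James Square'}
```

A whitelist is the safer shape for this filter: fields added to the back-end later stay hidden from the partner unless explicitly granted.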

Another benefit of componentized API is in development, QA and life-cycle management. Complex Wide API with many lines of code cost considerably more to develop, troubleshoot, test and adapt. Hidden quality issues often end up becoming Common Law Features: the bug has existed for so long and affects so many systems that users would need support if it were ever fixed.

With the rapid development and definition of new IoT, cloud, SaaS, identity and message formats to support the Data Economy, more and more organizations are redeveloping their SOAP-based services into REST, using OAuth- or SAML-based identity: redeveloping the same data sets on new endpoints in order to support these new standards. By componentizing, identity or other parts can be altered or changed without a full redevelopment.

Conclusion

In the Data Economy, Nodes may Consume information from multiple sources, refine it or apply some logic to it, then Provide the result in turn for other Clients to consume. As the Data Economy matures, I expect these value chains will grow in length, number and complexity. Sustainable API are those that have the flexibility and staying power to define and maintain their place in these emerging value chains.

9. SOAPSonar – Performance

One of the benefits of SOAPSonar is that performance and load testing use the same automation tests developed for functional testing. You just need to switch the mode from QA to Performance on the top right. All data sources, Projects, Test Cases and Test Suites remain unchanged. This allows performance and load tests to be built into functional requirements and regression testing, without creating a new series of test cases in a separate tool.

In order to do performance and load testing, and to prevent any accidental denial-of-service attack, we need to use a CloudPort runtime. It's free, it's local and it integrates well with SOAPSonar. It's also a useful tool to have and one we will use in a number of future tutorials. So let's go ahead and download and install it.

1. Download and install the CloudPort Runtime Player. You have to accept export restrictions, but you are not asked for any personal information.

2. Next, download the runtime, ST3PP Performance, and unzip it to a location you can find again. You should have 3 files: Tutorial v1, Tutorial v2 and a short CSV. We use v2 in future tutorials.

3. Launch the CloudPort Runtime Player and select Run Simulation, then find the Tutorial v1 you downloaded and unzipped in step 2. Start the Simulation Player and accept port 8888 (it's a good idea to Test Availability).


4. You should now have a JSON simulation running on your machine to test against. If you look under Performance Test Tutorial, next to the icon of the networked globe, you will see that the URI should be http://127.0.0.1:8888/st3pp/, along with the list of simulated services running, starting with soapsonar. Let's not change anything else here yet; just copy the URI http://127.0.0.1:8888/st3pp/.


5. Leave the runtime running, launch SOAPSonar, and let's create a basic JSON test case. You can do this by selecting Testing, then Launch SOAPSonar Testing Client from within the CloudPort runtime, or you can just run SOAPSonar as you usually do. Select File, New, Test Group, then right-click on Tests in the Project Tree and select New JSON Test. Give it a name like Performance.


6. There is a small CSV file called performance.csv in the zip file you downloaded in step 2. Let's add that as an [ADS]. In the Project Tree under Configuration, select Data Sources. Add Automation Data Source, then select File Data Source. Give it an alias, find the performance.csv you extracted, ensure the Data Variables is set to the Request column, then select OK.


7. Back at our test case “performance”, paste the URI copied from CloudPort, http://127.0.0.1:8888/st3pp/, into the URI field. Then add the query: in this case it will be ? followed by a right-click to insert the [ADS] Request column. Set the method to GET (although it matters little in this runtime). Commit and Send Current Request to Server. Did you get a response that's not an error? So far this has all been covered in earlier tutorials. Your response should start with “Delivering”: “SOAPSonar”.
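Under the hood, this data-driven test is just a loop over the CSV. A rough stand-alone equivalent in Python, assuming the column is named Request and the query format is exactly as set up above:

```python
# Rough equivalent of the [ADS]-driven test: read the Request column from
# performance.csv and issue one GET per row against the local runtime.
# The column name, query format and "Delivering" marker are assumptions
# taken from this tutorial's setup.
import csv
import urllib.request

BASE = "http://127.0.0.1:8888/st3pp/"

with open("performance.csv", newline="") as f:
    for row in csv.DictReader(f):
        url = BASE + "?" + row["Request"]
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode()
            ok = resp.status == 200 and "Delivering" in body
            print(f"{url} -> {resp.status} {'PASS' if ok else 'FAIL'}")
```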


8. Select Run View and drag our test case over to the DefaultGroup. Check to make sure the top right corner is in QA mode and Success Criteria is Test Case Success and not Regression. Then Commit and Run. All 6 test cases from the .csv should run and pass. Did they? Select Report View, and notice that the 2nd test case, CloudPort, took over 500 ms and the 4th test case, Tools, took over 1000 ms. This is because the runtime has latency added for these two cases; these individual services perform slower even when not under load. So far you have been doing functional testing. If we wanted to add a success criterion that fails any service over a certain response time, now would be a good time (see the sketch below).
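Sketched outside the tool, such a criterion is just a timing assertion around the request. The 500 ms limit below mirrors the latency we just observed, but the threshold is your call; the query is a placeholder row from the tutorial's CSV:

```python
# A response-time success criterion as a timing assertion. The query is a
# placeholder from the tutorial's CSV; pick your own limit.
import time
import urllib.request

def timed_get(url: str, limit_ms: float = 500.0) -> bool:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    elapsed_ms = (time.perf_counter() - start) * 1000
    passed = elapsed_ms <= limit_ms
    print(f"{url}: {elapsed_ms:.0f} ms -> {'PASS' if passed else 'FAIL'}")
    return passed

timed_get("http://127.0.0.1:8888/st3pp/?soapsonar")
```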


9. Let's go back to Run View and change SOAPSonar from QA Mode to Performance Mode in the top right corner. Notice that the Suite Settings changed? If we select Run Performance Testing in Synchronous Mode, each test group is run sequentially and each test case's performance statistics are isolated. Asynchronous mode runs all your test groups at the same time to replicate different traffic patterns. We only have one test in one group, so let's leave it on Synchronous, and leave the rest for now. Be careful with logging, as it can affect your machine's load and hence performance.


10. Select the next tab, Group Performance Settings. Here we establish the number of virtual clients and the length and extent of the load. Select just 3 virtual clients and set it to Duration and 3 seconds. Leave Throttle unchecked so we can see how many TPS we can hit with 3 virtual clients. We have made no changes to the [ADS], functional test, regression or success criteria. Commit and Send.
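What these settings amount to can be pictured as a tiny load generator: three workers hammering the endpoint for three seconds while latencies are recorded. A minimal sketch of the idea, not of how SOAPSonar implements it:

```python
# Three "virtual clients" running unthrottled for a 3 second duration,
# recording per-request latency. Endpoint/query as set up in this tutorial.
import threading
import time
import urllib.request

URL = "http://127.0.0.1:8888/st3pp/?soapsonar"
DURATION_S = 3
CLIENTS = 3

latencies, lock = [], threading.Lock()

def virtual_client(deadline: float) -> None:
    while time.perf_counter() < deadline:
        start = time.perf_counter()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
        except OSError:
            continue  # count only completed transactions
        with lock:
            latencies.append((time.perf_counter() - start) * 1000)

deadline = time.perf_counter() + DURATION_S
threads = [threading.Thread(target=virtual_client, args=(deadline,))
           for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

latencies.sort()
p90 = latencies[int(len(latencies) * 0.9) - 1] if latencies else float("nan")
print(f"TPS ~ {len(latencies) / DURATION_S:.1f}, 90% response time {p90:.0f} ms")
```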


11. This time your Real-Time Monitor is different. In Report View, you see a consolidated report; when you select that, you see a breakdown per virtual client. Each virtual client can export a file for further processing, or you can generate a report. How many TPS did you hit? We highly recommend using the 90% Res Time column as a reference, ignoring the 10% of responses that are extra long or short.


Conclusion

Doing load and performance testing as early as possible in the development cycle can be critical to finding the time to address any concerns. Using the same test cases and simply switching to performance mode, versus developing a new set of test cases in a different tool, enables far greater coverage in less time.

In our next tutorial we will use both virtual and physically distributed load agents in a performance and load test.

Take a minute to give me some private feedback via the contact form; it will be mailed to me and not published.


Otherwise please post any public comments below.

Robust API

In the Data Economy, the currency is information. The near-default method of accessing information today is via developing and exposing Web Services API as Providers of information. Most applications are developed to Consume more than one API, increasingly from more than one location, source and even organization. SaaS, cloud services, supply chains, payment clearing, shipping information, social media etc. are all examples that rely on APIs. A good-quality API is essential for success in the Data Economy, and corporations need to define an approach to API quality in much the same way as they would any other product's quality. ST3PP refers to this need for a strategy to ensure Robust and Sustainable API.

The traditional approach to development architecture tightly coupled web services development with application client development. Business gave requirements, and developers created all the services to expose those requirements, then developed the client UI to access the services. When they felt they were “done”, they passed it to quality assurance, which tested the “application” as a whole via the client UI, often manually entering keys into the client UI's fields in an attempt to ensure functionality. If anything was identified by QA, it usually went back to development to “fix”, and development decided whether it was easier to fix the service or the client. The next time business issued new requirements, the entire process started again.

In the Data Economy, the client application needs to be treated independently of the Web Service API. API are designed as re-usable components that stand independently of any client or other application that may Consume them. The various API each Provide some portion of information which the Client application may consolidate or refine. These API could come from multiple locations, organizations or delivery models, like SaaS, BYOD, Cloud, Open API etc. API are no longer something only IT deals with, but are considered a core business asset, differentiating one organization from the next in a competitive, information-based economy. Better API = better ability to establish corporate value in the economic chain. To get the most from API assets, a new approach to development and QA is needed. API need to be treated independently, like an end product. Developing API to Provide information to as-yet-unknown future Consumers requires that API be Robust.

1) Functional Testing

In the Data Economy, the need for each field in each API to be functional still exists. Since API are no longer being developed for a particular client, an independent method of testing the API is needed to ensure no functionality, format or other limitation exists in the API. Automated testing using the broadest possible data sources can further ensure Robustness.

2) Compliance Testing

Developing an API for unknown Consumer applications requires that the API meet certain standards, to avoid versioning based on client applications. Testing of the API needs to include its compliance with accepted standards, in order to ensure that a new Consumer, perhaps a new native smartphone application, will operate in the same way as a web browser client in Chrome or another server refining the information.

3) Security Testing

Robust API need to be secure API, independent of the client application Consuming the API. SQL injection, cross-site scripting, improper key or session management and the other OWASP Top 10 vulnerabilities need to be tested for. “Cloud” identity structures like WS-Security, SAML and OAuth, along with key management, become key components of testing for Robustness. Additional information leakage through API with “forgotten” exposed information fields and metadata can be filtered using a governance gateway.
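As an illustration only, and only ever against services you own, a negative test can send classic probe payloads and watch for responses that leak implementation detail. Real security testing needs far richer payload sets and checks than this sketch:

```python
# Toy negative-security probe: a few classic payloads, plus a crude scan
# for error strings that suggest information leakage. The endpoint is a
# placeholder; point it only at your own test services.
import urllib.error
import urllib.parse
import urllib.request

BASE = "http://127.0.0.1:8888/st3pp/"  # placeholder endpoint
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]
LEAK_MARKERS = ("SQL syntax", "stack trace", "Exception", "ORA-")

for payload in PAYLOADS:
    url = BASE + "?" + urllib.parse.quote(payload)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode(errors="replace")
    except urllib.error.HTTPError as err:
        body = err.read().decode(errors="replace")  # error pages leak too
    leaks = [m for m in LEAK_MARKERS if m in body]
    print(f"{payload!r}: {'possible leak ' + str(leaks) if leaks else 'no obvious leakage'}")
```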

4) Performance and Scalability

Performance and scalability are not only a function of hardware, but of location, encryption, message signing, network, wait times, retries, load throttling and many other application design criteria. An application that Consumes information from a variety of API on different networks, managed by different teams, requires additional hardening to ensure performance and scalability. How long should I wait if one API is not available? Do I require a resend, and after how long? What if someone is on a poor-quality mobile network; how would that affect my performance? What if I required a higher level of encryption? How many concurrent clients can I support with my current infrastructure? What if I split servers or added a second location?
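Several of those questions reduce to timeout and retry policy. One common answer, sketched below, is bounded retries with exponential backoff; the numbers are design choices to be validated under load, not universal answers:

```python
# A timeout plus bounded retries with exponential backoff. The timeout,
# retry count and backoff are assumptions to tune for your own services.
import time
import urllib.request

def get_with_retries(url: str, timeout_s: float = 2.0,
                     retries: int = 3, backoff_s: float = 0.5) -> bytes:
    for attempt in range(retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except OSError:
            if attempt == retries:
                raise  # give up and surface the failure to the caller
            time.sleep(backoff_s * (2 ** attempt))  # wait 0.5 s, 1 s, 2 s ...

data = get_with_retries("http://127.0.0.1:8888/st3pp/?soapsonar")
print(len(data), "bytes")
```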

Visionary organizations have started by creating “Information” or “Data Management” executives to extract value from corporate information for the Data Economy. This involves treating API as we would an application core to the corporation's success. Poor-quality API limit access and make extracting value from data near impossible. These executives need to ensure that business, development and QA structure the right process and approach to creating more Robust and Sustainable API.

8. SOAPSonar – Identity and Authorization

Many of our customers start their development or testing using some other product, then reach a point where they need to use some form of identification, cookie, key, encryption or other form of authorization, and realize the tool they are using does not support what they need. They are then left weighing the months of work completed against starting over with a new tool.

There are simply so many ways, standards and architectures for identity that most tools are unable to support more than a few. Before selecting an automation tool, I highly recommend taking the time to identify and test all the applicable identity and authorization needs you may have. Although this is a “tutorial”, I am not going to cover all possible options, or full detail on any single option. Rather, I will try to explain some of the more common options and where to find these settings.

In this first example I want to show the testing of a login service. In this (REST) example a POST request is made to a given URI and the message body contains the username and password variables. Once successfully logged in, the service responds with a Token and ID that are used further in the application. These username and password variables can be tested via an automation data source (.csv), and each step of the login process chained by creating a runtime variable from the token response. The benefit is then the ability to load test this login service using this [ADS].
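That flow is easy to express in code. A hedged sketch: the endpoint, the token and ID field names, and the Bearer scheme on the follow-on request are invented stand-ins for whatever your service actually uses:

```python
# Data-driven login test: one POST per row of credentials.csv, chaining the
# returned token into a follow-on request. Endpoint, field names and the
# Bearer scheme are hypothetical; substitute your service's own.
import csv
import json
import urllib.request

LOGIN_URL = "https://example.com/api/login"  # hypothetical login service

def login(username: str, password: str) -> dict:
    body = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        LOGIN_URL, data=body, method="POST",
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)  # e.g. {"token": "...", "id": "..."}

with open("credentials.csv", newline="") as f:
    for row in csv.DictReader(f):  # columns: username,password
        session = login(row["username"], row["password"])
        follow = urllib.request.Request(
            "https://example.com/api/profile",  # hypothetical chained call
            headers={"Authorization": f"Bearer {session['token']}"})
        with urllib.request.urlopen(follow, timeout=10) as resp:
            print(row["username"], "->", resp.status)
```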

Sometimes identity needs to be in the header. In the request section of SOAPSonar, at the bottom, are a number of tabs. By clicking either the keys icon or the Authentication tab, you can see a number of configuration options. Here you can find Basic, Kerberos, or Digest Authentication settings. You can also set up returned cookies and SSL certificates to be embedded as part of the header message (a reminder at the bottom says: for WSS-Token SOAP Header Authentication, use the Tasks tab). In the screen below, I selected Basic and entered my username and password.


When I look at the request I sent, I can see the line Authorization: Basic aXZhbjpteXNlY3JldCBwYXNzd29yZA== added to the message headers.
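There is nothing secret about that value: Basic authentication is just the base64 encoding of username:password. Two lines of Python reproduce the exact header above:

```python
# The Basic header is base64("username:password") and nothing more.
import base64

credentials = "ivan:mysecret password"
token = base64.b64encode(credentials.encode()).decode()
print(f"Authorization: Basic {token}")
# -> Authorization: Basic aXZhbjpteXNlY3JldCBwYXNzd29yZA==
```

Which is also a reminder that Basic is encoding, not encryption: anyone who can read the header can read the credentials, so it should only ever travel over TLS.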


In the Tasks tab are a number of Token and WS-Security functions. Looking first at Identity Tokens, you can select from the wide variety supported.


I selected SAML 2.0 Token, popular with today's mobile applications. Once added, you can configure the token by selecting the spanner icon next to it, and activate it by ensuring it shows a green dot. Here is a screenshot of just the first tab of the SAML token configuration. As you can see, the options need to be extensive.


A second group of tasks is the WS-Security tasks. Here you can encrypt and decrypt the message with various keys and options. This enables testing of HTTPS and other secure services using the same test cases developed for functional testing.


Once added, you again configure it by selecting the spanner icon, and activate or deactivate it by toggling the green dot.


The same WS-Security settings are available in the response section, to encrypt or decrypt the response.


Conclusion

Integrating identity, authorization and encryption into your automation test cases is essential if you wish to do any kind of continuous or regression testing, especially if you want to use your test cases after release, where without these features the test cases you developed would not work in a production environment.

This tutorial did not show real examples, but I wanted to highlight where to go and what some of the options are for testing authentication, identity and encryption, without blacking out my secret keys to look like some three- or four-letter government censorship organization got to the pictures first. I hope you find it useful in getting started. Comments?