
Robust API

In the Data Economy, the currency is information. The default method of accessing information today is by developing and exposing Web Services APIs as Providers of information. Most applications are developed to Consume more than one API, increasingly from more than one location, source and even organization. SaaS, cloud services, supply chains, payment clearing, shipping information and social media are all examples that rely on APIs. A good quality API is essential for success in the Data Economy, and corporations need to define an approach to API quality in much the same way as they would any other product quality. ST3PP refers to this need for a strategy to ensure Robust and Sustainable APIs.

The traditional approach to development architecture tightly coupled web service development with application client development. Business gave requirements, and developers created the services to expose them. Developers then built the client UI to access these services. When they felt they were “done”, they passed it to quality assurance, which tested the “application” as a whole via the client UI, often manually entering keys into the client UI’s fields in an attempt to verify functionality. If QA identified anything, it usually went back to development to “fix”, and development decided whether it was easier to fix the service or the client. The next time business issued new requirements, the entire process started again.

In the Data Economy, the client application needs to be treated independently of the Web Service API. APIs are designed as re-usable components that stand independently of any client or other application that may Consume them. Each API Provides some portion of information, which the client application may consolidate or refine. These APIs could come from multiple locations, organizations or delivery models: SaaS, BYOD, cloud, Open APIs etc. APIs are no longer something IT deals with, but a core business asset, differentiating one organization from the next in a competitive information-based economy. Better APIs = better ability to establish corporate value in the economic chain. To get the most from API assets, a new approach to development and QA is needed. APIs need to be treated independently, like an end product. Developing APIs to Provide information to as-yet-unknown future Consumers requires that APIs be Robust.

1) Functional Testing.

In the Data Economy, the need for each field in each API to be functional still exists. Since APIs are no longer developed for a particular client, an independent method of testing the API is needed to ensure no functionality, format or other limitation exists in the API itself. Automated testing using the broadest possible data sources can further ensure Robustness.
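
As a rough illustration, a data-driven functional test can be sketched in a few lines of Python. The endpoint, field names and CSV layout below are hypothetical, chosen only to show the pattern of exercising the API directly rather than through a client UI:

```python
# Minimal sketch of data-driven functional testing: every field of the API is
# exercised against the broadest practical data set, independent of any client UI.
# The endpoint and field names here are hypothetical.
import csv
import requests

API_URL = "https://api.example.com/customer"  # hypothetical endpoint

def run_functional_suite(data_file: str) -> None:
    with open(data_file, newline="") as f:
        for row in csv.DictReader(f):  # one test case per CSV row
            resp = requests.post(API_URL, json=row, timeout=10)
            # The service, not a client UI, is the unit under test:
            # assert on the status code and on each returned field.
            assert resp.status_code == 200, f"{row}: HTTP {resp.status_code}"
            body = resp.json()
            assert body.get("name") == row["name"], f"{row}: name mismatch"

if __name__ == "__main__":
    run_functional_suite("customers.csv")
```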

2) Compliance Testing

Developing an API for unknown Consumer applications requires that the API meet certain standards, to avoid versioning based on client applications. Testing of the API needs to include its compliance with accepted standards, in order to ensure that a new Consumer, perhaps a new native smartphone application, will operate the same way a web browser client in Chrome does, or another server refining the information.
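
One common way to automate this kind of contract checking is to validate live responses against a published schema. A minimal sketch, assuming the third-party jsonschema package and an illustrative endpoint and schema:

```python
# Sketch of contract/compliance checking: validate a live response against the
# published schema so any future Consumer sees the same contract.
# Requires the third-party "jsonschema" package; endpoint and schema are illustrative.
import requests
from jsonschema import validate, ValidationError

ADDRESS_SCHEMA = {
    "type": "object",
    "required": ["name", "phone", "postalCode"],
    "properties": {
        "name": {"type": "string"},
        "phone": {"type": "string"},
        "postalCode": {"type": "string",
                       "pattern": "^[A-Z][0-9][A-Z] [0-9][A-Z][0-9]$"},
    },
}

resp = requests.get("https://api.example.com/address/42", timeout=10)  # hypothetical
try:
    validate(instance=resp.json(), schema=ADDRESS_SCHEMA)
    print("response complies with the published schema")
except ValidationError as err:
    print("compliance failure:", err.message)
```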

3) Security Testing

A Robust API needs to be a secure API, independent of the client application Consuming it. SQL injection, cross-site scripting, improper key or session management and the other OWASP Top 10 vulnerabilities need to be tested for. “Cloud” identity structures like WS-Security, SAML and OAuth, along with key management, become key components of testing for Robustness. Additional information leakage through APIs, via “forgotten” exposed information fields and metadata, can be filtered using a governance gateway.
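
As a very rough sketch of what a negative security probe looks like (no substitute for a dedicated scanner), the following sends a few classic OWASP-style payloads into a single hypothetical input field and flags responses that leak error details:

```python
# Rough sketch of a negative security probe: send classic OWASP-style payloads
# into one input field and flag responses that leak database errors or markup.
# Illustrative only; not a replacement for a proper security scanner.
import requests

API_URL = "https://api.example.com/search"  # hypothetical endpoint
PAYLOADS = [
    "' OR '1'='1",                 # SQL injection probe
    "<script>alert(1)</script>",   # cross-site scripting probe
    "../../etc/passwd",            # path traversal probe
]
LEAK_MARKERS = ("SQL syntax", "ORA-", "Traceback", "<script>")

for payload in PAYLOADS:
    resp = requests.get(API_URL, params={"q": payload}, timeout=10)
    leaked = [m for m in LEAK_MARKERS if m in resp.text]
    if resp.status_code >= 500 or leaked:
        print(f"possible vulnerability with payload {payload!r}: "
              f"HTTP {resp.status_code}, leaked markers {leaked}")
```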

4) Performance and Scalability

Performance and scalability are not only a function of hardware, but of location, encryption, message signing, network, wait times, retries, load throttling and many other application design criteria. An application that Consumes information from a variety of APIs on different networks, managed by different teams, requires additional hardening to ensure performance and scalability. How long should I wait if one API is not available? After how long do I request a resend? What if someone is on a poor-quality mobile network, how would that affect my performance? What if I required a higher level of encryption? How many concurrent clients can I support with my current infrastructure? What if I split servers or added a second location?
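
Several of these questions become concrete configuration in the Consumer. A minimal sketch, using the urllib3 retry support bundled with Python's requests library; the endpoint and values are illustrative:

```python
# One way the wait/retry questions become concrete configuration.
# Uses the urllib3 Retry support shipped with requests; values are illustrative.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(
    total=3,                            # how many times do I resend?
    backoff_factor=2,                   # exponential backoff between attempts
    status_forcelist=[502, 503, 504],   # retry only on gateway/overload errors
)
session.mount("https://", HTTPAdapter(max_retries=retry))

# How long should I wait if one API is not available? -> explicit timeouts:
# (connect timeout, read timeout) in seconds.
resp = session.get("https://api.example.com/rates", timeout=(3, 10))  # hypothetical
print(resp.status_code)
```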

Visionary organizations have started by creating an “Information” or “Data Management” executive to extract value from corporate information for the Data Economy. This involves treating APIs as we would any application core to the corporation’s success. Poor quality APIs limit access and make extracting value from data near impossible. These executives need to ensure that business, development and QA structure the right process and approach to creating more Robust and Sustainable APIs.

8. SOAPSonar – Identity and Authorization

Many of our customers start their development or testing using some other product, then reach a point where they need to use some form of identification, cookie, key, encryption or other form of authorization, and realize the tool they are using does not support what they need. They then weigh the months of work completed against starting over with a new tool.

There are simply so many ways, standards and architectures for identity that most tools are unable to support more than a few. Before selecting an automation tool, I highly recommend taking the time to identify and test all the applicable identity and authorization needs you may have. Although this is a “tutorial”, I am not going to cover all possible options, or full detail on a single option. Rather, I will try to explain some of the more common options and where to find these settings.

In this first example I want to show the testing of a login service. In this (REST) example a POST request is made to a given URI and the message body contains the username and password variables. Once successfully logged in, the service responds with a Token and ID that are used further in the application. These username and password variables can be tested via an automation data source (.csv), and each step of the login process chained by creating a runtime variable from the token response. The benefit is then the ability to load test this login service using this [ADS].
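
Outside any particular tool, the same chained login test can be sketched in code. The endpoints, field names and CSV columns below are hypothetical; the point is the pattern of capturing the token as a runtime variable and reusing it in the next step:

```python
# Sketch of a chained login test: post credentials from a data source, capture
# the token as a runtime variable, then reuse it in the next request.
# Endpoints and field names are hypothetical.
import csv
import requests

LOGIN_URL = "https://api.example.com/login"      # hypothetical
PROFILE_URL = "https://api.example.com/profile"  # hypothetical

with open("credentials.csv", newline="") as f:   # the automation data source (ADS)
    for row in csv.DictReader(f):                # columns: username,password
        login = requests.post(LOGIN_URL, data={
            "username": row["username"],
            "password": row["password"],
        }, timeout=10)
        assert login.status_code == 200, f"login failed for {row['username']}"
        token = login.json()["token"]            # the chained runtime variable

        # Use the captured token in the next step of the workflow.
        profile = requests.get(
            PROFILE_URL,
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        assert profile.status_code == 200
```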

1 Message body

Sometimes identity needs to be in the header. In the request section of SOAPSonar, at the bottom, is a number of tabs. By clicking either the keys icon or the Authentication tab, you can see a number of options for configuration. Here you can find Basic, Kerberos or Digest Authentication settings. You can also set up returned cookies and SSL certificates to be embedded as part of the header message. (A reminder at the bottom says: For WSS-Token SOAP Header Authentication, use the Tasks tab.) In the screen below, I selected Basic and entered my username and password.

2 Encryption

When I look at the message header request I sent, I can see the line Authorization: Basic aXZhbjpteXNlY3JldCBwYXNzd29yZA== added to the header.
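
That string is not encryption: Basic authentication is simply Base64 over “username:password”, so decoding the header above recovers the credentials directly. A short demonstration:

```python
# The Authorization header for Basic auth is just
# "Basic " + base64("username:password"), so it is trivially reversible.
import base64

credentials = "ivan:mysecret password"
header = "Basic " + base64.b64encode(credentials.encode()).decode()
print(header)   # Basic aXZhbjpteXNlY3JldCBwYXNzd29yZA==

# Which is why Basic auth must always travel over TLS:
print(base64.b64decode("aXZhbjpteXNlY3JldCBwYXNzd29yZA==").decode())
# ivan:mysecret password
```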

3 Request

In the Tasks tab is a number of Token and WS-Security functions. Looking first at Identity Tokens, you can select from the wide variety supported.

4. Tasks

I selected SAML 2.0 Token, popular with today’s mobile applications. Once added, you can configure the token by selecting the spanner icon next to it, and activate it by ensuring it shows a green dot. Here is a screenshot of just the first tab for SAML token configuration. As you can see, the options are extensive.

5. Options

A second group of tasks is the WS-Security tasks. Here you can encrypt and decrypt the message with various keys and options. This enables testing of HTTPS and other secure services using the same test cases developed for functional testing.

6. Token

Once added, you again configure it by selecting the spanner icon, and activate or deactivate it via the green dot.

7 PKI

The same WS-Security settings are available in the response section, to encrypt or decrypt the response.

8 response

Conclusion

Integrating identity, authorization and encryption into your automated test cases is essential if you wish to do any kind of continual testing or regression testing, especially if you plan to use your test cases after release: without these features, the test cases you developed would not work in a production environment.

This tutorial did not show real examples; rather, I wanted to highlight where to go and what some of the options are for testing authentication, identity and encryption, without blacking out my secret keys to look like some three- or four-letter government censorship organization got to the pictures first. I hope you find it useful in getting started. Comments?

Reducing Scope – the Amount of Software Testing

In the example used at TASSQ, and in our More Detailed Look at Service Plan Costing, we used a fixed number of test cases (50,000). At the TASSQ event, many immediately wanted to discuss ways to reduce the number of test cases. This is just as important an aspect as streamlining the test process itself, but due to time constraints I decided to leave it till now. So, what kind of things can be done to reduce the sheer scope of testing needed?

This is too long for a single post, so over the next few weeks I will build out each area. At a high level, we have 4 key areas to focus on:

  1. SDLC Strategies
  2. Test Iteration Strategies
  3. Test Cycle Strategies
  4. Maintenance Strategies

1. SDLC Strategies

What is your corporate mandate? Is this a free, best-effort internet service, or do errors carry potentially huge financial risks? How long is this software expected to be in use (until the next release?), and how mission-critical is it to your business? The way we approach testing should reflect our business needs.

The second aspect is more architectural. I have already published a post on API Versioning Strategies. How the service and the client are designed and managed in the SDLC can greatly impact the amount of testing needed.

2. Test Iteration Strategies

This looks at ways to reduce the number of tests, or the effort required, in each test iteration. Can you share the same test case for functional and performance testing? How can you ensure that development did fix the issues in the last release? Do you really need to retest code that was not changed?

Strategies here vary a great deal depending on whether you are using Agile, Waterfall or some other development methodology.

3. Test Cycle Strategies

This area looks at ways to reduce the number and complexity of individual tests. A lot of this has to do with the desired percentage coverage, but automation, data sources and regression are all aspects.

4. Maintenance Strategies

Far too often we focus on getting the software into production, yet it is generally accepted that testing during the maintenance cycle can account for well over half of testing costs. This is about automation, regression and continual-testing strategies that can reduce maintenance testing costs or coverage.

I can’t poll the audience here, but as usual, we share and learn. So if you have thoughts or suggestions on the subject of reducing the amount of testing required, please let me know.

Calculating Percentage Coverage

I wanted to discuss some confusion around percentage Test Coverage. I have noticed that different organizations calculate test coverage very differently. This can be very confusing when using contractors, off-shoring or simply comparing best practices. Let’s say you have a simple service that returns Name, Phone Number and Address, and you are asked to create test cases for 100% Test Coverage. What exactly does that mean?

Would a simple unit test entering the following be considered 100% coverage?

  • Bob Smith
  • 555.555.5555
  • 55 street
  • City
  • QC
  • M5M 5L5

Or would you need to break the service down into each of its functions (Name, Phone Number, Street, City, Province, Postal Code), testing each of these independently?

But how many test cases do you need to perform before you can consider it 100% coverage? Let’s take postal codes. Would a single postal code be considered 100% coverage? Or would you need one for each of the 18 valid starting letters (Y, X, V, T, S, R etc.)? Perhaps you require some random number, say 10 or 100 postal codes? Or do you need to enter every defined Canadian postal code? Now consider testing the name function: how long a name does the app need to support, how many names can a person have, what if we include a title, what if the person’s name has a suffix?

What about negative scenarios? Do you need to test a postal code that does not exist, or one in the wrong format, before you can consider the test coverage to be 100%? With a space after the first 3 characters, without a space, or with a hyphen? What about letter-number-letter, all letters, all numbers or some other possible combination? How many of these negative scenarios does one need to run to say you covered 100%?

What about testing these functions as they relate to each other, or as this service relates to other services? Do you need to test that a postal code starting with the letter V is not used for a city that resides in Quebec? Do you need to confirm that this address service, when used in one chained request, responds the same way as when used in another? So often I hear of companies unit testing services as they are developed, but never running a final end-to-end systems and integration test. What if one service requires the postal code to have a hyphen and the other a space?

Understand that if your organization is manually testing a service, entering even 18 postal codes will take significant time, directly impacting costs. Entering all positive and negative scenarios, including chained services, is just not feasible. Does increasing the number of test cases actually affect the percentage coverage, or is a single test case enough? All the possible boundaries for a simple service like postal codes could result in a large number of tests. Does testing each service once, without considering all the boundaries and negative scenarios, constitute 100% coverage? More importantly perhaps, when QA testers give a percentage coverage, does it really mean the same thing to everyone?
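
To make the question concrete, here is a small sketch of what a boundary-plus-negative test table for just the postal code function might look like; the validator regex is illustrative only:

```python
# Sketch of how "how many cases is 100%?" turns into a concrete test table
# for the postal-code function. The validator is illustrative only.
import re

# The 18 valid first letters of a Canadian postal code.
POSTAL_RE = re.compile(r"^[ABCEGHJKLMNPRSTVXY]\d[A-Z] \d[A-Z]\d$")

POSITIVE = ["M5M 5L5", "V6B 1A1", "Y1A 0A1"]  # one each? all 18 letters? every code?
NEGATIVE = [
    "M5M5L5",    # missing the space
    "M5M-5L5",   # hyphen instead of space
    "55M 5L5",   # digit where a letter belongs
    "Z1A 0A1",   # Z is not a valid first letter
]

for code in POSITIVE:
    assert POSTAL_RE.match(code), f"expected valid: {code}"
for code in NEGATIVE:
    assert not POSTAL_RE.match(code), f"expected invalid: {code}"
print("postal-code boundary cases pass")
```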

I would like to invite everyone to weigh in and share their thoughts on the subject. Please select an option below and comment if you will. So far the majority selected testing every function once, so I broke this into boundaries, positives and negatives to see if we can get further clarification.

***Please note: the form is submitted privately and is not automatically published. If you wish your response published, use the comment link at the end of any post.***

Poll: What does your Organization consider 100% Test Coverage?

  • Whatever We have Time for
  • One Test for Each Service
  • Test Each Function of the Service only once
  • Boundaries for Each Function
  • Both Positive and Negative Boundaries for Each Function
  • All/Many (Data Source) in Chained Workflow

7. SOAPSonar – Baseline and Regression Testing

We have all had the experience where “someone” decided to make a slight “tweak” to some code, then promptly forgot to mention it to anyone else, or at least to the right someone else. That slight tweak causes some change, expected or not, that leaves other parties spending hours trying to trace the cause.

One of the key benefits of automation is the ability to identify any changes to XML by doing an XML diff: comparing one version (the Baseline) to another (the Regression Test). With web service APIs, we are interested in the Requests and Responses, to ensure that they are not different, or rather that only expected differences are there. We need the flexibility to ignore some parameters when they are expected to be different each time. Take for instance a file reference number, or a service that returns the time. We may want to check that the fields for Hour, Minute, Day, Month, Year etc. remain unchanged, but accept changes to the values in those fields, or limit them to certain parameters. Establishing what to check against the baseline and what not to is an important part of regression testing.
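
The core idea can be sketched in a few lines: walk two XML responses node by node, but exclude values that are expected to differ on every run. The tag names below are illustrative:

```python
# The core of a baseline regression rule: compare two XML responses node by
# node, skipping values we expect to differ on every run. Tags are illustrative.
import xml.etree.ElementTree as ET

IGNORE_TAGS = {"Timestamp", "FileReference"}   # expected to change each run

def xml_equal(a: ET.Element, b: ET.Element) -> bool:
    if a.tag != b.tag:
        return False                           # structure must not change
    if a.tag not in IGNORE_TAGS and (a.text or "").strip() != (b.text or "").strip():
        return False                           # values must match unless excluded
    kids_a, kids_b = list(a), list(b)
    return len(kids_a) == len(kids_b) and all(
        xml_equal(x, y) for x, y in zip(kids_a, kids_b))

baseline = ET.fromstring("<r><FileReference>001</FileReference><Sum>6</Sum></r>")
current  = ET.fromstring("<r><FileReference>002</FileReference><Sum>6</Sum></r>")
print(xml_equal(baseline, current))   # True: only the excluded node changed
```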

Here is a very simple baseline and regression test, using the SOAP Calculator service.

1. Run SOAPSonar (with Admin Rights). Paste

http://www.html2xml.nl/Services/Calculator/Version1/Calculator.asmx?wsdl

into the Capture WSDL bar. Select Capture WSDL.

2. Let’s use the Add service. Select Add_1 and enter a=3 and b=3. Commit, Send. Hopefully your response was 6. If not, perhaps I could suggest some services? Rename it Baseline.

2 Project View

3. Now let’s select Run View and drag Baseline into the DefaultGroup. Then select the icon Generate New Regression Baseline Response Set.

3 Generate

4. Select XML Diff of Entire Baseline Response Document. This option matches both nodes and values. Select OK (Commit and Send if you need to).

4 XML diff

5. After the test is run, you will see the Test Suite Regression Baseline Editor. This is where you can select what you wish to watch or ignore. A base rule is generated automatically. If you select Index 1, you should have 1 rule, XPath Match. If you select XPath Match, you should see all the nodes graphically laid out for you. At the bottom you have 3 tabs: Baseline Criteria, Baseline Request (Captured Request) and Baseline Response (Captured Response). For now, let’s not change anything and just select OK.
5 Baseline editor

6. Let’s go back to Project View and change our b to 9. The response should now be 12. Commit and Send to check. Then select Run View and change the Success Criteria to Regression Baseline Rules (see cursor). Commit and Send. This time, did your Success Criteria Evaluation fail? It should, as it was expecting 6 as a response and not 12. Analyse the results in Report View.

6. Run baseline

7. If you now select the failed test case and then select the Success Criteria Evaluation tab, you see that the Regression Baseline XML Node and Value Match failed, and that it was the AddResult value.

7. Failed

8. Select Generate Report, then [HTML] Baseline Regression XML Diff Report, and generate the report. Then view the report and select Response Diff for Index 1. One change is found, and you can clearly see it marked in red.

8 report

9. Now let’s ignore the response value, but maintain regression for the rest of the test case. Select Run View, then Edit Current Baseline Settings.

10 ignore

10. You should be back in the Test Suite Regression Baseline Editor. Select Index 1, then your rule, and right-click on AddResult in the visual tree. Select Exclude Fragment Array. It should now show in red as excluded. Select OK, Commit and Send. Your regression test should now pass, as everything but that value is still the same.

9 change

Conclusion

Automation of regression testing is far more than running an XML diff. It involves selecting which aspects are expected to change and which are not. By eliminating expected changes, any failures in future regression tests can receive the focus they deserve. Once automated, this can be run hourly, daily, weekly or as needed, consuming little to no human interaction. Many of our customers maintain a baseline and consistent regression test on 3rd-party code: any service their systems rely on whose development cycle they have no visibility into, continually tested through an automated process to ensure they are aware of any changes to the code.

Questions, Thoughts?

6. SOAPSonar – Report View

An important reason for automation can be the time saved over generating manual reports: comparing expected results with actual on a case-by-case basis, sorting through this data to combine it in a meaningful way, and making sense of pages of XML in an attempt to filter out the few key issues. One of the top 5 issues shared with me is false positives: QA reporting an issue that was either incorrectly diagnosed or not considered an issue. Much of this can be a mistake in what was entered, but just as often it is a mistake in understanding the expected behaviour.

SOAPSonar Test Cycle

So let’s take a look at SOAPSonar’s Report View, carrying on from the previous Tutorial #5, defining success criteria.

1. Let’s start this time with the JSON Google Maps service. In QA Mode, Project View, please confirm you have the service and the [ADS], and that the service works. Switch to Run View, clear any tests under the DefaultGroup and drag over only the Google Maps test case we did in Tutorial 5.

1 Starting out

2. Look at the area to the right. In QA Mode there are 2 tabs, Suite Settings and Group Settings. (If you switch to Performance Mode, there are 3 different tabs.) In QA Mode, let’s change the Result File name to Tut6_1.xml and leave the location at C:\Program Files\Crosscheck Networks\SOAPSonar Enterprise 6\log\QA. We have not captured a baseline for regression testing, so select Test Case Success Criteria. For Result Logging, select Log All Results, Verbose and Use Optimized XML Logging. Warning: logging all results verbosely in large test environments can greatly affect performance and seriously load any workstation; we usually recommend logging only failures and errors. Leave HP Quality Center and the other options unselected.

2. Verbose

3. On the Group Settings tab, let’s leave things at default. Here you can define a data source table to run through multiple tests. Commit and Run Suite.

4. The Realtime Run Monitor shows that I ran 6 test cases, of which 4 failed and 2 passed. Select Analyse Results in Report View at the top of the page.

4. Realtime

5. In Report View, we can now see on the far left, under Today, the Tut6_1.xml log file at the location we entered in step 2. Right-clicking on the file allows you to Export Request and Response Data as Files or Export Results to Normalized XML. Select Export Results to Normalized XML and you have the details of each test case run.

5 Log files

6. In the main section you can see the first 2 tests are green and the rest red. Select the first test, and the Test Iteration Summary and Request/Response tabs populate with what was sent and received. Notice that the exact time and size of each message is also reported, along with the response code.

6 First Test

7. Select Success Criteria Evaluation, and you can see the response time and the exact match rule we created; both were a success.

7 Success Criteria

8. If we select the 3rd line (the first to fail), we see that it used the ADS Index (3rd row), the independent response time, and that the response code was 200 (normally a pass). By selecting the Success Criteria Evaluation tab, we can see that the exact match success criteria failed. That is, the csv value we were expecting was different from what we received.

8. first failed

9. We can generate a number of PDF reports via the drop-down menu at the top of the page. These reports can also be exported in a variety of formats. Take a look at a few.

9 reports

10. Running the same test in Performance Mode rather than QA Mode generates an alternate set of reports. The same is true if it is run against a baseline.

These results can also be published to HP Quality Center.

Conclusion

Between management-level reports and detailed request and response logs, reporting can consume a lot of time. Testers tend to focus on how long it takes to run a manual test versus automating the test cycle, often forgetting the time taken to generate reports in a manual testing environment, and how long it takes to capture and supply the required information with any issue. The ability to supply the test case and log is key to troubleshooting issues, and it is the first thing we ask our customers for when they have any technical support questions. That is because automation enables the exact repeat of the same test case, and therefore any issue should be simple to replicate.

Comments?