
3 Ways to Get Started with CloudPort – Using WSDL

This is part 3 in this series. In the first we captured a service using CloudPort’s proxy feature, and in the second we created a service mostly by copying and pasting. In this tutorial we use a SOAP service that has a WSDL. SOAP services currently differ from REST-based services in that they include a WSDL that defines the structure and parameters of the message header and body. In this case, I will use a simple temperature conversion SOAP service.

1. Start CloudPort and select Build Simulation. In the Capture WSDL bar, enter the URL followed by ?wsdl, or in our case paste

http://www.w3schools.com/webservices/tempconvert.asmx?WSDL

1. Capture WSDL
2. Change the Network Listener Policy. In my case I am just changing the port to 8888, leaving it on the local machine, and committing.

2 listener

3. Now we can define any specific Request or Response values. You could build a VBScript, create a random variable, or use any of a number of ways to match requests to responses. In this case, our test coverage is 3 values (the quick check after the list reproduces them):

  • 0 Fahrenheit response expected -17.7777777777778
  • 72 Fahrenheit response expected 22.2222222222222
  • 100 Fahrenheit Response expected 37.7777777777778
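If you want to verify those expected Celsius values yourself, the standard Fahrenheit-to-Celsius formula reproduces them. A minimal sketch in plain Python, not part of the CloudPort setup:

for f in (0, 72, 100):
    c = (f - 32) * 5 / 9
    print(f, "F ->", c, "C")   # -17.777..., 22.222..., 37.777...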

So let's start by cloning the FahrenheitToCelsius simulator 3 times and renaming the clones 0, 72, and 100.

3. Clone

4. Now we need to change the rules for inbound documents. The default rule recognizes any value. Right-click on the actual value and select Add XML Element Value Criteria, Contains Value, for each of the 3 cloned services.

4. Rule 1

5. Selecting the rule now allows for entering the value. Enter 0, 72, and 100 in the respective cloned services and select Exact Match.

5. Rule

6. Now select the Response tab and enter the desired response value for each of the 3 cloned simulators.

6 Response

7. If you want to test Celsius to Fahrenheit you can repeat this process.

8. Now launch your created simulation by selecting the green arrow icon, Start Local Simulation. The free Simulation Player will launch and show you the URI and the rules available. Select the Copy Generated WSDL Link icon. This is the listener we set up in step 2.

8 copy

9. Now, in SOAPSonar, paste that WSDL link into the Capture WSDL field and the 2 services are shown in the project tree.

10. Now test the 3 values listed in step 3. Do you get the expected responses?

9 soapsonar
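As a quick check outside SOAPSonar, you can also post a request straight at the simulator. The sketch below is only illustrative: the endpoint path is a placeholder for the listener URI shown in the Simulation Player, and the element names and namespace should be verified against the WSDL you captured in step 1.

import requests

ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <FahrenheitToCelsius xmlns="http://www.w3schools.com/webservices/">
      <Fahrenheit>{value}</Fahrenheit>
    </FahrenheitToCelsius>
  </soap:Body>
</soap:Envelope>"""

# Placeholder endpoint: substitute the listener URI the Simulation Player displays.
for value in ("0", "72", "100"):
    r = requests.post(
        "http://127.0.0.1:8888/webservices/tempconvert.asmx",
        data=ENVELOPE.format(value=value),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    print(value, "F ->", r.status_code, r.text[:120])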

SOAPSonar – Continuous Testing ideas

In this example, I am going to use a success criteria rule to monitor a specific field value in a given response, but there are many possible ways to implement continuous testing. To have a little fun, I am going to use a JSON weather service provided by http://api.openweathermap.org. This being weather, and this being Canada, there should be plenty of change.

Let's say you are only interested in knowing whether rain is in the forecast for the next day. Let's first set up a success criteria rule that fails the test case should there be rain in the forecast. If you have not done the introductory tutorial on Success Criteria, please do so first.

1. File, New, Test Group, then New JSON Test, and name it. Paste the following into the URI field:

http://api.openweathermap.org/data/2.5/forecast/daily?q=Toronto&mode=jsonl&units=metric&cnt=1

and Set the method to GET. Commit and Send.

1 URI

2. All we want to know is whether there is rain in the forecast. Select the Success Criteria tab, Add Criteria Rule, and XPath Match.

2. Xpath

3. Now let's edit our XPath Match rule. Selecting it shows the Graphical XPath view. Scroll down until you see weather, main, select the value, then right-click and choose Compare Element Value.

3 Element

4. Select the Criteria Rules tab. Change the Force Action to Force Fail and the Match Function term to Rain. We now have a criteria rule that will fail the test case if rain is in the forecast. Drag it over to the Run View and Send. Did it fail? Is there rain in the forecast?

4. Rain
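Outside SOAPSonar, the logic behind this success criteria rule can be sketched in a few lines. This assumes the forecast response shape documented by OpenWeatherMap (list, then weather, then main); note that the API now requires an appid parameter, which this tutorial predates.

import requests

url = "http://api.openweathermap.org/data/2.5/forecast/daily"
params = {"q": "Toronto", "mode": "json", "units": "metric", "cnt": 1}  # add "appid": "<key>" if required
data = requests.get(url, params=params).json()

# Collect the "main" condition of each forecast entry and fail if any of them is Rain.
conditions = [w["main"] for day in data.get("list", []) for w in day.get("weather", [])]
print("FAIL: rain in the forecast" if "Rain" in conditions else "PASS: no rain expected", conditions)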

5. Now, if we automate the test case and it fails, it should mean rain. So in Run View, select Create Command Line Script (see pointer). You need to save the file first.

5 script

6. Now let's Generate a Report (One Page), call it rain, and select Email Report with the right address. We then need to define our SMTP email settings. On the second page, fill in the details for your email server and send a test email.

6 automate

7. Lastly, let's schedule this as a Task using the Windows Scheduler. Fill in the details and click OK. If you want, you can go into Windows Scheduler to edit the task further; the test case uses the standard Windows Scheduler. On manually running it, I get a PDF report in my mailbox as an attachment.

7 Windows Scheduler
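Under the hood, this step simply registers a standard Windows Scheduled Task that runs the command-line script generated in step 5. A hand-rolled equivalent, using a hypothetical path to that generated script, would look roughly like this:

import subprocess

# Register a daily 07:00 task that runs the generated batch file (path is hypothetical).
subprocess.run(
    ["schtasks", "/Create", "/TN", "RainForecastCheck",
     "/TR", r"C:\Tests\rain_check.bat", "/SC", "DAILY", "/ST", "07:00"],
    check=True,
)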

8. Note that I could also set up a Task in the response section to send me an email with a personalized note.

8 Task

This rather silly example shows that you can automate a test case to run as frequently as you want, watching for a certain value in a certain part of the response that you have defined as a success criteria rule. That could just as well be a response time, a validation code, or any other parameter, and need not be rain.

Questions, or Comments?

Conditional Test Cases

A recent customer request was for a decision tree in an automation test. This is a more advanced tutorial, showing the use of global variables and decision trees.

Some reasons for wanting a decision tree could be as simple as saying: if a test fails, automatically run an additional set of tests. Or more complex: if a test returns a given value in a given field, run an additional or different test.

Let's use the iTunes JSON service for this tutorial. Here I want to get a list of albums for an artist using the search feature.

1. Start SOAPSonar and let's create a new project. File, New, Test Group, then File, New, JSON Test Case. Create 3 JSON test cases (you can clone them). Name them Search, Lookup A, and Lookup B.

1 Create test cases

2. Let's enter the queries. What I am going to do is run a search first, then use the artist ID to get a list of albums for that artist:

  • Search: use http://itunes.apple.com/search?term=arctic+monkeys as the query and GET as the method
  • Lookup A: use http://itunes.apple.com/lookup?id=62820413&entity=album and GET as the method
  • Lookup B: use http://itunes.apple.com/lookup?id=5893059&entity=album and GET as the method

2. URI

3. We need to define a global variable. Go to Policy Settings (in the project tree), Project Globals, Project Global Variables, and enter artist=1 (some initial value). We have just defined a global variable called artist.

3. Global Variable

4. Now, in Search, we define a runtime variable for artistId. Look in the Response section under Runtime Variables and scroll down until you see the artistId value. It should be 62820413. Right-click on the value and add a variable reference. Leave the name as artistId and accept.

4 rt variable
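Conceptually, the runtime variable is just pulling artistId out of the JSON search response. A rough scripted equivalent, assuming the field names of the iTunes Search API and that the first result carries the artistId:

import requests

resp = requests.get("http://itunes.apple.com/search", params={"term": "arctic monkeys"}).json()
artist_id = resp["results"][0]["artistId"]   # expected to be 62820413 for this search
print(artist_id)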

5. Now we need to update our global variable with the runtime variable. Select the Tasks tab, then Actions, Update Global Variable.

5 Global

6. Now let's select the variable. Edit the Task created in step 5, select artist, then right-click, choose [RV] Runtime Variable, and find and select artistId.

6 runtime

7. It's time to define our test case in Run View. Drag Search under the Default group. Right-click on the Search test and select Add Conditional Test Group.

7 Runview

8. Now we define the condition. Select the Conditional Tests folder, then drag Lookup A and Lookup B under it. Select Global Variable Match, enter artist (our global variable), and paste 62820413 as the value for Lookup A and 5893059 as the value for Lookup B. Commit and send.

8 Conditional

9. You should see that 2 test cases were run. When we look at the results in Report View, they are Search and Lookup A.

9 first scenario

10. Now let's change the search test query from

http://itunes.apple.com/search?term=arctic+monkeys

to

http://itunes.apple.com/search?term=the+black+keys

Commit, then switch to Run View and Run Suite. This time the second test run was Lookup B.

10 alternate
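For reference, the whole decision tree behaves like the sketch below: run the search, update the artist variable from the response, then run only the lookup whose condition matches. This is a plain-code illustration of the concept, not how SOAPSonar implements it internally.

import requests

# "Global variable" artist, updated from the search response's runtime value.
artist = requests.get("http://itunes.apple.com/search",
                      params={"term": "the black keys"}).json()["results"][0]["artistId"]

LOOKUPS = {
    62820413: "http://itunes.apple.com/lookup?id=62820413&entity=album",   # Lookup A (Arctic Monkeys)
    5893059:  "http://itunes.apple.com/lookup?id=5893059&entity=album",    # Lookup B (The Black Keys)
}

if artist in LOOKUPS:
    albums = requests.get(LOOKUPS[artist]).json()
    print("Conditional test run for artist", artist, "- albums returned:", albums.get("resultCount"))
else:
    print("No conditional test matched artist", artist)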

Any questions or comments?

3 Ways to Get Started with CloudPort – Create

In the first in this series, we used an iTunes JSON service and made a request via SOAPSonar while capturing the request and response using CloudPort’s proxy feature. This is great when your service is developed and available, but what if the service you need does not yet exist or needs to be altered?

Since I am not developing a service, I am going to use SOAPSonar to make a request of the real service and get a response. Then I will simply copy this response and paste it into CloudPort (with a small edit) and set up the right listener policies.

1. We are going to do a second iTunes JSON GET to list the albums for a given artist. Starting with SOAPSonar (you can use the same project or start a new one), let's create two new JSON tests. Rename the first alt-J Albums and the second Black Keys Albums. The queries are

  • alt-J Albums: http://itunes.apple.com/lookup?id=558262494&entity=album with GET as the method
  • Black Keys Albums: http://itunes.apple.com/lookup?id=5893059&entity=album with GET as the method

If you are wondering where these came from, I read the service description document and took the ID values from the artistId field in the responses of the previously used search service. Commit and Send both. Now we have the requests and responses.

1 soapsonar query

2. Open up CloudPort. You can create a new project or use the previous one. Right-click on Tests and create 2 new JSON Simulators. Rename them alt-J Albums and Black Keys Albums.

1 New

3. Now we need to establish the endpoints and queries, or listeners. Select Add Manual Rule, set the listener Target to URL, and paste /lookup?id=558262494&entity=album in as the URL. Ensure it is set to Exists. Once you have created the new rule, delete the old catch-all rule. If you are wondering where this came from, it is everything we sent as a request after http://itunes.apple.com/

3 Simulator

4. Now do the same for Black Keys, this time setting the target URL to match /lookup?id=5893059&entity=album and Exists. Notice there are a number of ways to identify an incoming request and hence trigger a response.

4 simulater BK

5. If you are using a new project, you also need to set the Network Listeners. If you are using the same project as in the capture tutorial, this should already exist. Make sure yours looks as below, with Name, IP, Port 8888, and URI.

5. Network Listener

6. Now we need to set up the responses. There are two parts to each response: the header and the body. Go back to the real query in SOAPSonar and copy and paste the entire response header and body into the CloudPort Response tab. Then commit.

6 copy

7. Do the same for the second (Black Keys) service, for both the header and the body.

7 response

8. Now we could edit these, adding albums, changing the structure, or renaming things, but the bands may not be happy. So all I will add is a small change above wrapperType:

"simulated": "simulated using CloudPort", (make sure you include the trailing comma for correct JSON notation)

to both. 

11. New service
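After the edit, the top of each response body looks roughly like the fragment below. The field names follow the iTunes lookup response; the values shown are illustrative only, and the remaining album entries are truncated.

{
  "resultCount": 2,
  "results": [
    {
      "simulated": "simulated using CloudPort",
      "wrapperType": "artist",
      "artistName": "alt-J",
      "artistId": 558262494
    }
  ]
}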

9. Notice that if you look at the Response Runtime Variables tab, and your response is correctly JSON formatted, the graphical view will populate. If it is not, the response will still be sent, but you will not be able to use variables and the client may not understand your response.

9 graphical view

10. Let's test our newly created services. Launch the Simulation Player by selecting the green arrow icon. Notice your listener URI; we now have 4 simulated services.

10 simulation player

11. Now clone the two real SOAPSonar services and change the request URIs to point at the local simulation listener (from step 5) instead of itunes.apple.com.

Do you see the changes we made?

In this example we added only one line (or potential variable), but we could just as well have created an entire service from scratch. Here is the zipped create simulation.

4. REDUCING SCOPE – Test Cycle Strategies

In the service plan costing model, we set the number of test cases at 20,000. The first post in this series was an introduction to ways to try and reduce that number. This, the 4th in the series, follows SDLC Strategies and Test Iteration Strategies, and is part of ST3PP’s Best Practice Series aimed at developing better Tools, People and Process, looking at ways to improve QA efficiency in Canada in order to remain competitive at our higher wages.

Test Cycle

The easiest way is to drop something out of the testing: decrease the percentage coverage, don't write test plans, test only simple scenarios, or don't create reports. Rather than reducing coverage, let's look at how we can reduce the effort of running tests while maintaining quality.

1. Reuse

Reuse comes from the idea that if you create a test plan, execute it in a given environment, analyse the results, and create a report, then there should be no benefit to repeating the exact same test. On the other hand, if there is a code or environment change, that test (and only that test) should not need to be created again, but should be executed again and its results analysed and reported.

In our service plan costing model, we used 500 services, 10 functions per service, and 4 tests per function, making up the 20,000 test cases. Testing, however, should begin long before all the services are developed. Let's say 20% of the services are developed. The first test iteration would test only 100 services, not 500, which at 10 functions x 4 tests is 4,000 tests. You have great developers, and only 10% of those test cases show issues that development needs to address. Is there a benefit to running the other 90% of successful test cases again in the next release? Probably not, IF you are certain nothing has changed. When the next code drop happens, you can test just the 100 new services and the 10 changed ones. For simplicity, I never built this into the service plan costing model, in part because the impact depends on whether you are working in short Agile sprints or longer waterfall-based code drops. For accuracy though, these are good measurements and KPIs to track and build into your model.
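The arithmetic behind those numbers, as a quick check:

services, functions_per_service, tests_per_function = 500, 10, 4
full_suite = services * functions_per_service * tests_per_function    # 20,000 test cases
first_iteration = 100 * functions_per_service * tests_per_function    # 4,000 tests at 20% of services developed
print(full_suite, first_iteration)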

An organization's structure and process are not always rigid enough to know for certain that a developer did not touch something, and the risk is then missing an undocumented change. Enter automation.

2. Automation

If there is one thing I hear constantly about automation, it is that it can take longer to automate and maintain scripts than it does to test manually. Automation is not the cure for everything. For it to be beneficial, it requires changes to your process and approach. Creating the test plan, data sources, and success criteria usually takes longer with automation than with manual testing, but running the test and generating a report should be far quicker and "automated". This can mean that for a single test iteration or cycle, automation can take longer than manual testing. On the other hand, once created, a good automation tool will allow the test to be executed on different iterations and environments with little or no change, dramatically decreasing the effort. Not all tools are perfect, and redevelopment of the test cases for the smallest change in code, or new test scripts for performance or load, can negate the benefits quickly. This is not a fault of automation, but of the tools or process.

Time taken to create test cases aside, there are other issues automation addresses:

  • Regression testing (have you tried it without automation?). How exactly are you sure you are not missing an undocumented change?
  • How do you plan on continuously testing a service without automation?
  • Need to run through a thousand variables? How long will it take to execute those tests manually?
  • How long does it take to execute an automated test, once already developed, vs. a manual one?
  • What does executing (not creating) an automated test cost in man-hours?
  • What about testing after launch in the SDLC, will you do that manually?
  • Do automation QA testers really cost more?

To be fair to automation, one needs to compare the impact of automation on the entire SDLC, utilizing automation with a suitably redesigned process. If the effort is then not worth the reward, it's a lesson learnt. Not every project will be suited to automation.

Conclusion

A final note on documentation: test case development and reporting documentation are often the biggest time consumers in testing, and time equals cost. Documentation needs to balance detail with effort. Ironically, I spent a chunk of time this morning reading through a 37-page test plan, yet on looking to implement the first test case, I found a required parameter not mentioned anywhere in the 37 pages. I then looked at a different test plan, and it was not there either, yet I found it in seconds in that tool's automation test script.

Fundamentally, we are looking for ways to save time by focusing on the areas of highest impact. It's not about slavishly following a set process, but about defining the process to be more effective and hence increasing our value and software quality.

Speedometer Confusion

I follow Seth Godin as he has incredible insight and the ability to simplify things down to a few key words. His post, Speedometer Confusion, is one of many such posts.

Too often, being busy is confused with being productive. We set metrics like how many defects were found, how many test cases were run, and how many hours were tested. But what exactly do these metrics mean for software quality? Looking down at our speedometer, we may think we have been travelling at 200 km/hour for the last 8 hours, covering some 1,600 km. But what if, when we check the GPS, it shows we only covered 100 km?

We are often wary of establishing effective KPIs, worried about what these would really reveal, or worried that management may not understand, opting instead to track "safe" KPIs that show some form of effort rather than effect. Rather than the number of test cases, why not measure percentage coverage? Rather than defects found, why not measure defects missed? Rather than hours spent testing, hours saved?

It is nearly impossible to improve, if one fails to take an accurate measure of the current situation and set goals for the desired outcome.