Performance Tuning Mobile API – Introduction

Mobile applications present a number of unique testing challenges, adding complexity through an ever-expanding number of variables. Along with the usual testing concerns, there is an array of devices, uncertain networks, and the still-emerging mobile services standards themselves. Business people want to focus on the user's experience, attempting to gain some level of certainty in what is still a very uncertain and evolving world, and they hand developers and QA requirements that can be extremely difficult, costly, or even impossible to validate. Take a poorly crafted requirement: "the application screens will refresh in less than 3 seconds on devices and networks".

When, in field testing, a device consistently takes more than 3 seconds to refresh a screen, what went wrong? Did developers or QA fail to meet the requirement? When performance is poor, how do developers and testers pinpoint the problem? Is it even a software issue that can be fixed? Is it the device's memory or CPU load, its screen size or storage space, the client software, or perhaps a bug in that particular client version or OS? Perhaps it is the user's geographic location, the wireless provider, the type of network, or the signal strength at the user's physical location. Or perhaps it is one particular service: identity, encryption, authentication, or some 3rd-party service used by the mobile application. There are literally thousands of possible causes and combinations, making it impossible to consider, test, and validate them all. Tuning for the best chance of success, however, is possible.

With the problem being the sheer number of variables, it is no surprise that best practice usually involves dividing these variables into groups and then testing to understand the impact each particular group has on performance, rather than attempting to test every permutation. By breaking the entire experience into smaller components and understanding the impact of each, variances can be identified far more easily. Although every organization and application is different, let's look at four groups.

  1. Client – The device, its operating system, and the applications on it, including your own.
  2. Network – The wireless and wired communication path from wherever the client is to the Ethernet port of the API.
  3. API – The services consumed by the mobile application.
  4. Enablers – The services, such as databases, identity, and 3rd-party services, that support the API.

This can be represented as:

User Experience = Enablers + API + Network + Client.
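As a back-of-the-envelope illustration of this equation, the sketch below (hypothetical numbers and function names, not from any particular tool) attributes a total screen-refresh time to the four groups. It assumes the API and enabler times are reported by server-side timers and the client's rendering time comes from on-device profiling, and treats network time as whatever remains once the directly measurable components are subtracted:

```python
def decompose(total_s: float, api_s: float, enabler_s: float, client_s: float) -> dict:
    """Attribute a total screen-refresh time to the four groups.

    Network time is estimated as the remainder after subtracting the
    components we can measure directly on the server and the device.
    """
    network_s = total_s - api_s - enabler_s - client_s
    return {
        "enablers": enabler_s,
        "api": api_s,
        "network": network_s,
        "client": client_s,
    }

# Illustrative only: a 3.2 s refresh where the server spent 0.4 s in the
# API and 0.9 s waiting on enablers, and the device spent 0.5 s rendering.
parts = decompose(total_s=3.2, api_s=0.4, enabler_s=0.9, client_s=0.5)
print(parts["network"])  # the unexplained remainder, attributed to the network
```

A breakdown like this is what makes the "3 seconds" requirement actionable: if the network remainder dominates, no amount of server tuning will meet it.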

The next post looks at the client: ways to isolate and understand the impact the client has on your overall performance when troubleshooting.

CLOUDPort Free Runtime Player for Troubleshooting

I get a lot of calls from clients having connectivity issues between the client and the services. Connecting between various labs, environments, instances, sites, etc. can be difficult for developers and testers to troubleshoot. Here is a simple, free way to confirm connectivity at the web service level.

The CLOUDPort Runtime player is a free tool that can run mock virtualized services to test your client against. While the paid version of CLOUDPort lets you create whatever runtimes and responses you wish, the free runtime comes with 3 embedded solutions: an Echo Service, a Static Response Service, and a Fault Service.

The runtime can be used in a variety of ways. The echo service is often used to check field mapping through an XML gateway or some transformation device: because the request is sent back as the response, you can confirm any manipulation of the request or response message. The CLOUDPort Runtime also supports load testing, providing real-time performance information using either the echo or the static response service. I won't try to list all the possible use cases of the free runtime, as I am sure many of you will come up with new ones.
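To make the echo-service idea concrete, here is a minimal, generic sketch using only Python's standard library. This is an illustration of the pattern, not CLOUDPort itself: a tiny HTTP server that returns whatever request body it receives, which is enough to verify whether a gateway or transformation device sitting in front of it is modifying the message.

```python
import http.server
import threading
import urllib.request

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Echo the POSTed request body straight back as the response."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type",
                         self.headers.get("Content-Type", "text/plain"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # suppress per-request console logging

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Point a client (or a gateway under test) at the echo endpoint.
url = f"http://127.0.0.1:{server.server_address[1]}/"
req = urllib.request.Request(url, data=b"<ping/>",
                             headers={"Content-Type": "text/xml"})
with urllib.request.urlopen(req, timeout=5) as resp:
    echoed = resp.read()  # identical to the request body if nothing altered it
server.shutdown()
```

In a real troubleshooting session you would put the gateway or transformation device between the client and this endpoint; any difference between what was sent and what comes back is the manipulation you are looking for.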
