
CLOUDPort Free Runtime Player for Troubleshooting

I get a lot of calls from clients having connectivity issues between the client and the services. Connecting between various labs, environments, instances, sites, etc. can be difficult for developers and testers to troubleshoot. Here is a simple, free way to confirm connectivity at the web service level.

The CLOUDPort Runtime Player is a free tool that can run mock virtualized services to test your client against. While the paid version of CLOUDPort allows you to create whatever runtimes and responses you wish, the free runtime comes with three embedded solutions: an Echo Service, a Static Response Service, and a Fault Service.

The runtime can be used in a variety of ways. The echo service is often used to check field mapping through an XML gateway or some transformation device: since the request is sent back as the response, you can confirm any manipulation of the request or response message. CLOUDPort Runtime also supports load testing, providing real-time performance information, using either the echo or static response service. I don't want to try to list all the possible use cases of the free runtime, as I am sure many of you will come up with new ways.
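
As a minimal sketch of the kind of check I mean, the snippet below posts a request to an echo endpoint and compares what comes back. The endpoint address and the SOAP payload are placeholders rather than anything shipped with CLOUDPort, so substitute whatever your runtime instance is actually listening on.

```python
# Minimal connectivity and field-mapping check against an echo endpoint.
# The URL and payload are hypothetical; point them at your own runtime.
import requests

ECHO_URL = "http://10.0.0.5:8888/echo"   # placeholder runtime address

request_body = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <CustomerLookup><CustomerNumber>12345</CustomerNumber></CustomerLookup>
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(
    ECHO_URL,
    data=request_body,
    headers={"Content-Type": "text/xml"},
    timeout=10,
)

print("HTTP status:", resp.status_code)               # confirms connectivity end to end
print("Round trip (s):", resp.elapsed.total_seconds())

# With an echo service the response should mirror the request, so any
# difference points at a gateway or transformation device in the path.
if resp.text.strip() != request_body.strip():
    print("Response differs from request - something in the path modified it")
```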


PERFORMANCE TUNING MOBILE API – CLIENT

In my first post in this series, I highlighted the need to isolate and break down the user experience into logical and measurable portions to be used as a baseline.

The client often gets the most focus: it is the device in the user's hand, at the end of the chain of factors influencing performance. Because it sits at the end of the chain, it is the sum of all the others and unarguably the user's final experience. That said, the client's own impact on performance is only the time added from the moment the device receives a complete message to the moment it is displayed, or from the point of submit until the request leaves the device. It may not be hard to identify when a mobile application is giving a poor user experience, but QA also needs to identify why.

PERFORMANCE TUNING MOBILE API – NETWORK

A mobile application by default has a network component: the portion from the phone to the Ethernet card of the API servers. Who has not seen the US commercials, "Can you hear me now?" Canadian wireless service providers spend significant effort to plan their network coverage, identify poor performance, do capacity planning, and ensure signal coverage. This includes crowd-sourcing, BI, tooling, and even driving around. Wireless networks, however, are not static, and everything from the number of leaves on the trees to the time of day affects the signal strength and capacity at a given location (wireless bandwidth is limited and shared per cell frequency and cell coverage). Add to this the nearly 10,000,000 square km of geography we have in Canada, and you can understand the enormity of testing the network.


PERFORMANCE TUNING MOBILE API – THE APIS THEMSELVES

To non-technical-minded people, an API is often just "a program running on a server somewhere" and is rarely considered as impacting the user experience. Web service APIs Provide the responses, after doing the necessary calculations, to requests made by the mobile application, and those responses populate the information on the screen. The client is all about presentation and usually does little computation. It's the API, not the client, that is doing the heavy computational tasks and hence can have a greater effect on the user experience. Sure, you can throw more computing power at it, but this does not always work.

Depending on the design, each request by the mobile device to Consume an API could result in the API Providing multiple fields. Say the API is a customer record API. A request by the client would result in the API Providing the entire customer record, even if only a small part of this response is needed. That means if the application needs to display only the customer number, yet the API Provides the entire customer record, the entire record would be transmitted to the device, which would drop everything but the customer number. On the other hand, any screen on the client can Consume more than one API. Say a second API Provides the order history for a given customer number, and the client application has a screen that displays the customer number and the last order made. The client would first request the customer record API, which would Provide the entire customer record. The client would then send the customer number as part of its request to Consume the order history API, and the server would Provide a response after doing the necessary computation to generate the order history. This computation could rely on an external DB or CRM system (what we call an Enabler). The client would then populate and display the page with just the portions needed. We call this workflow a chained service: the response from one request being used as the request for another.
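
A rough sketch of that chained workflow might look like the following. The host, paths, and field names are purely illustrative (a real deployment may well be SOAP/XML rather than REST/JSON), but it shows one response feeding the next request and the client discarding most of what it received.

```python
# Sketch of the chained call described above. Endpoints and field names are
# hypothetical placeholders, not a real API.
import requests

BASE = "https://api.example.com"          # hypothetical API host

# 1. Consume the customer record API - the full record comes back even
#    though the screen only needs the customer number.
customer = requests.get(f"{BASE}/customer/12345", timeout=10).json()
customer_number = customer["customerNumber"]

# 2. Use that field in the request to Consume the order history API
#    (the "chained" step - one response feeding the next request).
orders = requests.get(
    f"{BASE}/orders",
    params={"customerNumber": customer_number},
    timeout=10,
).json()

# 3. The client keeps only what the screen displays and drops the rest.
screen = {
    "customerNumber": customer_number,
    "lastOrder": orders["orders"][0] if orders.get("orders") else None,
}
print(screen)
```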

The time taken for an API to respond includes this logic or computation, which can involve look-ups on other systems (Enablers): identity validation, DB lookups, and even external systems such as partner shipping systems or foreign trading systems. Robust, sustainable APIs should be kept small, lightweight, and client-independent to ensure their modularity and reusability. These were not always the design criteria, however, and many older services are monolithic and tightly coupled with the client. API gateways are often used to mediate and create new, lighter-weight services for mobile applications, adapting protocol and message format and creating virtual partial APIs to strip unwanted traffic off the network portion. These gateways can offer caching and performance improvements, but they can also be sources of latency.
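
To illustrate the mediation idea, here is a small, hypothetical sketch of the field-stripping a gateway's virtual partial API performs; the record layout and field names are invented for the example.

```python
# Illustrative only: the kind of trimming an API gateway does so a full
# backend record never crosses the mobile network. Field names are made up.
def partial_customer_view(full_record: dict) -> dict:
    """Return only the fields the mobile screen actually needs."""
    return {
        "customerNumber": full_record.get("customerNumber"),
        "name": full_record.get("name"),
    }

backend_response = {
    "customerNumber": "12345",
    "name": "Jane Doe",
    "billingHistory": ["..."],   # large sections the client never displays
    "addresses": ["..."],
    "notes": ["..."],
}

print(partial_customer_view(backend_response))
```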

The rapid growth in mobile application development has resulted in many new technologies and emerging standards. Newer, lighter-weight protocols like REST are generally used versus the more mature, heavyweight SOAP. New encryption methodologies like elliptic curve cryptography are common since they require less client CPU processing. New identity formats like SAML and OAuth are used to address identity in the cloud and mobile arena. These emerging technologies are often still in early or pre-standard development and relatively immature. Furthermore, developer and QA skills in these new standards are very limited and in extremely high demand. When last did your team get training on one of these emerging technologies, or have they simply learned them while developing your mobile application? It is unlikely that a business can expect the same level of maturity and quality in mobile applications as it may in other, more traditional development, and the fault density will probably be far higher in new mobile application development.

API Performance = User Experience – (Client + Network + Enablers)

From the Client and Network posts, we already know the Network and Client performance impacts. Running the same test case locally against the API gateway or server provides the performance for API + Enablers.
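
As a worked illustration of that subtraction, the figures below are made-up placeholders standing in for measured results; the point is only the arithmetic of isolating the API + Enablers share.

```python
# All timings are placeholder numbers; in practice they come from test runs.
user_experience = 2.40   # seconds, measured end to end on the device
client_time     = 0.30   # receive-to-display plus submit-to-send on the device
network_time    = 0.50   # measured mobile network contribution

# Same test case replayed locally against the API gateway/server:
api_plus_enablers = 1.60

# Cross-check: the end-to-end figure should roughly equal the sum of parts.
assert abs(user_experience - (client_time + network_time + api_plus_enablers)) < 0.1

api_and_enabler_share = user_experience - (client_time + network_time)
print(f"API + Enablers account for {api_and_enabler_share:.2f}s of the experience")
```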

I constantly recommend that performance testing be done earlier in the testing and development life-cycle. Since the same SOAPSonar test cases can be used for both functional and performance testing, performance testing should start as each service is validated as functional, on a service-by-service basis. SOAPSonar will report the individual request and response time for each service, including each step in a chained service. A client application may only show the end result of the chained service, or the service that responds slowest. Testing via the device may measure the user experience, but it provides little information about which service is slowing things down.

An important part of performance testing is understanding the impact of load on performance. This is usually done right before production cutover, which often leaves little time for time-consuming rewrites; the result is often over-architected hardware or network to compensate. SOAPSonar can use the same test case with Virtual Agents to generate load (including across physically distributed load agents), reporting on the impact on performance at a given TPS, or showing at what point the system will begin to fail. Defining a success criterion in the test case that fails any test taking longer than a given time can help identify individual services that start failing under load or during a regression test.
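
For the idea of a time-based success criterion under load, here is a minimal sketch (SOAPSonar's Virtual Agents do this at far larger scale and with proper reporting); the URL, concurrency, and threshold are assumptions for illustration.

```python
# Minimal load sketch: fire concurrent requests and flag any that exceed a
# response-time success criterion. URL and thresholds are placeholders.
import concurrent.futures
import requests

URL = "https://api.example.com/customer/12345"   # hypothetical service under test
SUCCESS_CRITERION_S = 1.0                        # fail any request slower than this
CONCURRENCY = 20
TOTAL_REQUESTS = 200

def timed_call(_):
    r = requests.get(URL, timeout=30)
    return r.elapsed.total_seconds(), r.status_code

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_call, range(TOTAL_REQUESTS)))

slow = [t for t, _ in results if t > SUCCESS_CRITERION_S]
print(f"{len(slow)}/{TOTAL_REQUESTS} requests exceeded {SUCCESS_CRITERION_S}s under load")
```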

SOAPSonar can detail the performance of each request made to each API and its response time, hence identifying any particular APIs that do not perform well. If these APIs are supported by an Enabler, distinguishing a poorly performing Enabler from the API itself requires additional isolation.

PERFORMANCE TUNING MOBILE API – ENABLERS

The logic behind APIs can involve look-ups on other systems. APIs also address things like identifying the user, authentication, security, encryption, message signing, etc.; many of these functions reside on other centralized or shared systems. A trend in the Data Economy, SaaS, and Cloud services is to do mash-ups of APIs, either Consuming APIs from outside your organization as part of your application, or Consuming, refining, and then Providing them in turn. Collectively, I refer to these as Enablers: services that support your application, yet are not exclusive to it or under your control. The vast majority of the performance issues I have been involved in were due to some Enabler or other being slow or failing under load.
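
One way to start isolating an Enabler, where its endpoint is reachable from your test environment, is to time it directly with the same inputs and compare against the wrapping API; both URLs below are hypothetical, and often the Enabler can only be reached from inside the data centre.

```python
# Compare the wrapping API's response time with the Enabler it depends on.
# Both endpoints are placeholders for illustration.
import requests

API_URL     = "https://api.example.com/orders?customerNumber=12345"
ENABLER_URL = "http://crm.internal.example.com/orders/12345"

api_time     = requests.get(API_URL, timeout=30).elapsed.total_seconds()
enabler_time = requests.get(ENABLER_URL, timeout=30).elapsed.total_seconds()

print(f"API total: {api_time:.2f}s, of which Enabler: {enabler_time:.2f}s")
print(f"API-only logic roughly: {api_time - enabler_time:.2f}s")
```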


Performance Testing Mobile Applications – Conclusion

In conclusion, I wanted to answer a few questions and comments that I have received on the series, and to summarize.

When I was at junior school, I was asked what I wanted to be one day. I responded: a scientist and inventor. The response I got stuck in my mind, as it seemed totally at odds with how I perceived this career path. While I thought each day would bring new things, the "scientific method" is about understanding the impact of dependent and independent variables: running the exact same test over and over again, with a small change to a single variable, to understand the effects that variable has.

User Experience = Enablers + API + Network + Client.