
KPI – Identifying Areas in Process that Require Tuning

Both Six Sigma and CMMI focus on creating and managing a feedback loop: assessing the current status of your organization and then identifying areas for improvement. To do this, KPI or Measurement Objectives are needed to measure the effectiveness of your organization's Tools, People and Processes. This means measuring details like Defect Density, Test Coverage, Total Defects, Reliability etc., then collecting the relevant data and analysing these measurements to provide insight into the results, set goals and track progress.

One of my favourite tools or methods to visualize and analyse KPI is the Pareto graph. Its simplicity makes the 80/20 rule stand out to any level of an organization. In the example above, the first 7 groups of defects clearly account for well over 80% of all defects. The impact of addressing the others would be minimal, while a small change in the first 3 could have a major impact on your organization's quality.
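As a sketch, the cumulative percentages behind a Pareto chart can be computed directly from defect counts. The category names and counts below are hypothetical, not taken from the chart above; in practice they would come from your defect-tracking system for the period being analysed.

```python
# Hypothetical defect counts per category (illustrative only).
defects = {
    "Developer Defects": 120,
    "Business Changes": 75,
    "Design Defects": 48,
    "Environmental Defects": 30,
    "QA Defects": 12,
    "Other": 8,
}

total = sum(defects.values())
cumulative = 0
# Sort descending by count, as a Pareto chart does, and track the
# running share of all defects.
for name, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{name:22} {count:4}  {cumulative / total:6.1%} cumulative")
```

The categories that carry the cumulative share past roughly 80% are the ones where tuning effort pays off; with these made-up numbers, the top three groups already account for over 80% of all defects.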

Potential issues with the Pareto graph

Although extremely helpful in identifying areas that need to be addressed, there are some potential pitfalls in relying on Pareto analysis too heavily. These include:

  1. It is important to define and group the categories correctly. They need to be precise, so that there can be little doubt as to which category a defect fits into, and they need to be of a suitable granularity. Larger groups can make addressing the sub-issues within them difficult, yet too many sub-groupings can result in losing the visual effect and intent of the Pareto.
  2. Selecting appropriate time periods is important. Measuring only a single release cycle or team can result in wild fluctuations in results, while consolidating multiple teams, releases etc. can cloud visibility into a particular team's or release's issues.
  3. If changes are made to the Tools, People and Process to address the issues identified in the Pareto chart, adding them to the same period will not offer any insight into the effectiveness of these changes. These changes need to be compared to a previous baseline. Baselines, however, can rapidly become dated.
  4. Pareto analysis draws focus to the areas of greatest need, but a laser focus only on those areas can rapidly destabilize others. Some of these other fixes could be low-hanging fruit requiring little effort to resolve.
  5. Although Pareto analysis identifies common areas that may require focus or consideration, it does not supply solutions.

Poll

What I have learnt is that when someone sits down and actually gathers the information and generates a Pareto chart for analysis, they are almost always surprised by what is revealed. For the benefit of all, I wanted to take a short poll, looking first at teams that do use Pareto analysis and teams that don't. What I would be interested in seeing is whether these teams see a common or a different set of issues. In other words, does doing a Pareto analysis change people's lists of most common defects? The results will be collected privately, consolidated by me, and posted at a later date. Comments at the bottom of the page are public and posted immediately, but comments in the poll are kept private unless you stipulate in them that you wish to be mentioned.

Please take a minute to weigh in and help others ST3PP ahead.

[Poll form: Screen Name; "Does Your Organization Use Pareto Analysis" (Yes – we review them frequently / No – I have never seen one for my organization); then the first through fifth most common defect groups, each selected from: Business Changes – missing or added business functionality; Design Defects – missing or incorrect architecture requirements; Developer Defects – missing or defective functionality in code; Environmental Defects – versioning, enablers or other site issues; QA Defects – incorrectly identified as a defect; Other – please describe in comments; plus an optional private comment field.]

 

Calculating QA Costs – Service Planning Costing (SPC)

The series on Costing Models Includes Back In Costing (BIC), Agile Anarchy Method (AAM), and Just Test Something (JTS) Method. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope being that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

Service Planning Costing is done by calculating the desired number of test cases, multiplying that by the time needed to complete them and the cost of that time, and then adding any additional project costs. It requires a full understanding of the project before costing can be done, along with detailed project planning and scope.

SPC Model Inc. is costing a planned project. The new project will have 50 new web services, each averaging 5 functions per service. SPC Model Inc. standards require testing a minimum of one positive and one negative scenario per service function. Multiplying these numbers together, SPC Model Inc. plans for 500 test cases. The project plan calls for 5 expected development cycles or code drops. The security team will need to run their set of tests and the performance team theirs. They also need to do a final lab and production implementation test. It's estimated that their current team skills are at a level where each test case will take an average of 12 minutes to develop and complete, and the corporate rate is $50 per hour for QA resources. Finally, they plan on purchasing a new tool, 2 lab machines and training, and need to pay for hiring and on-boarding a new employee. The Service Planning Cost (SPC) works out to $63,000 for this project, as per below.

[Figure: SPC Base cost breakdown]
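As a sketch, the labour portion of the SPC figure can be computed from the numbers above. The split between labour and the remaining fixed costs (security and performance passes, final lab/production test, tool, lab machines, training, hiring) is my assumption, since the post does not itemize them; only the $63,000 total is stated.

```python
# Sketch of the SPC calculation from the SPC Model Inc. example.
HOURLY_RATE = 50           # corporate QA rate, $/hour
MINUTES_PER_TEST = 12      # develop + complete, team average

services = 50
functions_per_service = 5
scenarios_per_function = 2  # one positive and one negative

test_cases = services * functions_per_service * scenarios_per_function  # 500
code_drops = 5
executions = test_cases * code_drops            # 2,500 test executions

labour_hours = executions * MINUTES_PER_TEST / 60   # 500 hours
labour_cost = labour_hours * HOURLY_RATE            # $25,000

# The remaining line items are not broken out in the post; together
# they account for the rest of the stated $63,000 total.
other_costs = 63_000 - labour_cost

print(f"Test cases: {test_cases}, executions: {executions}")
print(f"Labour: {labour_hours:.0f} h -> ${labour_cost:,.0f}")
print(f"Total SPC estimate: ${labour_cost + other_costs:,.0f}")
```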

The SPC model is focused on breaking the testing down into as many small pieces as possible and assigning a value to each piece. Variations are possible, using averages across the project or breaking the project down further into stages. For instance, the first release may have fewer test cases or services, security testing could be a separate calculation, or the hourly rate could include training and hiring. The more granular the breakdown, the greater the potential accuracy, but also the greater the risk that a small error in any one piece compounds. Numbers like release cycles and time per test can be inherited by reviewing previous, similar projects or by running a short pilot. Whatever numbers are used, the result is still based on a plan, and plans can go wrong.

Advantages

  1. Although some of the numbers may come from previous projects, the actual costs are based on real target project scope.
  2. The model supports organization test coverage standards or objectives. The number and extent of test cases is planned for positive, negative, load, security etc.
  3. Very easily communicated, it aids understanding between on-shore and off-shore teams and management, ensuring all parties work to a defined plan. This is one of the reasons this model is popular with outsourced organizations: it reduces project risk and clearly defines the scope to prevent scope creep.
  4. Timelines, gates, KPI and milestones are known.
  5. Inefficiencies can readily and clearly be identified, and costs easily understood, for instance the cost of an additional code drop.

Disadvantages

  1. The line items can be very subjective, hence vulnerable to padding or underestimating. A very small mistake in any line item is compounded and can result in significant over- or under-costing; for example, changing the average time per test case by just 2 minutes.
  2. It does not allow for much flexibility in changing the plan. For instance, a particular service may warrant additional testing, or an additional requirement or unforeseen services may be added.
  3. It is still dependent on other parties' delivery. What if an extra code drop becomes needed because a previous issue was not fixed correctly?
  4. Relies on a totally flexible environment. A testing team cannot always be expanded or contracted in real time.
  5. It does not support the agile development model well.
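The compounding in disadvantage 1 is easy to demonstrate with the SPC example's numbers: a 2-minute slip in the average time per test case moves the labour estimate by thousands of dollars.

```python
# Sensitivity of the SPC labour estimate to the per-test-case average,
# using the example's 500 test cases, 5 code drops and $50/hour rate.
executions = 500 * 5    # total test executions across all code drops
rate = 50               # $/hour

for minutes in (10, 12, 14):
    cost = executions * minutes / 60 * rate
    print(f"{minutes} min/test -> ${cost:,.2f}")
```

A swing of only 2 minutes either way moves the labour line by about $4,200, roughly 17% of the $25,000 estimate, before any other line items are touched.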

CONCLUSION

Service Planning Costing (SPC) is not a silver bullet or an exact science, and it does not prevent misuse through padding to target a desired price or outcome. It can, however, be an extremely valuable tool for analyzing expenses, identifying inefficiencies and managing KPI. These KPI can be carried from one project to the next and "tuned" over time to plan better.

I will be covering more around these KPI, and how an organization, or individual, may focus on the SPC model to "Dial In" for excellence, in future posts and at my TASSQ presentation.

Tuning in for QA Excellence

The future of QA, especially in higher-salaried countries like Canada, is Excellence. Manual testing via repetitive basic unit tests, "blindly entering data", has little future, or returns. I can't get enough of articles like Computer World's Tech hotshots: The rise of the QA expert. In a rapidly changing world of technology, QA departments can sometimes seem like they are locked in stasis, 30 years in the past. The greater challenge for any tool vendor is gaining these groups' interest in evaluating change, not the tool's features or functionality.

I usually divide Software QA into 3 buckets or aspects. The first and largest expense in most QA environments is People: the time costs for fingers and eyes to enter "data". It's not surprising, then, that much of QA cost-cutting effort is targeted at reducing People costs. Often People "tuning" amounts to cutting rather than development. When last did your QA department do any training or personal development? When last, as a QA professional, did you do any training on your own initiative, perhaps signing up for a JSON introduction course or learning a new tool? "But as an Automation tool vendor, don't you replace the need for People?" The answer is no; we require higher-skilled people to develop and use automation tools. Someone still needs to plan, develop and run the test cases, else who would we sell tools to? Fewer fingers and eyes maybe, but far more thought and Process.

That brings us to the second aspect of QA: Process. Business constantly needs to re-invent itself, and tuning business process is an important part of competing and achieving greater excellence. Yet how many software QA teams have you heard described as innovative? When last did you try an alternate, experimental process, or embrace role changes rather than object to proposed ones? Have you tried getting QA involved earlier in the development cycle, or changing the requirements for hand-off from developers? Perhaps you brought some tests (like performance or identity) into the testing cycle earlier to provide more time for code fixes. Changing process requires the right people and the supporting infrastructure.

That brings us to the 3rd aspect of Software QA: support infrastructure, or Tools. Just like People and Process, Tools by themselves will have little impact; they require the support of the people and the process. So often I hear of someone developing an in-house tool for testing, usually showing great skill but taking significant time. This person then leaves the company, leaving no one who knows how to use this expensive tool. I know 3 large enterprises that used the same person to develop such a tool; that person is no longer with any of these companies. Tools cannot take more people to maintain than manual testing would. So too must the tool support your business process; there is little point having a great tool in the lab if the code drop is delayed and the team sits idle waiting.

The 3 aspects are not just linked but act as multipliers, with a compounded effect. For example, if your process is streamlined to cut one development cycle out of testing, it will impact both the people and tools required. If a tool can cut 5 minutes off a test cycle, significant impact can be seen in people and process. If you do both, cutting out a development cycle and saving 5 minutes a test case, the effect is multiplied, resulting in huge impact across the entire SDLC in either more testing or less expense. Only through "dialing in" and tuning all 3 aspects can Quality Assurance Excellence be achieved.
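The multiplier effect can be sketched with hypothetical numbers (the 500 test cases, 5 code drops and 12 minutes per test are borrowed from the SPC example earlier in this series, purely for illustration):

```python
# Hypothetical baseline: 500 test cases, 5 code drops, 12 min per test.
TEST_CASES = 500

def labour_hours(code_drops, minutes_per_test):
    """Total testing hours for the release."""
    return TEST_CASES * code_drops * minutes_per_test / 60

baseline = labour_hours(5, 12)       # 500 hours
process_only = labour_hours(4, 12)   # cut one code drop: 400 hours
tools_only = labour_hours(5, 7)      # save 5 min/test: ~292 hours
both = labour_hours(4, 7)            # combined: ~233 hours

print(f"baseline: {baseline:.0f} h")
print(f"process change alone: {process_only:.0f} h")
print(f"tool change alone: {tools_only:.1f} h")
print(f"both together: {both:.1f} h")
```

The two savings multiply rather than merely add: a 20% process saving and a ~42% tool saving combine into a ~53% reduction in testing hours.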

So please, take some time to consider how you can personally achieve greater excellence as a QA professional. I for one, personally and as a tool vendor, support any such activity.

Calculating QA Costs – Just Test Something Method (JTS)

Adding to the costing models in the series, alongside Agile Anarchy Costing, Backing In Costing and Service Planning Costing, Just Test Something is one of the more common. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope being that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

The Just Test Something (JTS) method of costing is when the price for a QA project is determined by external pressure and not by formal requirements for test coverage. Where BIC relied on past precedent, JTS is usually based on time to market, costs, or some other business requirement or external force.

Now, before you grab your pitchforks and start packing the wood around the stake for my heretical costing model, let me mention that "…whatever we have time for…" was the most common response and feedback I got for the Percentage Coverage article. It was also the most common QA tester (vs. QA manager) opinion on how they cost QA, with statements like "we get x weeks to test what we can".

For example, JTS Model Inc.'s marketing department issued a press release that JTS Software 1.0 would be GA in 2 weeks' time. The final cut-off for any potential code fixes was, however, determined in a Go/No Go meeting 32 hours before GA. Development was still struggling to add the last few change requests added by the user requirements team. They expected the next code drop to be ready in 3 days' time, but delivered it 5 days later, leaving QA 3.5 work days of testing time. Working extra hours, QA was still finding a significant number of quality issues at the deadline meeting. The CEO decided, however, that the GA deadline was of greater concern than any yet-undiscovered code issues. As a result, performance, load and security testing were skipped, and only basic functional tests covering some un-calculated percentage of the application were completed.

Typical of start-ups and immature QA departments, Just Test Something (JTS) is often the result of a lack of QA focus and is common in organizations that consider QA only an "un"necessary evil, often bordering on having users do final QA in production. Not to be confused with the recent trend of offering bug bounties, JTS places QA on a scale somewhere between "wish we could skip this step/expense" and "I guess we have to say we did SOME LEVEL of QA in the check box".

JTS takes little account of the number of services, percentage coverage or number of release drops. In fact, it usually involves very little planning and is mostly reactive. The process being "whatever we have time for, get busy", what is to be tested, and how, is left up to a QA manager or even the individual tester to decide.

Advantages

  1. Low accountability and plausible deniability: QA can always say it was not given enough time, and usually there is a certain amount of acceptance that defects will make it into production.
  2. Costs are usually tightly controlled and known; it's only the outcome (quality) that is estimated. Seldom do issues like percentage coverage, number of services, number of test cases etc. need to be described to management.
  3. Testers' focus and time naturally shift to the frequently used parts of the application, or the parts that have more defects in the code, since the testing structure is less rigid and testing is focussed on highest priority, not total coverage.
  4. The flexibility for testers to test as and how they see fit, determining their own tools, process and focus, together with the low accountability, is attractive to some QA staff.
  5. A final Go/No Go meeting is usually part of the SDLC, in which more than just QA weigh in on whether "enough" testing was done. If the level of QA is too low, this meeting can provide a last-minute reprieve.

Disadvantages

  1. QA's role is heavily diminished, lessening their credibility and their ability to weigh in and ask for an extension in a Go/No Go meeting. Often little formal gating is done, and code is thrown at QA to get it off development's plate, resulting in frequent release cycles.
  2. The lack of process can result in QA's attention and resources not being evenly distributed, resulting in QA testing the most common parts of the code multiple times while ignoring others. The result is uneven QA coverage and possibly deeply embedded defects that can be missed for many releases.
  3. Certain steps, for example performance or security testing, are generally sacrificed more frequently due to the constraints. Eventually these steps fall out of the testing process entirely, as it becomes expected that they will be ignored.
  4. Usually the organization's lack of focus on QA results in little training or education spent on developing QA Skills, Process or Tools; the focus, if anything, is on reporting progress. This further decreases the efficiency of what little QA is being done.
  5. Poor QA rapidly leads to a poor reputation. At some point management's focus shifts to "Fixing Quality", and alternate QA strategies like outsourcing, off-shoring and restructuring become commonplace as attempts are made to repair previously missed defects and a "weak" QA organization.

Conclusion

In reality, any QA department needs to balance time to market and other pressures with QA coverage. As mentioned in the first of this series, these are not static models; rather, companies may use one or more of them, positioned on a scale from slightly applying a model to mostly utilizing it. QA may wish they had unlimited time and resources at their disposal to do 100% test coverage, but this is rarely the case. What defines the JTS model is that QA coverage is determined by the pressure placed on it, and not by any particular level of due diligence.

So put away your pitchfork, and add a comment below if you wish to add to or detract from this post.

Calculating QA Costs – Backing In Costing Method (BIC)

The series on Costing Models Includes Service Planning Costing (SPC), Agile Anarchy Method (AAM), and Just Test Something (JTS) Method. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively.  My hope being that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

Very few organizations use one model or another 100%. These models are neither exclusive nor on a rigid scale. Rather, most organizations will use one model as their primary model and perhaps have one or more secondary models, placing them somewhere on a scale between one or more of these models.

The Backing In Costing (BIC), or baseline-in, model uses past measurements to determine future costs, with very limited desire to change Process, Coverage or anything else for that matter. Similar to, yet not to be confused with, regression testing, a baseline from previous year(s) is used to work backwards to cost future projects.

For example, last year BIC Model Inc. had 10 QA staff who delivered 2 large projects to the relative satisfaction of everyone. Last year's costs were based on the previous year's, and no one remembers how those were calculated. They established a baseline and set the expectations within their organization and to their customers. The process was settled, roles understood and the number of production errors accepted. The entire cost allocated to QA last year was $600,000. That is $300,000 per large QA project and an average of $60,000 per QA staff member (the hourly QA costs, although rarely used, are seen as $30.36).

This year BIC Model Inc. has a global executive requirement for a 10% reduction in workforce and costs. However, they have 2 similar-size projects and a 3rd project that management accepts is ABOUT 50% the size of the other two. This year QA will be fortunate to have an increased budget of $675,000, BUT "Don't consider this a new baseline; next year we drop back to $540,000 from last year." The amount is calculated by taking the baseline of $600,000, subtracting the 10% annual savings target to get $540,000, then adding 25% for the additional half-size project. BIC Model Inc. decides to spend this additional money by adding 2 fresh new recruits to QA at $37,000 a year each to offset the additional workload expected.
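The BIC arithmetic from the example above, as a sketch:

```python
# Last year's accepted QA cost is the baseline everything backs into.
baseline = 600_000
per_project = baseline / 2    # $300,000 per large project
per_staff = baseline / 10     # $60,000 per QA staff member

# Apply the corporate 10% savings target...
savings_target = 0.10
reduced = baseline * (1 - savings_target)   # $540,000

# ...then add 25% for the third project, accepted as ~50% the size of
# one of the two large projects (half of the usual half-share).
budget = reduced * 1.25                     # $675,000

extra = budget - reduced                    # $135,000 over the reduced base
new_recruits = 2 * 37_000                   # $74,000 of it goes to 2 hires

print(f"Reduced baseline: ${reduced:,.0f}")
print(f"This year's budget: ${budget:,.0f}")
```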

The BIC model is focussed on previously accepted and predictable costs and on maintaining the status quo. It is usually affected by annual cost-reduction efforts, and rarely is the team as successful as in the example above at raising the budget.

It's often characterized by management seeing little benefit in training staff or updating tools, as these bring change and possibly new costs. While vocal supporters of streamlining process, there is usually much resistance to actual change, until some accepted baseline measurement, like the usual amount of production errors, fails to meet expectations. The motivation then is often to streamline process just enough to maintain the previous baseline.

Advantages

  1. Costing is based on real corporate experience and history. There are no unexpected surprises like additional corporate expense sharing (health or retirement plans, desk space leases etc.).
  2. It's an easy sell to executives; they expected the costs, and there is no need to explain percentage coverage calculations, number of services, test coverage, release cycles, emerging technology concerns etc.
  3. People tend to know their roles and relationship between development, business and QA is usually mature.
  4. It's very subjective regarding the size of projects. In the example above, how similar in size are these projects really? This allows for a certain amount of exaggeration.
  5. Software maintenance and regression testing costs are usually well understood.

Disadvantages

  1. Backing in to how much testing will be done is only loosely based on the amount of testing required. It's very subjective regarding the size of projects; time per test case development, number of test cases, number of software releases etc. can often be ignored.
  2. The focus is on maintaining the status quo and not on improvement. There is often little progression: few changes to process, promotions, training, introductions of new tools, or skills development.
  3. The baseline is often dated. Rather than re-calculating the baseline after each project, to better understand the time per test case development, number of test cases, number of software releases etc., the baseline can be years old.
  4. Bad process, habits,  practices, people etc. become part of the baseline to be protected.
  5. Corporate cost-cutting initiatives, like the 10% example at BIC Model Inc., can eventually be the "straw that breaks the camel's back". On the other hand, QA is constantly living under the threat of cuts and needs to show that each year is equal to or larger than the previous. Annual negotiation becomes a constant battle for survival, balanced against expectations on the delivery of quality. Subjective numbers, not backed by detailed baselines, can become very inflated.

Conclusion

Taking a baseline at the end of each project, to understand the impacts of new process, people, tools etc. and to feed your calculations for any future project, is good practice, as is evaluating one project vs. another. Where the BIC model can rapidly fail is when the detail or level of the baseline is poor or the baseline becomes outdated, and the amount of testing to be done is backed into, rather than these baselines being used to calculate forward with a more detailed costing model.

The second in this series covers Just Test Something (JTS) Method.

Did I miss an advantage or disadvantage? Please feel free to comment below. My next costing model will be posted shortly.

What is the future for QA Analysts in Canada?

I wanted to generate some thought on how the QA role is changing in Canada.

The largest fixed cost associated with QA is by far that of labour. Let's look at some average QA analyst salaries, using Salary.com's Canadian median numbers to avoid debate.

Level 1 QA Analyst 0-2 years experience C$53,463 /year C$26 /hour
Level 2 QA Analyst 2-4 years C$61,839 /year C$30 /hour
Level 3 QA Analyst 5+ years C$75,585 /year C$36 /hour
QA Manager 8+ years C$93,642 /year C$45 /hour
QA Director 12+ years C$107,355 /year C$52 /hour

Finding the right balance and mix of staff, seniority and skill set against workload is a difficult endeavour. Do you set aside individuals to do client vs. server, performance vs. functionality, regression vs. pre-production etc.?

For simplicity, let's say Acme Inc. has 10 QA staff, all Level 2 QA Analysts, and one QA director. Their application has 4 releases a year. The calculation for wages expense would be 10 employees x C$61,839 = C$618,390, plus C$107,355 for the director, for a total of C$725,745, excluding additional costs associated with an employee like office space, equipment, management overhead etc. That's C$181,436.25 a release. It's not surprising that management focus is often on ways to reduce this amount. Attempts at cutting the cost of labour usually come in a few forms.
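The Acme Inc. arithmetic, as a sketch (salaries are the Salary.com medians quoted in the table above):

```python
# Acme Inc. annual QA wage expense and cost per release.
level2_salary = 61_839      # Level 2 QA Analyst, C$/year
director_salary = 107_355   # QA Director, C$/year
staff = 10
releases_per_year = 4

annual_wages = staff * level2_salary + director_salary   # C$725,745
per_release = annual_wages / releases_per_year           # C$181,436.25

print(f"Annual wages: C${annual_wages:,}")
print(f"Per release:  C${per_release:,.2f}")
```

Note this is wages only; office space, equipment and management overhead would sit on top, as the post says.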
