Calculating QA Costs – Agile Anarchy Method (AAM)

Continuing the series on QA costing methodologies, alongside Just Test Something (JTS), Backing In Costing (BIC) and Service Planning Costing (SPC), this post covers the Agile Anarchy Method (AAM). The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope is that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

Let me start by saying that the Agile Anarchy Method of COSTING is not the full Agile methodology. The Agile Anarchy Method (AAM) is about maintaining a certain agility (anarchy) in order NOT to be tied to fixed costs.

It’s about the bleeding costs of Agile: selectively applying only parts of Agile methodology to the development process. AAM is usually used in organizations where there are frequent changes to the environment, the QA scope or the software being tested. It can even be found in organizations following a more traditional waterfall process for development. It’s also common in rapidly growing or newly founded organizations, where immature process and reactive planning are sometimes dressed up by calling them Agile, rather than having to admit that it’s really just anarchy.

Tuning in for QA Excellence

The future of QA, especially in higher-salaried countries like Canada, is excellence. Manual testing via repetitive basic tests, “blindly entering data”, has little future and few returns. I can’t get enough of articles like Computerworld’s Tech hotshots: The rise of the QA expert. In a rapidly changing world of technology, QA departments can sometimes seem locked in stasis, 30 years in the past. The greater challenge for any tool vendor is gaining these groups’ interest in evaluating change, not the tool’s features or functionality.

I usually divide software QA into 3 buckets or aspects. The first and largest expense in most QA environments is People: the time costs of fingers and eyes to enter “data”. It’s not surprising, then, that much of QA cost cutting is targeted at reducing People costs. Often People “tuning” amounts to cutting rather than development. When did your QA department last do any training or personal development? When did you, as a QA professional, last do any training on your own initiative, perhaps signing up for a JSON introduction course or learning a new tool? “But as an automation tool vendor, don’t you replace the need for People?” The answer is no; we require higher-skilled people to develop and use automation tools. Someone still needs to plan, develop and run the test cases, else who would we sell tools to? Fewer fingers and eyes maybe, but far more thought and Process.

That brings us to the second aspect of QA: Process. Business constantly needs to re-invent itself, and tuning business process is an important part of competing and achieving greater excellence. Yet how many software QA teams have you heard described as innovative? When did you last try an alternate, experimental process, or embrace role changes rather than object to them? Have you tried getting QA involved earlier in the development cycle, or changing the requirements for hand-off from developers? Perhaps you brought some tests (like performance or identity) into the testing cycle earlier to provide more time for code fixes. Changing process requires the right people and the right support infrastructure.

That brings us to the 3rd aspect of software QA: support infrastructure, or Tools. Just like People and Process, Tools by themselves will have little impact; they require the support of the people and the process. So often I hear of someone developing an in-house tool for testing, usually showing great skill but taking significant time. This person then leaves the company, leaving no-one who knows how to use this expensive tool. I know of 3 large enterprises that used the same person to develop such a tool; that person is no longer with any of these companies. A tool cannot take more people to maintain than manual testing would. So too must the tool support your business process: there is little point in having a great tool in the lab if the code drop is delayed and the team sits idle waiting.

The 3 aspects are not just linked, but act as multipliers with a compounded effect. For example, if your process is streamlined to cut one development cycle out of testing, it will impact both the people and the tools required. If a tool can cut 5 minutes off a test cycle, significant impact can be seen in people and process. If you do both, cutting out a development cycle and saving 5 minutes per test case, the effect is multiplied, resulting in huge impact to the entire SDLC in either more testing or less expense. Only through “dialing in” and tuning all 3 aspects can Quality Assurance Excellence be achieved.
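The multiplier effect above can be sketched with a few lines of Python. All the figures below (test-case count, minutes per case, cycles per release) are hypothetical assumptions for illustration only, not data from any real project.

```python
# Hypothetical figures, purely illustrative.
test_cases = 500        # test cases executed per development cycle
minutes_per_case = 30   # manual execution time per test case
cycles = 4              # development cycles tested per release

def qa_hours(n_cycles, cases, minutes):
    """Total QA execution hours per release."""
    return n_cycles * cases * minutes / 60

baseline     = qa_hours(cycles,     test_cases, minutes_per_case)      # 1000.0 h
process_only = qa_hours(cycles - 1, test_cases, minutes_per_case)      # 750.0 h
tool_only    = qa_hours(cycles,     test_cases, minutes_per_case - 5)  # ~833.3 h
combined     = qa_hours(cycles - 1, test_cases, minutes_per_case - 5)  # 625.0 h

# The reductions multiply rather than merely add:
# (3/4) x (25/30) = 0.625 of the baseline hours remain.
print(baseline, process_only, tool_only, combined)
```

Note how the combined tuning of process and tools cuts the baseline to 62.5%, a deeper reduction than either effort achieves on its own, which is exactly the compounding the three-aspect model describes.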

So please, take some time to consider how you can personally achieve greater excellence as a QA professional. I for one, personally and as a tool vendor, support any such activity.

Calculating QA Costs – Just Test Something Method (JTS)

Adding to the costing models in this series, Agile Anarchy Costing, Backing In Costing and Service Planning Costing, Just Test Something is one of the more common. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope is that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

The Just Test Something (JTS) method of costing is when the price for a QA project is determined by external pressure and not by formal requirements for test coverage. Where BIC relies on past precedent, JTS is usually based on time to market, cost, or some other business requirement or external force.

Now, before you grab your pitchforks and start packing the wood around the stake for my heretical costing model, let me mention that “…whatever we have time for…” was the most common response and feedback I got for the Percentage Coverage article. It was also the most common opinion among QA testers (as opposed to QA managers) on how they cost QA, with statements like “we get x weeks to test what we can”.

For example, JTS Model Inc.’s marketing department released a press release that JTS Software 1.0 would be GA in 2 weeks’ time. The final cut-off for any potential code fixes was, however, determined in a Go/No Go meeting 32 hours before GA. Development was still struggling to add the last few change requests from the user requirements team. They expected the next code drop to be ready in 3 days’ time, but delivered it 5 days later, leaving QA only 3.5 working days of testing time. Despite working extra hours, QA was still finding a significant number of quality issues at the deadline meeting. The CEO decided, however, that the GA deadline was of greater concern than any yet-undiscovered code issues. As a result, performance, load and security testing were ignored, and only basic functional tests covering some uncalculated percentage of the application were completed.

Typical of start-ups and immature QA departments, Just Test Something (JTS) is often the result of a lack of QA focus and is common in organizations that consider QA only an “un”necessary evil, often bordering on having users do final QA in production. Not to be confused with the recent trend of offering bug bounties, JTS places QA on some scale between “wish we could skip this step/expense” and “I guess we have to tick the box saying we did SOME LEVEL of QA”.

JTS takes little account of the number of services, percentage coverage or number of release drops. In fact, it usually involves very little planning and is mostly reactive, the process being “whatever we have time for, get busy”. What is to be tested, and how it will be done, is left up to a QA manager or even the individual tester to decide.

Advantages

  1. Low accountability and plausible deniability: QA can always say it was not given enough time, and usually there is a certain amount of acceptance that defects will make it into production.
  2. Costs are usually tightly controlled and known; it is only the outcome (quality) that is estimated. Seldom do issues like percentage coverage, number of services, number of test cases etc. need to be described to management.
  3. Testers’ focus and time naturally shift to frequently used parts of the application, or parts that have more defects in the code, since the testing structure is less rigid and testing is focused on highest priority, not total coverage.
  4. Flexibility for testers to test as and how they see fit, determining their own tools, process and focus, is often part of JTS and, together with low accountability, is attractive to some QA staff.
  5. A final Go/No Go meeting is usually part of the SDLC, in which more than just QA weighs in on whether “enough” testing was done. If the level of QA is too low, this meeting can provide a last-minute reprieve.

Disadvantages

  1. QA’s role is heavily diminished, lessening its credibility and ability to weigh in and ask for an extension in a Go/No Go meeting. Often little formal gating is done, and code is thrown at QA to get it off development’s plate, resulting in frequent release cycles.
  2. Lack of process can result in QA’s attention and resources not being evenly distributed, resulting in QA testing the most common parts of the code multiple times while ignoring others. The result is uneven coverage and possibly deeply embedded defects that can be missed for many releases.
  3. Certain steps, for example performance testing or security testing, are more frequently sacrificed due to the constraints. Eventually these steps fall out of the testing process entirely, as it becomes expected that these aspects will be ignored.
  4. Usually the organization’s lack of focus on QA results in little training or education spent on developing QA skills, process or tools; the focus, if anything, is on reporting progress. The result further decreases the efficiency of what little QA is being done.
  5. Poor QA rapidly leads to a poor reputation. At some point management’s focus shifts to “fixing quality”, and alternate QA strategies, like outsourcing, off-shoring and restructuring, become commonplace as attempts are made to repair previously missed defects and a “weak” QA organization.

Conclusion

In reality, any QA department needs to balance time to market and other pressures with QA coverage. As mentioned in the first of this series, these are not static models; rather, companies may use one or more of them and sit on some scale, from slightly applying a model to mostly utilizing it. QA may wish they had unlimited time and resources at their disposal to do 100% test coverage, but this is rarely the case. What defines the JTS model is that QA coverage is determined by the pressure placed on it, and not by any particular level of due diligence.

So put away your pitchfork, and add a comment below if you wish to add to or detract from this post.

Calculating QA Costs – Backing In Costing Method (BIC)

The series on costing models includes Service Planning Costing (SPC), the Agile Anarchy Method (AAM), and the Just Test Something (JTS) Method. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope is that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

Very few organizations use one model 100%. These models are neither exclusive nor on a rigid scale. Rather, most organizations will use one model as their primary model and perhaps have one or more secondary models, placing them somewhere on a scale between one or more of these models.

The Backing In Costing (BIC) model, or baseline-in model, uses past measurements to determine future costs, with very limited desire to change Process, Coverage or anything for that matter. Similar to, yet not to be confused with, regression testing: a baseline from previous year(s) is used to work backwards to cost future projects.

For example, last year BIC Model Inc. had 10 QA staff who delivered 2 large projects to the relative satisfaction of everyone. Last year’s costs were based on the previous year’s, and no one remembers how those were calculated. They established a baseline and set the expectations within their organization and to their customers. The process was settled, roles were understood and the number of production errors accepted. The entire cost allocated to QA last year was $600,000. That is $300,000 per large QA project and an average of $60,000 per QA staff member (the hourly QA cost, although rarely used, works out to $30.36).

This year BIC Model Inc. has a global executive requirement for a 10% reduction in workforce and costs. However, they have 2 similar-size projects and a 3rd project that management accepts is ABOUT 50% the size of the other two. This year QA will be fortunate to have an increased budget of $675,000, BUT “Don’t consider this a new baseline; next year we drop back to $540,000.” The amount is calculated by taking the baseline of $600,000, subtracting the 10% annual savings target to get $540,000, then adding 25% for the additional half-size project. BIC Model Inc. decides to spend this additional money by adding 2 fresh new recruits to QA at $37,000 a year each to offset the additional workload expected.
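The BIC Model Inc. arithmetic can be sketched as follows. The dollar figures come from the example above; the 1,976 working hours per year used to derive the hourly rate is my assumption, reverse-engineered from the $30.36 quoted.

```python
baseline = 600_000           # last year's total QA cost (the baseline)
savings_target = 0.10        # global 10% cost reduction requirement
projects_last_year = 2
staff_last_year = 10

# Step 1: apply the corporate savings target to the baseline.
reduced = baseline * (1 - savings_target)        # 540,000

# Step 2: add 25% for the 3rd project, accepted as ~50% the size of the others.
this_year = reduced * 1.25                       # 675,000

per_project = baseline / projects_last_year      # 300,000 per large project
per_staff = baseline / staff_last_year           # 60,000 per QA staff member
hourly = per_staff / 1976                        # ~30.36/h (hours/year assumed)

print(this_year, round(hourly, 2))
```

Note that every figure is backed into from last year’s total; nothing in the calculation reflects the actual testing effort the three projects will require, which is precisely the model’s weakness.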

The BIC model is focused on previously accepted and predictable costs and on maintaining the status quo. It is usually affected by annual cost reduction efforts, and rarely is the team as successful as in the example above in raising the budget.

It’s often characterized by management seeing little benefit in training staff or updating tools, as these bring change and possibly new costs. While vocal supporters of streamlining process, there is usually much resistance to actual change, until some accepted baseline measurement, like the usual number of production errors, fails to meet expectations. The motivation then is often to streamline process just enough to maintain the previous baseline.

Advantages

  1. Costing is based on real corporate experience and history. There are no unexpected surprises like additional corporate expense sharing (health or retirement plans, desk space leases, etc.).
  2. It’s an easy sell to executives: they expected the costs, and there is no need to explain percentage coverage calculations, number of services, test coverage, release cycles, emerging technology concerns, etc.
  3. People tend to know their roles, and the relationship between development, business and QA is usually mature.
  4. It’s very subjective regarding the size of projects. In the example above, how similar in size are these projects really? This allows for a certain amount of exaggeration.
  5. Software maintenance and regression testing costs are usually well understood.

Disadvantages

  1. Backing in to how much testing will be done is only loosely based on the amount of testing required. It’s very subjective regarding the size of projects, and time per test case developed, number of test cases, number of software releases etc. can often be ignored.
  2. The focus is on maintaining the status quo, not on improvement. There is often little progression: few changes to process, and little promotion, training, introduction of new tools or skills development.
  3. The baseline is often dated. Rather than re-calculating the baseline after each project, to better understand the time per test case developed, number of test cases, number of software releases etc., the baseline can be years old.
  4. Bad process, habits, practices, people etc. become part of the baseline to be protected.
  5. Corporate cost cutting initiatives, like the 10% example at BIC Model Inc., can eventually be the “straw that breaks the camel’s back”. On the other hand, QA is constantly living under the threat of cuts and needs to show that each year is equal to or larger than the previous. Annual negotiation becomes a constant battle for survival, balanced against expectations on the delivery of quality. Subjective numbers, not backed by detailed baselines, can become very inflated.

Conclusion

Taking a baseline at the end of each project, to understand the impacts of new process, people, tools etc. and to ensure that your calculations hold for future projects, is good practice, as is evaluating one project against another. Where the BIC model can rapidly fail is when the detail or level of the baseline is poor, or the baseline becomes outdated. The distinction is between backing into the amount of testing to be done and using these baselines to calculate forward with a more detailed costing model.

The second in this series covers the Just Test Something (JTS) Method.

Did I miss an advantage or disadvantage? Please feel free to comment below. My next costing model will be posted shortly.

Cost of Versioning and API or Service in the Data Economy

The Data Economy is booming, and much is being written about software “eating the world”. Many companies, however, have not formalized a strategy for developing and versioning their APIs. In a recent Forbes article, “Collaborate to Grow Says Deloitte Global CEO Barry Salzberg”, MIT Sloan graduates Jaime Contreras and Tal Snir are quoted as saying “the peer-to-peer exchange of goods and services – is being called the next big trend in social commerce, and represents what some analysts say is a potential $110 billion market.” Last month, InformationWeek did an entire special issue on the “Age Of The API“, the enabler of the Data Economy.

This exponential growth in APIs is, however, creating significant versioning concerns, and many organizations are beginning to reconsider their strategy for API versioning as their current strategies become unsupportable. For example, the business needs a REST version of an existing SOAP service for mobile access. Should they migrate the entire service and all the client consumers of the service to a new REST API and end-of-life the existing SOAP service? Or perhaps develop a new API and leave the SOAP service in place? Whatever the reason for the change, is the best strategy to create new, update the existing, replace the existing, or do something else? How many of these changes can they manage in a given period of time, and what are the costs?

What is the future for QA Analysts in Canada?

I wanted to generate some thought on how the QA role is changing in Canada.

The largest fixed cost associated with QA by far is labour. Let’s look at some average QA analyst salaries, using Salary.com Canadian median numbers to avoid debate.

Level 1 QA Analyst (0-2 years experience): C$53,463/year, C$26/hour
Level 2 QA Analyst (2-4 years): C$61,839/year, C$30/hour
Level 3 QA Analyst (5+ years): C$75,585/year, C$36/hour
QA Manager (8+ years): C$93,642/year, C$45/hour
QA Director (12+ years): C$107,355/year, C$52/hour

Finding the right balance and mix of staff, seniority and skill set against workload is a difficult endeavour. Do you set aside individuals for client vs. server, performance vs. functionality, regression vs. pre-production, etc.?

For simplicity, let’s say Acme Inc. has 10 QA staff, all Level 2 QA Analysts, and one QA Director. Their application has 4 releases a year. The calculation for wage expense would be 10 employees x C$61,839 = C$618,390, plus C$107,355 for the director, = C$725,745, excluding additional costs associated with an employee like office space, equipment, management overhead etc. That’s C$181,436.25 a release. It’s not surprising that management focus is often on ways to reduce this amount. Attempts at cutting the cost of labour usually come in a few forms.
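The Acme Inc. calculation above can be sketched in a few lines. The salary figures are the Salary.com medians quoted earlier in the post; Acme Inc. itself is the post’s hypothetical company.

```python
LEVEL_2_SALARY = 61_839    # C$/year, Level 2 QA Analyst (Salary.com median)
DIRECTOR_SALARY = 107_355  # C$/year, QA Director (Salary.com median)

analysts = 10
releases_per_year = 4

# Annual QA wage bill, excluding office, equipment, overhead etc.
annual_wages = analysts * LEVEL_2_SALARY + DIRECTOR_SALARY

# Wage cost attributable to each of the 4 releases.
cost_per_release = annual_wages / releases_per_year

print(annual_wages)       # 725745
print(cost_per_release)   # 181436.25
```

Even this simplified view makes it clear why labour dominates the QA cost conversation: every release carries well over C$180,000 in wages before a single tool or server is paid for.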
