Horror Stories from the Test Lab

By Tamara Wilhite posted 01-01-2019 01:06:23 PM

While I have extensive experience in engineering software testing, my brother’s experience is in quality control and testing of hardware. When we were comparing testing horror stories, he had more than a few I thought worth sharing.

In quality and in testing, you can never be perfect. Something always slips through at some point, though the goal is to design systems that keep that to a minimum. The decision is whether additional, more thorough tests are worth their cost compared to the cost of the defects that would otherwise slip through.

What irked him more than anything else was when the boss said, "Do a minimal test. If something goes wrong, it's on me."

If your boss ever says that:
1) Get it in writing.
2) Get it in triplicate.
3) Append it to the test cycle results.

The threat of doing this typically gets the manager to approve running the appropriate tests: the thorough tests customers expect or mandate. When he was doing product testing, my brother had several advantages over a manager who wasn't as familiar with the hardware. He wrote the test cases and the user manual, so he knew them inside and out. Having worked with the particular products for many years, he could do most of the tests in two business days, sometimes faster, whereas less experienced engineers took three or more days to complete the same tests.

The issue in too many facilities is rushed testing that isn't as thorough as it needs to be. The problem for many organizations is how few people appreciate how long thorough tests actually take, yet managers often expect the work to be done far faster. If my brother cited a time frame longer than managers wanted for getting the product out, he was sometimes told to "just do the bare minimum." In my own career, I was asked only once to do less than the minimum acceptable test, and we ended up doing the minimum because the manager wouldn't agree to sign off on a statement saying "I asked them to do less than the minimum acceptable."

The classic hardware testing horror story starts when the code to be tested comes in at lunchtime. Unfortunately, due to customer requirements and delays in development, the boss says it has to ship that day. That means you have only a few hours (at best) to test it before it goes to production release. These are the situations where the manager thinks it's no big deal to order minimal tests, or where the product has to ship or the company doesn't get paid.

For the products he worked with, he could do the functional tests that covered 70-80% of the ways normal customers would use them. That didn't leave time to test the many less commonly used features that are still expected to work. Once the new software version becomes the released version, other customers in the field naturally update to it. In effect, the new version then gets its widespread, thorough functional testing from customers in the field.

Sometimes you get lucky, and nothing happens. The product works as expected, and no one finds anything wrong. If there is a bug, it's sometimes in such an obscure feature combination that only one customer ever finds it. The issue here is how frustrating it is for technical support to track down root causes that could have been identified, if not fixed, had enough time been dedicated to testing.

More often than not, though, you find that several key features weren't working in the new release, and customers are not happy. By that point, it has been a few weeks or a few months. The boss' promise of "It's on me" has been long forgotten. Sales and service are asking how this happened. Don't we test the product? Didn't you test the product?

You can try to point back to what you were told. That's assuming you remember the boss' promise and that they admit they told you to do the minimal test. Of course, the boss may just shrug their shoulders. Engineering has to fix it anyway. Unfortunately, orders to cut testing short make it look like quality didn't do its job. This undermines the company's reputation for quality as well as that of the quality staff.

My personal horror stories included having time to test the software version thoroughly but not enough time to test the installation package before it was rolled out, and a lack of coordination in solving interface problems. The ability to say it was another team's fault doesn't help the customers feel better, much less solve their problems. And if we'd had enough time for thorough testing and for fixing the problems we identified, the finger-pointing wouldn't have been necessary. I can only remember one occasion where a manager said, "Just test the happy path, the quick little workflow test." My brother, on the other hand, had been told to do minimal testing often enough that he'd actually asked for the aforementioned signatures to confirm that yes, that is what the boss ordered.

In the end, we both agreed that testing tends to be given insufficient time and resources, whether you're dealing with hardware or software, because management doesn't recognize how critical it is to quality.
