A few years ago I wrote a white paper on testing Commercial Off-The-Shelf (COTS) products. As my last couple of projects have related to this I thought it would be a good idea to share it again here.
I haven’t made any changes, although I must admit my opinion has changed somewhat in some areas.
What are your experiences working with COTS products? I’d be interested to hear your thoughts, so please feel free to get in touch…
A common perception regarding COTS software is that minimal or no testing is required as the product will work “out of the box”.
Companies are discovering, however, that rather than being reduced, time for test-related activities more often shifts away from traditional functional testing towards other activities such as compatibility and integration testing.
The objective of this article is to highlight several of the questions companies should ask when implementing a COTS product, and to offer suggestions to help avoid the potential pitfalls.
How can I test if I can’t access the code?
The customer is unlikely to have access to the source code for a COTS product. Code-based (white-box) testing will almost certainly be impractical, as there is no way of knowing what code coverage can be achieved by writing any number of test scenarios, so a black-box approach is required.
Even if you have detailed functional requirements you are attempting to fulfil, it is highly improbable that a COTS product will have been developed with those in mind. User guides etc. can provide a good steer, but a logical approach to validate that the product is fit for purpose would therefore be to design tests based on the functional and/or business processes that will be followed using the product.
It is those functional/business processes that essentially become the requirement.
Will the product be compatible with my specific configuration?
Testing completed by the software vendor will be in the context of their own test environments and will never be exhaustive when it comes to the combinations of operating systems, web browsers and peripherals on the various hardware configurations available. For that reason the ownership of validating the software’s compatibility within the context of the customer’s own environment and network configurations must rest with the customer.
The size of the risk will naturally depend on the product itself and the complexity and diversity of the environment in which it will be deployed, but as a minimum the key features or processes should be tested against that configuration to help minimise the risk of deployments failing on specific configurations.
Compatibility with any future releases of the product should also be considered, as well as the impact of potentially having different versions of the same product in use. The creation of an Upgrade Plan could assist with this.
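One lightweight way to gauge the size of the compatibility risk is to enumerate the full matrix of configurations and then prioritise the ones actually deployed in the customer's estate. The sketch below uses Python's `itertools.product`; the browser and operating-system values, and the deployed set, are illustrative assumptions only.

```python
import itertools

# Illustrative configuration dimensions -- substitute the customer's own.
browsers = ["Chrome", "Firefox", "Edge"]
operating_systems = ["Windows 11", "macOS 14", "Ubuntu 22.04"]

# Full matrix: every browser/OS combination the product might meet.
matrix = list(itertools.product(browsers, operating_systems))

# Hypothetical set of configurations actually deployed in the estate;
# these receive the minimum "key features" test pass first.
deployed = {("Chrome", "Windows 11"), ("Edge", "Windows 11")}
priority = [combo for combo in matrix if combo in deployed]

print(f"{len(matrix)} combinations in total, {len(priority)} prioritised")
```

Even a small matrix grows quickly once hardware and peripheral variations are added, which is why ownership of this testing has to sit with the customer, who knows which cells of the matrix actually matter.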
Will the product integrate with my existing solution?
Integrating with the customer’s existing suite of applications can be a challenge and is an obvious area of risk when implementing COTS products.
Integration testing can help to verify that data etc. can be passed between multiple applications and components as expected. The addition of interoperability testing can assist in verifying that the sending and receiving applications/components process the data via those interfaces in the correct manner.
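The distinction can be sketched in a few lines: integration testing confirms that data crosses the interface and parses, while interoperability testing confirms that both sides interpret it the same way. The field names and the pence-to-pounds conversion below are purely illustrative assumptions, not a real product's interface.

```python
import json

def send_invoice(amount_pence):
    # Hypothetical COTS export: amounts serialised in pence.
    return json.dumps({"amount": amount_pence, "unit": "pence"})

def receive_invoice(payload):
    # Hypothetical downstream system: works in pounds, so it must
    # convert when the sender declares pence.
    data = json.loads(payload)
    if data["unit"] == "pence":
        return data["amount"] / 100
    return data["amount"]

payload = send_invoice(1250)

# Integration check: the message arrives and parses as valid JSON.
assert json.loads(payload)["amount"] == 1250

# Interoperability check: both sides agree the invoice means 12.50 pounds.
assert receive_invoice(payload) == 12.50
```

A payload can pass the first check and fail the second, which is exactly the class of defect interoperability testing is there to catch.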
What known issues/defects does the product have?
All software contains defects. Quite often, vendors include details on known defects in release notes and on their website, along with a target date or release version for the defect to be resolved.
The severity of a defect will vary depending on where in the product it sits and how important that particular piece of functionality is to the individual customer. It may not be possible to ascertain from the information provided by the vendor whether a known defect affects whether or not the product is fit for purpose for a given customer.
Completing a series of tests based on the functional and/or business processes that will be followed is a good way to reduce that risk.
What testing has already been carried out?
Getting an understanding of the level of testing conducted on a COTS product can be incredibly difficult. Vendors may provide a list of known defects, but that reveals nothing about the development methodology used, whether any test tools were used, how defects were tracked, the types and phases of testing performed, how peer reviews were used, or how experienced the team that developed and tested the product is.
In other words, the only steer a customer really has that the product is of sufficient quality to be released is implied by the fact the vendor has released it.
This is an “agile” project, why do I still need to test?
It is a common misconception that Agile projects do not require a rigorous approach to testing. Using a test method with clearly defined metrics and analytics helps provide traceability of requirements, which in turn makes it possible to demonstrate that the delivered product is fit for purpose.
It is also a common misconception that Agile projects do not require any documentation. On the contrary, it is perhaps more important for documentation to be produced on an Agile project; the key is understanding what the documentation is required for and only producing what is required.
With regards to implementing a COTS product on an Agile project, it is highly likely that the same questions as those posed above regarding compatibility, integration etc. exist and as long as they do, testing will always need to be considered.
Summary
Testing COTS products, and the challenges associated with doing so, is likely to become a bigger priority for organisations in the future as they rely more on vendor-developed solutions to try to meet business needs quickly and cost-effectively.
However, it will always be the customer’s responsibility to ensure that the product is fit for purpose according to their criteria and the specific environment(s) in which it will be deployed.
For that reason testing should always be considered, with the validation of a COTS product requiring a view of risk centred on functional and business processes rather than more traditional requirements-based testing.