Summary
While code coverage is a useful metric in evaluating the health of a code base, perfect code coverage does not mean the software does the right things from a user's point of view. A new tool from Borland aims to link requirements to test cases and to add requirements coverage as a code-quality indicator.
Useful metrics for evaluating code quality are increasingly part and parcel of a developer's life: test coverage, dependency analysis, and static analysis all provide important data about the health of a code base. Yet can you score perfectly on all those counts and still produce bad code?
Very much so, according to Rob Cheng, director of developer solutions at Borland. Artima spoke with Cheng on the eve of Borland's release of a new set of products that aim to help ensure code quality from a more holistic perspective:
The most common thing we hear from CIOs is that code can be perfect, but the application can be perfectly useless because it's not doing the right thing. It may be doing the wrong thing very well, but not doing what the business needs...
Testing doesn't help, unless you're testing for the right thing. It's not so much about tools and products that can automate testing, but about [whether] you're testing the right things. You may have some... steps in the functionality of your application that you can write into a test suite, and you can automate the execution and tracking of that test, but that may not be what the end-user or the business really wants.
Borland's new tool aims to ensure that requirements are linked to test cases, as well as to the code a developer works on. The idea is to ensure not only that developers work on the right features, but also that they are quickly apprised of requirements changes.
You, too, need to make sure that you consider not only code coverage, but also requirements coverage: How much of the validation and testing covers the things that the end-user or business values?
In capturing requirements, you need to make sure you're capturing all the steps that QA needs. Things like peak load issues, or platform requirements... Our requirements [tool] has a repository of requirements, and there are UIDs around requirements. You can connect a requirement's ID and artifacts... to a test or a test suite [to indicate] the functional requirements that test is supposed to be exercising.
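To make that linkage concrete, here is a minimal sketch in plain Java, not Borland's tooling, of one way to tag tests with requirement UIDs and report requirements coverage. The @Requirement annotation, the CheckoutTests class, and the REQ-* identifiers are hypothetical stand-ins for what a requirements repository would supply.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;
    import java.util.Arrays;
    import java.util.LinkedHashSet;
    import java.util.Set;

    // Hypothetical annotation linking a test method to one or more requirement UIDs.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Requirement {
        String[] value();
    }

    // Example test class: each test declares the requirement(s) it exercises.
    class CheckoutTests {
        @Requirement({"REQ-101"})
        public void testAddItemToCart() { /* ... */ }

        @Requirement({"REQ-102", "REQ-103"})
        public void testApplyDiscountAtPeakLoad() { /* ... */ }

        public void testInternalCacheEviction() { /* no requirement linked */ }
    }

    public class RequirementsCoverageReport {
        public static void main(String[] args) {
            // In practice the full requirement set would come from the requirements repository.
            Set<String> allRequirements = new LinkedHashSet<>(
                    Arrays.asList("REQ-101", "REQ-102", "REQ-103", "REQ-104"));

            // Collect every requirement UID referenced by at least one test method.
            Set<String> covered = new LinkedHashSet<>();
            for (Method m : CheckoutTests.class.getDeclaredMethods()) {
                Requirement r = m.getAnnotation(Requirement.class);
                if (r != null) {
                    covered.addAll(Arrays.asList(r.value()));
                }
            }

            Set<String> uncovered = new LinkedHashSet<>(allRequirements);
            uncovered.removeAll(covered);

            System.out.printf("Requirements coverage: %d of %d (%.0f%%)%n",
                    covered.size(), allRequirements.size(),
                    100.0 * covered.size() / allRequirements.size());
            System.out.println("Not yet exercised by any test: " + uncovered);
        }
    }

Run against the sample classes above, the report would show three of four requirements covered, with REQ-104 flagged as having no test at all, which is exactly the gap code coverage alone cannot reveal.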
Cheng explained that such a holistic approach to code quality is part of a set of techniques collectively termed Application Lifecycle Management, or ALM. According to Cheng, ALM consists of areas such as requirements definition and management, change management, IT management and governance, and lifecycle quality management:
ALM is about the ability to align... the business analysts and the requirements with developers and their code and QA and test cases... A big part of application lifecycle management is the ability to manage the quality across the application lifecycle...
ALM offers organizations a way to link ... from business requirements to code, [and from code] to test cases in an automated way that traces artifacts across all of those [areas]. That allows you to break down the traditional barriers between those departments, and between the folks in business who are working with customers and end-users to define requirements, and the developers who actually implement code, and QA... Having those siloed in a traditional way makes it very hard to consistently deliver applications that meet business needs.
Borland's current solution not only links requirements to tests but also ensures that those tests execute every time a developer checks code in:
Gauntlet ... is a continuous build and test automation system. Some people call this a continuous integration server. It marries testing and measurements with version control.
Every time a developer checks code into the version control repository, on the server Gauntlet kicks off some automated builds and automated tests... That includes static code analysis, compilation, packaging, building... and runtime testing as well, unit tests, and code coverage analysis... Because it does this every time a developer checks code in, the frequency of tests is much higher.
The core message around continuous integration is that you really need to build your application much earlier, and much more frequently. To work on an application for weeks or months before handing things over to QA is extremely inefficient. And it causes a problem [in] that you never know what the status of your project is. You don't know how healthy it is, you don't know what the quality and stability of the code is.
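Gauntlet's internals aren't shown here, but the check-in-driven loop Cheng describes can be sketched in a few lines: run every pipeline step after each commit and fail fast on the first error. The PostCommitPipeline class and the Ant targets below are illustrative assumptions, not Borland's actual configuration or API.

    import java.io.File;
    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    // Minimal sketch of a post-commit build-and-test pipeline: every check-in
    // triggers static analysis, compilation, packaging, unit tests, and coverage,
    // and the first failing step stops the run and reports immediately.
    public class PostCommitPipeline {
        private static final List<List<String>> STEPS = Arrays.asList(
                Arrays.asList("ant", "static-analysis"),  // e.g. a lint/audit target
                Arrays.asList("ant", "compile"),
                Arrays.asList("ant", "package"),
                Arrays.asList("ant", "unit-test"),
                Arrays.asList("ant", "coverage-report")   // code and requirements coverage
        );

        public static void main(String[] args) throws IOException, InterruptedException {
            File workspace = new File(args.length > 0 ? args[0] : ".");
            for (List<String> step : STEPS) {
                System.out.println(">>> " + String.join(" ", step));
                Process p = new ProcessBuilder(step)
                        .directory(workspace)
                        .inheritIO()   // stream build output to the server log
                        .start();
                if (p.waitFor() != 0) {
                    // Fail fast: a broken check-in is flagged minutes after it lands.
                    System.err.println("Build failed at step: " + String.join(" ", step));
                    System.exit(1);
                }
            }
            System.out.println("Check-in verified: all pipeline steps passed.");
        }
    }

A version-control hook (or a server polling the repository) would invoke this runner with the freshly checked-out workspace, which is what raises the test frequency from "once before handing off to QA" to "on every commit."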
While many open-source and commercial tools exist to measure code coverage, what tools and techniques do you use to evaluate the requirements coverage of your projects? And to what degree do you think current requirements analysis tools help ensure that you're building software end-users actually find useful?