This post originated from an RSS feed registered with Agile Buzz
by Steven E. Newton.
Original Post: Tools for Testable Software Development
Feed Title: Crater Moon Buzz
Feed URL: http://www.cmdev.com/buzz/blosxom.cgi?flav=rss
Feed Description: Views and experiences from the software world.
Software needs testability built in. Automated testing is critically important for effective software development, but testing through the user interface is problematic at best; successfully automating that kind of testing is full of difficulties. Code can instead have testability hooks built in to help find errors using automated methods. This project explores ways to design and implement testability in a broadly applicable way, for any software. A primary goal is to explore a test support library that can be incorporated into any application.
Background
In discussing how to manage and administer deployed production software, Schadler argues that an organization needs some "levers and knobs" to turn. In other words, "Put a power meter and steering wheel on every Web service". It should be possible to ask, "Who's using the service? How often are they requesting it? How many requests succeed? How many fail?". To make this possible, he lists three needs [Schadler03]:
visibility
accountability
hands-on control
In testing software, the development organization needs similar knobs, levers, and meters on the software to ensure that it functions as desired.
The Test-lead Sink
We take as metaphors two ideas from the hardware world. First, build in test leads or test points in a manner similar to those in chips and circuit boards. Second, provide a sink into which test results can be sent. The sink would be something like a monostate object -- one which can receive and act on inputs but which does not itself change state. To get different behaviors, multiple sinks can be attached to, or listen to, the test result outputs.
The input to the sink would look something like this:
static Map (or Set) of key-value pairs
each value must have some kind of TypeAdapter
MonitoredEvent
    key
    value or typeAdaptedValue
Sinks would be Testable-event observers, e.g. have a method something like
void postMonitoredEvent(MonitoredEvent evt) {
    // record, forward, or display the event
}
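The sink idea above can be sketched in Java. The names MonitoredEvent and postMonitoredEvent come from the text; the TestLeadSink interface and the EventBus dispatcher are illustrative names introduced here, not part of the original design.

```java
import java.util.ArrayList;
import java.util.List;

// A monitored event: a key naming what is measured, plus a value.
class MonitoredEvent {
    final String key;
    final Object value;
    MonitoredEvent(String key, Object value) {
        this.key = key;
        this.value = value;
    }
}

// A sink is a testable-event observer, as described in the text.
interface TestLeadSink {
    void postMonitoredEvent(MonitoredEvent evt);
}

// Monostate-style dispatcher: all state is static, so every part of the
// application posts to the same set of attached sinks, and multiple
// sinks can listen to the same stream of test results.
class EventBus {
    private static final List<TestLeadSink> sinks = new ArrayList<>();
    static void attach(TestLeadSink sink) { sinks.add(sink); }
    static void post(MonitoredEvent evt) {
        for (TestLeadSink s : sinks) s.postMonitoredEvent(evt);
    }
}
```

Because TestLeadSink has a single method, a test can attach a lambda as a throwaway sink.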
Things to send to test lead sink:
object creation events (only for key resource allocation classes)
call to monitored method
semantically significant failures
significant state change (of a class or the system)
Many of the events will be recorded in an accumulator, and min/max extremes may be recorded. For example, as an important dynamic collection changes, we may want to know the number of elements and the high/low water marks.
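Such an accumulator might look like the following sketch. The class and method names (WaterMarkAccumulator, record) are illustrative; the text specifies only the behavior of tracking a current value plus its high and low water marks.

```java
// Accumulates samples of a monitored quantity (e.g. the size of a
// dynamic collection) and tracks the extremes seen so far.
class WaterMarkAccumulator {
    private long current;
    private Long high;  // null until the first sample arrives
    private Long low;

    void record(long value) {
        current = value;
        if (high == null || value > high) high = value;
        if (low == null || value < low) low = value;
    }

    long current() { return current; }
    long highWaterMark() { return high; }
    long lowWaterMark() { return low; }
}
```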
Types of monitored values
initialization
configuration values
shutdown
outputs
side effects
internal states
probes -- an assertion that, when it fails, generates an error event that gets logged
Performance
# of operations over time
Resource Usage
high and low water marks
audit
transaction completion
aggregates (what do I mean by this?)
suspect events
errors
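The "probes" entry in the list above can be sketched as follows: an assertion that, when it fails, generates an error event that gets logged rather than halting the program. The Probe class and errorLog names are illustrative, introduced here for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// A probe is a non-fatal assertion: a failed condition is recorded as
// an error event instead of aborting execution.
class Probe {
    static final List<String> errorLog = new ArrayList<>();

    static void check(String description, BooleanSupplier condition) {
        if (!condition.getAsBoolean()) {
            errorLog.add("PROBE FAILED: " + description);
        }
    }
}
```

In a full implementation the failure would be posted to the test-lead sink rather than a local list; the list stands in for the sink here to keep the sketch self-contained.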
This would be a dynamic and real-time display of the sort of information that can be generated by log analysis.
The test lead sink should be switchable at run-time. The whole facility can be turned on and off, and it should have "quiet", "normal" and "verbose" modes.
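A minimal sketch of such a run-time switch, with the "quiet", "normal" and "verbose" modes the text calls for, might look like this. The enum and class names are illustrative assumptions.

```java
// Verbosity levels for the test-lead facility; OFF disables it entirely.
enum Verbosity { OFF, QUIET, NORMAL, VERBOSE }

// Run-time switch: the facility can be turned on and off and moved
// between modes without restarting the application.
class TestLeadSwitch {
    private static Verbosity level = Verbosity.OFF;

    static void setLevel(Verbosity v) { level = v; }

    // An event tagged with a required verbosity is reported only when
    // the facility is currently switched at least that high.
    static boolean shouldReport(Verbosity required) {
        return level != Verbosity.OFF
            && required.ordinal() <= level.ordinal();
    }
}
```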
Generating a reasonable, understandable, clear, and simple display of the collected test information is important and potentially difficult.
Testers will want to be able to poke state changes into the objects under test. One way to enable this would be to build mock objects and push them into the running system; we'd like to be able to dynamically swap them in for testing purposes. There is a certain overlap between building in this kind of testability and building in monitoring for production operations. Having something that people can see serves both purposes and lowers the barrier for doing either of them.
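One common way to make such swapping possible is a settable service reference that production code reads from and tests overwrite. The Clock example and the ClockHolder name are illustrative; the text asks only for some mechanism to push mocks in dynamically.

```java
// A small service the application depends on.
interface Clock {
    long now();
}

// The production implementation.
class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// Monostate holder the application reads its clock from; a test can
// swap a mock in here without touching production call sites.
class ClockHolder {
    private static Clock clock = new SystemClock();

    static void swapIn(Clock replacement) { clock = replacement; }
    static long now() { return clock.now(); }
}
```

Because Clock has a single method, a test can swap in a lambda as the mock.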