Summary
A GUI program can rapidly become an unholy mess of object interactions that is difficult to get working properly, and even more difficult to test. But with the right architecture, it should be possible to solve those problems.
In Designing Graphic Interfaces, I described RADICAL's intriguing potential for interactively designing a GUI interface. I ended that article by listing three concepts that promise to improve the design of the program that uses the GUI: letting the model drive the GUI, separating component interactions into a layer of their own, and having objects display themselves through "builder" objects.
It is, of course, very difficult to test a GUI program, especially in the kind of automated way that JUnit is all about. Doing so normally takes a special testing harness -- one that's generally difficult to use, and either expensive or hard to understand, or both.
But if the model drives the GUI, then a "false front" can be created that has the same API as the GUI, but which simply stores the data values it receives so they can be inspected. The test program can then poke the model and examine the results to make sure they're correct.
The benefit, of course, is that the model can be tested normally (and completely), so that any bugs found in the GUI are guaranteed to be shallow (existing only in the GUI code, rather than in the model).
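To make that concrete, here is a minimal sketch in Java. The names (Player, PlayerDisplay, RecordingDisplay) are hypothetical, invented for the example; the test uses JUnit's TestCase style.

    // A minimal sketch, not code from the article. All names here are
    // hypothetical, invented for illustration.
    import junit.framework.TestCase;

    interface PlayerDisplay {
        void setTrackName(String name);
        void setElapsedSeconds(int seconds);
    }

    class Player {
        private final PlayerDisplay display;

        Player(PlayerDisplay display) { this.display = display; }

        void play(String trackName) {
            // The model drives the GUI; it never reads values back from it.
            display.setTrackName(trackName);
            display.setElapsedSeconds(0);
        }
    }

    // The "false front": same API as the real GUI, but it simply stores
    // the values it receives so the test can inspect them.
    class RecordingDisplay implements PlayerDisplay {
        String trackName;
        int elapsedSeconds = -1;

        public void setTrackName(String name)      { trackName = name; }
        public void setElapsedSeconds(int seconds) { elapsedSeconds = seconds; }
    }

    // The test pokes the model and examines the recorded results.
    class PlayerTest extends TestCase {
        public void testPlayUpdatesDisplay() {
            RecordingDisplay front = new RecordingDisplay();
            new Player(front).play("Track 1");
            assertEquals("Track 1", front.trackName);
            assertEquals(0, front.elapsedSeconds);
        }
    }

Note that the test never touches a real Swing component, so it runs headlessly and fast -- no special harness required.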
The second concept is to put all GUI interactions into a separate class (or possibly a combination of them), which encapsulates all of the interactions between components. For example, pushing the play button may cause that button to change its appearance so that it looks depressed, change the appearance of the stop button so that it's raised, and enable some animated image that blinks or waves or whatever.
The underlying model doesn't need to know what kinds of GUI components exist, or how they interact. But adding those kinds of interactions to the GUI layer itself has two problems: the GUI code is more complex, and there's no good way to test it.
Putting the interactions in a separate class creates a GUI interaction layer that can be tested. An object that implements the GUI's API can then be used to verify that the appropriate interactions occur. After that extraction, what remains in the GUI class is simply components and their set() methods -- in other words, something simple enough that testing isn't really required. (About the only thing that could go wrong would be failing to poke the model when a GUI event transpires, and that kind of problem is easily observed and fixed.)
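Here's a sketch of what that extraction might look like, using the play/stop example above. Again, every name is invented for illustration; the point is that the interaction class drives component states only through an interface, so a test can substitute an object that records the states it receives.

    // A sketch of an interaction layer, with hypothetical names. The GUI
    // class reduces to components and set() methods; the interactions
    // live here, where they can be tested.
    interface TransportControls {
        void setPlayDepressed(boolean depressed);
        void setStopDepressed(boolean depressed);
        void setAnimationEnabled(boolean enabled);
    }

    class TransportInteractions {
        private final TransportControls controls;

        TransportInteractions(TransportControls controls) {
            this.controls = controls;
        }

        // Pushing play depresses the play button, raises the stop
        // button, and enables the animated image.
        void playPushed() {
            controls.setPlayDepressed(true);
            controls.setStopDepressed(false);
            controls.setAnimationEnabled(true);
        }

        void stopPushed() {
            controls.setPlayDepressed(false);
            controls.setStopDepressed(true);
            controls.setAnimationEnabled(false);
        }
    }

    // For testing: a fake with the GUI's API that records the states it
    // receives, so a test can verify the interactions occurred.
    class RecordingControls implements TransportControls {
        boolean playDepressed, stopDepressed, animationEnabled;

        public void setPlayDepressed(boolean d)    { playDepressed = d; }
        public void setStopDepressed(boolean d)    { stopDepressed = d; }
        public void setAnimationEnabled(boolean e) { animationEnabled = e; }
    }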
For testing, it seems clear that the GUI needs two interfaces, one that the underlying model uses to poke data into the GUI, and one that the interaction model uses to drive component states. (There may actually be a set of interfaces for each purpose, but in theory those are the two main classifications.) The test program can then create objects that implement the interfaces to check the model or the interactions.
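To make the split concrete, here's a sketch (again, with invented names) in which the same GUI class implements both interfaces -- one for data, poked by the underlying model, and one for component states, driven by the interaction model:

    // Hypothetical sketch of the two-interface split. A test can hand
    // the model an implementation of PlayerData alone, or hand the
    // interaction model an implementation of PlayerStates alone.
    interface PlayerData {                 // used by the underlying model
        void setTrackName(String name);
    }

    interface PlayerStates {               // used by the interaction model
        void setPlayDepressed(boolean depressed);
    }

    class PlayerFrame extends javax.swing.JFrame
            implements PlayerData, PlayerStates {
        private final javax.swing.JLabel trackLabel =
                new javax.swing.JLabel();
        private final javax.swing.JToggleButton playButton =
                new javax.swing.JToggleButton("Play");

        public void setTrackName(String name)       { trackLabel.setText(name); }
        public void setPlayDepressed(boolean state) { playButton.setSelected(state); }
    }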
What isn't quite clear at this juncture is how the interaction model works with the underlying model, when both concepts are implemented simultaneously. The three possible arrangements are 1) interaction model delegates to underlying model, 2) underlying model delegates to interaction model, 3) both models are independent, and the GUI code pokes both models when an event occurs.
Of the three, option #1 seems the most intuitively appealing. There are still two choices, though. Either the underlying model has to be configured with the GUI object so that it can drive the GUI, or else the interaction model has to delegate both ways -- to the model when GUI events happen, and to the GUI when the model makes changes.
I'm of two minds here. The second option is far more complex. On the other hand, it means that the interaction model can respond to the underlying model, as well as to the GUI. Whether that's a useful benefit or just additional complexity, I'm not sure. (I look forward to hearing from others about their experiences.)
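For what it's worth, here's a sketch of the two-way delegation variant (the more complex wiring), with invented names. The interaction model forwards GUI events to the underlying model and forwards model changes back to the GUI, so it sits in the middle and can react to either side.

    // Hypothetical sketch of option #1 with two-way delegation.
    class PlayerModel {
        void play() { /* start playback */ }
    }

    interface PlayerView {
        void setTrackName(String name);
        void setPlayDepressed(boolean depressed);
        void setStopDepressed(boolean depressed);
    }

    class PlayerInteractions {
        private final PlayerModel model;
        private final PlayerView  view;

        PlayerInteractions(PlayerModel model, PlayerView view) {
            this.model = model;
            this.view = view;
        }

        // GUI event: delegate to the model, then adjust component states.
        void playPushed() {
            model.play();
            view.setPlayDepressed(true);
            view.setStopDepressed(false);
        }

        // Model change: delegate to the GUI. Because the interaction
        // model sits in the middle, it can respond to the model as well.
        void trackChanged(String name) {
            view.setTrackName(name);
        }
    }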
The third concept is a methodology in which every object in the system knows how to display itself. Of course, putting actual GUI code in the object generally isn't a great idea, so the object is instead passed a "builder" object, which has an interface the object uses to pass its data to the GUI component.
That methodology makes it possible to make radical changes to an object's implementation (for example, adding the notion of "currency" to a money class) while restricting the changes to the class and its builder -- because everything about "money", including how it's created and displayed, is encapsulated in one place.
Testability improves as well, because the builders can be objects that the test program inspects to make sure the results are correct. Different sorts of inputs and outputs are more easily accommodated for the same reason -- plugging in a different builder with the same API lets you change from graphic output to HTML or XML, or to braille, or to programmatic input.
So it's clearly a great concept that deserves wider application. (The key appears to be the elimination of get() methods from the design; they indicate the kind of data coupling that produces unwanted dependencies.)
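Here's a rough sketch of the Money/MoneyBuilder idea. MoneyBuilder is the name suggested above, but the methods on it -- and the HTML and recording builders -- are my assumptions, invented for illustration.

    // Hypothetical sketch of a self-displaying Money class. Note the
    // absence of get() methods: Money pushes its data into a builder.
    interface MoneyBuilder {
        void setAmount(String formattedAmount);
        void setCurrency(String currencyCode);
    }

    class Money {
        private final long   cents;
        private final String currency;

        Money(long cents, String currency) {
            this.cents    = cents;
            this.currency = currency;
        }

        // The object displays itself through whatever builder it's given.
        void render(MoneyBuilder builder) {
            // Format cents as dollars, padding to two digits.
            builder.setAmount(cents / 100 + "."
                    + (cents % 100 < 10 ? "0" : "") + cents % 100);
            builder.setCurrency(currency);
        }
    }

    // Swapping builders changes the output medium without touching Money.
    class HtmlMoneyBuilder implements MoneyBuilder {
        private final StringBuffer html = new StringBuffer();
        public void setAmount(String a)   { html.append("<b>").append(a).append("</b>"); }
        public void setCurrency(String c) { html.append(" ").append(c); }
        public String toString()          { return html.toString(); }
    }

    // A test builder simply records what it receives for inspection.
    class RecordingMoneyBuilder implements MoneyBuilder {
        String amount, currency;
        public void setAmount(String a)   { amount = a; }
        public void setCurrency(String c) { currency = c; }
    }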
But what's not totally clear to me is how that concept dovetails with the notion of a GUI layer that you can develop interactively, and with an interaction layer that manages component states. It may be that the notions are fundamentally at odds. Or maybe there is a way they can work together...
I suspect that the answer is to balkanize the GUI's API into a collection of interfaces, one per object. So if there is a Money class, there is a MoneyBuilder interface, for example. Then, when you create the GUI class with a program like RADICAL, you subclass it and implement all of the object-API interfaces in the subclass -- most likely by defining some mega-interface that combines all of the object builder interfaces into a single monolith, and then implementing that.
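A sketch of what that might look like, with invented names (GeneratedGui stands in for the class a tool like RADICAL would produce):

    // Hypothetical sketch of the balkanized API. Each object gets its
    // own builder interface; a mega-interface combines them; the
    // generated GUI class is subclassed to implement the monolith.
    interface MoneyBuilder    { void setAmount(String amount); }
    interface CustomerBuilder { void setName(String name); }

    // The mega-interface: all the object builder APIs in one monolith.
    interface AccountView extends MoneyBuilder, CustomerBuilder {
    }

    // Stand-in for the class RADICAL would generate: components plus
    // set() methods, nothing more.
    class GeneratedGui {
        protected final javax.swing.JTextField balanceField =
                new javax.swing.JTextField();
        protected final javax.swing.JTextField nameField =
                new javax.swing.JTextField();
    }

    // The subclass routes each builder call to a generated component.
    class AccountGui extends GeneratedGui implements AccountView {
        public void setAmount(String amount) { balanceField.setText(amount); }
        public void setName(String name)     { nameField.setText(name); }
    }

The mega-interface is an ugly monolith, but it stays hidden in the GUI subclass; each domain object sees only its own small builder interface.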
The program that uses this architecture will be testable, flexible, and maintainable -- or at least, that's the theory. I'm looking forward to seeing what happens in practice, and to hearing from others who can report the results of their experiments.