Summary
A GUI program can rapidly become an unholy mess of
object interactions that is difficult to get working
properly, and even more difficult to test. But with
the right architecture, it should be possible to
solve those problems.
In Designing Graphic Interfaces, I described RADICAL's intriguing potential
for interactively designing a GUI. I ended that article by listing three
concepts that promise to improve the design of the program that uses the GUI:
GUI classes poke the model, but never query it
GUI interactions are managed in a separate layer
GUI builders are passed to application objects
In this post, I'll examine each of those concepts in detail,
and see how they can be combined using a good collection of APIs.
GUI classes poke the model, but never query it
This concept comes from the JUnit mailing list. It's an
inversion of the standard model/view paradigm that makes
unit testing more manageable. The idea is that when a
GUI event happens, it invokes a set() method on the model,
but it never invokes a get() method to retrieve data.
Instead, the GUI defines set() methods that the model uses
to drive the GUI.
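To make that concrete, here is a minimal sketch in Java (all names here are hypothetical, invented for illustration): the GUI pokes the model with set() calls, and the model drives the GUI through a view interface, so no get() calls cross the boundary in either direction.

```java
// Hypothetical names, for illustration only.
// The interface the model uses to drive the GUI; the GUI implements it.
interface VolumeView {
    void setVolume(int level);   // the model pushes new state here
}

// The model: accepts pokes from the GUI, never answers queries.
class VolumeModel {
    private VolumeView view;
    private int level;

    void setView(VolumeView view) { this.view = view; }

    // Called by the GUI when the user moves the volume slider.
    void setVolume(int level) {
        this.level = Math.max(0, Math.min(100, level));
        view.setVolume(this.level);  // drive the GUI; no get() required
    }
}
```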
It is of course very difficult to test a GUI program,
especially in the kind of automated way that JUnit is
all about. It takes a special testing harness, and those
tend to be difficult to use, expensive, hard to understand,
or all three.
But if the model drives the GUI, then a "false front"
can be created that has the same API as the GUI, but
which simply stores the data values it receives so they
can be inspected. The test program can then poke the
model and examine the results to make sure they're correct.
The benefit, of course, is that the model can be tested
normally (and completely), so that any bugs found in the
GUI are guaranteed to be shallow (existing only in the
GUI code, rather than in the model).
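For instance, a false front for the sketch above might be nothing more than a recorder, with a JUnit 3-style test poking the model and inspecting the result (again, all names hypothetical):

```java
import junit.framework.TestCase;

// A "false front": same API as the real GUI, but it simply records
// whatever the model pokes into it.
class FakeVolumeView implements VolumeView {
    int lastVolume = -1;
    public void setVolume(int level) { lastVolume = level; }
}

public class VolumeModelTest extends TestCase {
    public void testVolumeIsClamped() {
        FakeVolumeView fake = new FakeVolumeView();
        VolumeModel model = new VolumeModel();
        model.setView(fake);

        model.setVolume(150);                // poke the model...
        assertEquals(100, fake.lastVolume);  // ...and inspect the result
    }
}
```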
GUI interactions are managed in a separate layer
This is another interesting concept that came from the
JUnit mailing list. I'm not quite sure how to combine it
with the previous concept, but it makes sense, so it's
an area I want to explore.
The idea is to put all GUI interactions into a separate
class (or possibly a combination of them), which encapsulates
all of the interactions between components. For example,
pushing the play button may cause the button to change
its appearance so that it's depressed, change the appearance
of the stop button so it's raised, and enable some animated
image that blinks or waves or whatever.
The underlying model doesn't need to know what kinds of GUI
components exist, or how they interact. But adding those kinds
of interactions to the GUI layer itself has two problems:
the GUI code is more complex, and there's no good way to test
it.
Putting the interactions in a separate class creates a GUI
interaction layer that can be tested. An object that has the
GUI program's API can then be used to verify that the appropriate
interactions occur. After that extraction, what remains in the
GUI class is simply components and their set() methods -- in
other words, something simple enough that testing isn't
really required. (About the only thing that could go wrong
would be failing to poke the model when a GUI event transpires,
and that kind of problem is easily observed and fixed.)
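A rough sketch of what such an interaction class might look like for the play/stop example (the interface and its methods are my invention):

```java
// Hypothetical interface the GUI exposes for driving component state.
interface PlayerControls {
    void setPlayDepressed(boolean depressed);
    void setStopDepressed(boolean depressed);
    void setAnimationEnabled(boolean enabled);
}

// The interaction layer: encapsulates how components affect each other.
class PlayerInteractions {
    private final PlayerControls controls;

    PlayerInteractions(PlayerControls controls) { this.controls = controls; }

    // Called by the GUI when the play button is pushed.
    void playPushed() {
        controls.setPlayDepressed(true);
        controls.setStopDepressed(false);
        controls.setAnimationEnabled(true);
    }

    // Called by the GUI when the stop button is pushed.
    void stopPushed() {
        controls.setPlayDepressed(false);
        controls.setStopDepressed(true);
        controls.setAnimationEnabled(false);
    }
}
```

A fake PlayerControls, built the same way as the false front above, is then enough to verify the interaction rules.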
For testing, it seems clear that the GUI needs two interfaces,
one that the underlying model uses to poke data into the GUI,
and one that the interaction model uses to drive component
states. (There may actually be a set of interfaces for each
purpose, but in theory those are the two main classifications.)
The test program can then create objects that implement the
interfaces to check the model or the interactions.
What isn't quite clear at this juncture is how the interaction
model works with the underlying model, when both concepts are
implemented simultaneously. The three possible arrangements
are 1) interaction model delegates to underlying model,
2) underlying model delegates to interaction model, 3) both
models are independent, and the GUI code pokes both models
when an event occurs.
Of the three, option #1 seems the most intuitively appealing.
There are still two choices, though. Either the underlying
model has to be configured with the GUI object so that it can
drive the GUI, or else the interaction model has to delegate
both ways -- to the model when GUI events happen, and to the
GUI when the model makes changes.
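As a sketch of that second choice, the interaction layer might implement a listener interface so it can delegate in both directions (all names hypothetical; PlayerControls is the interface from the earlier sketch):

```java
// Hypothetical callback interface the model uses to announce changes.
interface PlayerModelListener {
    void playbackEnded();
}

// Minimal stand-in for the underlying model.
class PlayerModel {
    private PlayerModelListener listener;
    void setListener(PlayerModelListener listener) { this.listener = listener; }
    void play() { /* ...start playback... */ }
    void stop() { if (listener != null) listener.playbackEnded(); }
}

// The interaction layer delegates both ways: to the model when GUI
// events happen, and to the GUI when the model reports changes.
class PlayerMediator implements PlayerModelListener {
    private final PlayerControls controls;
    private final PlayerModel model;

    PlayerMediator(PlayerControls controls, PlayerModel model) {
        this.controls = controls;
        this.model = model;
        model.setListener(this);   // subscribe to model changes
    }

    // GUI event: apply the interaction rules, then poke the model.
    void playPushed() {
        controls.setPlayDepressed(true);
        controls.setStopDepressed(false);
        model.play();
    }

    // Model change: drive the GUI's component state.
    public void playbackEnded() {
        controls.setPlayDepressed(false);
        controls.setStopDepressed(true);
    }
}
```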
I'm of two minds here. The second option is far more complex.
On the other hand, it means that the interaction model can
respond to the underlying model, as well as the GUI. Whether
that's a useful benefit, or additional complexity without
much use, I'm not sure. (I look forward to hearing from
others about their experiences.)
GUI builders are passed to application objects
Finally, there is the notion of using GUI "builders". This
notion comes to me by way of Allen Holub, who creates and
advocates extremely "pure" object oriented designs.
With the methodology he advocates, every object in the system
knows how to display itself. Of course, putting actual GUI
code in the object generally isn't a great idea, so the
object is instead passed a "builder" object which has an
interface the object uses to pass data to the GUI component.
That methodology makes it possible to make radical changes
to an object's implementation (for example, adding the notion
of "currency" to a money class) while restricting the changes
to the class and its builder -- because everything about
"money", including how it's created and displayed, is
encapsulated in one place.
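A minimal sketch of that idea for the money example, with a hypothetical MoneyBuilder interface (the method names are invented here):

```java
// Hypothetical builder interface: the output side of Money.
interface MoneyBuilder {
    void setAmount(String formattedAmount);
    void setCurrency(String currencyCode);
}

// The object "displays itself" by feeding data to a builder; no GUI
// code lives in Money, and no get() methods are needed.
class Money {
    private final long cents;
    private final String currency;

    Money(long cents, String currency) {
        this.cents = cents;
        this.currency = currency;
    }

    // Everything about rendering Money funnels through here.
    void displayOn(MoneyBuilder builder) {
        builder.setAmount(String.format("%d.%02d", cents / 100, cents % 100));
        builder.setCurrency(currency);
    }
}
```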
Testability improves as well, because the builders can be
objects that the test program inspects to make sure the results
are correct. Different sorts of inputs and outputs are more
easily accommodated, too, for the same reason -- adding a
different builder with the same API lets you change from
graphic to HTML or XML output, or to braille or programmatic input.
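For instance, a second implementation of the hypothetical MoneyBuilder interface sketched above could produce HTML instead of driving a GUI component:

```java
// A second implementation of the hypothetical MoneyBuilder interface:
// same API, completely different output medium.
class HtmlMoneyBuilder implements MoneyBuilder {
    private final StringBuilder html = new StringBuilder();

    public void setAmount(String formattedAmount) {
        html.append("<span class=\"amount\">").append(formattedAmount).append("</span>");
    }

    public void setCurrency(String currencyCode) {
        html.append("<span class=\"currency\">").append(currencyCode).append("</span>");
    }

    String toHtml() { return html.toString(); }
}
```

Calling new Money(12345, "USD").displayOn(new HtmlMoneyBuilder()) then produces markup instead of widget updates, with no change to Money itself.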
So it's clearly a great concept that deserves wider application.
(The key appears to be the elimination of get() methods in the
design; they indicate the kind of data coupling that produces
unwanted dependencies.)
But what's not totally clear to me is how that concept dovetails
with the notion of a GUI layer that you can develop interactively,
and with an interaction layer that manages component states. It
may be that the notions are fundamentally at odds. Or maybe
there is a way they can work together...
I suspect that the answer is to balkanize the GUI's API
into a collection of interfaces, one per object. So if there
is a Money class, there is a MoneyBuilder interface, for example.
Then, when you create the GUI class with a program like RADICAL,
you subclass it and implement all of the interfaces for the
object APIs in the subclass -- most likely by defining some
mega-interface that combines all of the object builder interfaces
into a single monolith, and then implementing that.
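In code, that might look something like the following sketch (GeneratedGui stands in for the class a tool like RADICAL would produce; every other name here is hypothetical too):

```java
import javax.swing.JFrame;
import javax.swing.JLabel;

// Stand-in for the class a tool like RADICAL would generate.
class GeneratedGui extends JFrame {
    protected JLabel amountLabel   = new JLabel();
    protected JLabel currencyLabel = new JLabel();
    protected JLabel ownerLabel    = new JLabel();
}

// Another hypothetical per-object builder, alongside MoneyBuilder.
interface AccountBuilder {
    void setOwner(String owner);
}

// The mega-interface: one monolith combining the per-object builders.
interface AppBuilder extends MoneyBuilder, AccountBuilder { }

// The hand-written subclass implements the combination by mapping
// each builder call onto a generated component.
class AppGui extends GeneratedGui implements AppBuilder {
    public void setAmount(String formattedAmount) { amountLabel.setText(formattedAmount); }
    public void setCurrency(String currencyCode)  { currencyLabel.setText(currencyCode); }
    public void setOwner(String owner)            { ownerLabel.setText(owner); }
}
```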
Conclusion
In theory, then, a good GUI program will have:
An interactively constructed GUI class
A subclass of it that implements:
An interaction interface
A major builder interface, composed of separate builders
for each object in the data model
An underlying model that uses the builder interfaces
An interaction model that uses the interaction interface
and which delegates to the underlying model.
The program that uses this architecture will be testable, flexible,
and maintainable -- or at least, that's the theory. I'm looking
forward to seeing what happens in practice, and to hearing
from others who can report the results of their experiments.
There has long been a very important phrase in product quality management.
YOU CAN NOT TEST IN QUALITY
No matter how hard you try, you cannot make your product perfect through testing. It might be less buggy, and some might equate that with higher quality, but in the end, if the architecture is wrong, testing won't fix it. Testing might reveal an architecture problem under certain circumstances, if you are lucky enough to have someone create the right test.
Humans are imperfect and will fail at the most inopportune moment! :-)
> Testability improves as well, because the builders can be
> objects that the test program inspects to make sure the results
> are correct. Different sorts of inputs and outputs are more
> easily accommodated, too, for the same reason -- adding a
> different builder with the same API lets you change from
> graphic to HTML or XML output, or to braille or programmatic input.
>
> So it's clearly a great concept that deserves wider application.
> (The key appears to be the elimination of get() methods in the
> design; they indicate the kind of data coupling that produces
> unwanted dependencies.)
Imagine that your GUI and the model are connected by a one-way pipe. Imagine that there is no way that the GUI can ask the model anything (as you are suggesting).
Now, put that GUI on the user's desktop and the model on the server. Guess what? You have a distributed system that displays the current state of the 'server'. Each user can have their own rendering engine, and each platform can implement it independently. All you need is a definition of the protocol between the server and the client to create a client.
We did this in a status monitoring and control application that we started under JDK1.0.3. There was no serialization implemented. Instead, we used ASCII based streams from the server to the client. Now, we can create clients for any kind of display you could imagine.
We have a control channel back to the server so that the client can ask the server to take actions. But, the client display only changes based on the server's report of information back to the client.
This is a very powerful model because it allows the client to be very flexible in what it does to manipulate the server (the server API is stateless), and what it shows the user. A single item in the server might be represented as a Gauge, a percentage, a line on a graph etc. The server doesn't care.
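A sketch of what the client end of such an ASCII stream might look like (the host, port, and line format here are invented; the original protocol isn't described in detail):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// Hypothetical client for an ASCII status stream: the server pushes
// "item value" lines, and the client simply renders whatever arrives.
public class StatusClient {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("status.example.com", 9999);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));

        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split(" ", 2);  // e.g. "cpuLoad 0.75"
            if (parts.length == 2) {
                render(parts[0], parts[1]);
            }
        }
    }

    // Stand-in for any rendering: a gauge, a graph, plain text...
    static void render(String item, String value) {
        System.out.println(item + " = " + value);
    }
}
```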
This type of decoupling has been around for a long time. If you don't get the distributed systems thing, and don't understand why state is bad and why coupling state with control is even worse, then you will create GUIs that are highly coupled to the software inside.
The Jini model of serviceUI reflects this great decoupling, where the UI can be deployed separately from the service. It doesn't, however, mandate that there be no 'getters'. The service is a set of Java interfaces that the GUI knows how to visualize. The service has to be designed with idempotent interfaces. The service has to have a remote listening mechanism so that you can listen to changes as they occur.
Getting the initial view of the world is the hard part. It has to be possible to ask for the current state without sending it to everyone else that is 'listening'.
Once you contemplate these issues and make them part of your design, if not your mantra, then you can feel the power of the flexibility this kind of system provides.
I think this phrase is true for physical objects (manufacturing) but not for software. For manufacturing, it's saying you can't run the assembly line and then just test what comes out the end (and throw away rejects). To improve quality one must tackle the assembly line itself.
For software, the agile folks are talking about automated testing, such as unit tests. This is really part of the build process. A clean compile is not good enough; you need clean compile + runs all the tests. So it does improve quality because it deals with the "assembly line" itself -- the edit, compile, test cycle. So quality is being managed at the level of individual classes and methods. And as part of the build process, tests are run all the time and they hold the quality level to... whatever level you set in the tests. That is, tests do give quality, but only as much as the quality of the tests.
Uh, yeah. That layer would be called the CONTROLLER layer in a typical MVC architecture. The key problem as I see it is that many developers and most GUI editors skip the controller layer and put UI elements directly on the model.
Swing's vaunted MVC architecture's Models are NOT domain models. They are actually interaction models (in VisualWorks Smalltalk these are called the ApplicationModel layer). The application model is a little Model/View cluster (with the controller layer built into the view, thus making it more or less two-tiered). However, the application models implement the view, and a controller layer between the application model and the domain model still needs to be constructed. Now you have a properly factored GUI application.
Sadly, tools and classes for easily building this layer are missing from Swing. The use of anonymous inner classes as adaptors often results in coupling the view directly to the domain model, and too much logic often ends up implemented there.
This is a very old topic - with an equally old solution that has been misunderstood and misapplied for a very long time.
Nice idea but how would you apply this to the common case where the GUI should only show valid choices and the model is the source of these choices? Add a "poke" method to the model to trigger it to push out the list of choices? Wouldn't this just add another coupling point (the GUI has to implement a receiver of the model's pushed message), require the GUI code to be reentrant (the model might call back before the poke method returns), and spread knowledge about the task among more than one function?
Maybe I'm not looking at the problem the right way?
IMO the model should have no knowledge whatsoever about the UI (of whatever kind) that's sitting on top of it.
As such having the model drive the UI is completely contrary to correct design.
You would need some sort of interface layer between your UI and your model to achieve what you are looking for. Whether that added complexity is worth it for implementing your GUI ideas, I don't know; maybe it can be.