Translator Model Design Supplement



Back to the Translator Model Specification.

Question 1:

Why is Requirement 1 (R1): "The UI of a service must be able to run on different kinds of client machines, whose IO devices can be very different"?

Consider this first:

(1a) Example of using Print Service

I buy a Jini printer and I can print things to it from my PC.

(1b) Example of using Print Service

I give a voice command via my mobile phone, "print the current phone list", and the phone prints the phone list to the printer in (1a).

(1c) Example of using Print Service

When I am in the 3D virtual-reality meeting room (having a remote meeting, perhaps), I can print documents to the printer in (1a) via a 3D virtual control panel.

(2a) Example of bank-account-transfer

I can give a voice command to my mobile phone: "Connect to ABC Bank, I would like to do a transfer".

After a while, the phone speaks to me: "Transfer service ready, please give an instruction".

Then I say "transfer $1000 from account 123-001 to 456-002".

The phone says "transfer $1000 from account 123-001 to 456-002, right?".

I say "right".

The phone soon replies, "Transfer complete".

(2b) Example of bank-account-transfer

I can do the same thing as in (2a) in my car.

I tell my car computer, "Connect to ABC Bank".

Computer says "ABC Bank connected".

I say "I want to transfer".

Computer says "How much to transfer?"

I say "$1000".

Computer says "Transfer from which account?"

I say "123-001".

Computer says "To which account?"

I say "456-002".

Computer says "Transfer done".

Note that the transfer service is the one in (2a).

(2c) Example of bank-account-transfer

I can do the same thing as in (2a) via the 3D projector at home.

I say "computer, connect to ABC Bank, personal finance service".

A 3D virtual officer appears in front of me and asks, "May I help you, Mr. ...?"

I say "I would like to make a transfer".

The officer says "How much to transfer?"

I say "$1000, from my account to 456-002".

The officer says "from account 123-001, right?"

I say "yes".

The officer says "transfer $1000 from account 123-001 to 456-002? Am I right?"

I say "right".

The officer says "just a moment... OK, done".

I say "thanks".

Note that the transfer service is still the one in (2a).





The scenarios above could become reality in the future. I personally consider such an environment to be the right direction for the industry. Note that a single Jini service can be downloaded and run by many different client machines. Today's environment consists mainly of PCs, but tomorrow's environment may consist of many different kinds of client machines.

That means the UI of a service must be able to run on many different kinds of client machines, which have many different IO devices and many different UI APIs; hence R1.





Question 2:

Why is Requirement 2 (R2): "The UI of a service must be able to run on client machines whose IO devices did not exist when the service was being coded"?

The answer is the same as for Question 1.



The critical point is that no one can predict what kinds of IO devices will appear in the future, yet today's services must be able to run on those devices! Hence R2.





Question 3:

Why is Requirement 3 (R3): "Adaptive UI [Remark] should be considered, even if adaptive UI is not planned to be implemented right now"?

One may consider adaptive UI to be just a dream. For me, it is not.



Perhaps adaptive systems are not mature right now, but I believe tomorrow's computers will move in this direction. One clear fact is that many of today's systems suffer from the fault of excessive complexity [Dertouzos 1]: they are too complex for users.





Question 4:

How does the Translator Model fulfill R1?

The Translator Model lets the client machine generate the UI by itself. Since the client machine's JVM should know its own IO devices and APIs well, the generated UI should fit the client machine well.



In this way, a service can have a different UI on each machine, and hence R1 is fulfilled.
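
To make the idea concrete, here is a minimal sketch. None of these names come from the Translator Model Specification; they are assumptions made purely for illustration.

    // A minimal sketch, NOT the actual classes of the specification.
    interface AbstractQuestion {
        String prompt();  // device-independent text of the question
    }

    interface Translator {
        // Each client machine supplies its own Translator, built on its
        // own local IO API (Swing, voice, text console, 3D panel, ...).
        String ask(AbstractQuestion q);
    }

    // A text-mode client machine could plug in a console translator:
    class ConsoleTranslator implements Translator {
        public String ask(AbstractQuestion q) {
            System.out.print(q.prompt() + " ");
            return new java.util.Scanner(System.in).nextLine();
        }
    }

    // The service never touches an IO API directly; it only hands its
    // questions to whatever Translator the client machine provides.
    class TransferService {
        void run(Translator t) {
            String amount = t.ask(() -> "How much to transfer?");
            String from   = t.ask(() -> "From which account?");
            String to     = t.ask(() -> "To which account?");
            // ... perform the transfer with amount, from, and to ...
        }
    }

A voice-driven machine would implement the same Translator interface with speech IO, so the very same TransferService could yield the dialogues in (2a), (2b), and (2c).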





Question 5:

How does the Translator Model fulfill R2?

The same as the answer to Question 4. Since the client machine generates the UI with its own local API, the service needs no knowledge of the machine's IO devices, including devices that did not exist when the service was coded.





Question 6:

How does the Translator Model fulfill R3?

Since the UI is generated by the client machine, if the machine itself is capable of running adaptive UIs (perhaps through a built-in adaptive-UI API), then nothing stops the machine from generating adaptive UIs.



Note that the adaptivity code is not part of the service but part of the client machine's local API. A service itself need not be adaptive.
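
As a hedged sketch of what this could look like (everything here is an assumption about such a client machine, not part of the specification): the machine's local translator might, for example, reorder the choices it presents by how often this particular user picks them.

    import java.util.*;

    // Hypothetical sketch: adaptivity living in the client machine's
    // local API. The service knows nothing about it and supplies the
    // same choices to every machine.
    class AdaptiveChoicePresenter {
        private final Map<String, Integer> useCount = new HashMap<>();

        // Present choices most-used-first; the service's list is unchanged.
        List<String> order(List<String> choices) {
            List<String> ordered = new ArrayList<>(choices);
            ordered.sort(Comparator.comparingInt(
                    c -> -useCount.getOrDefault(c, 0)));
            return ordered;
        }

        // Record what the user picked, so future UIs adapt to this user.
        void picked(String choice) {
            useCount.merge(choice, 1, Integer::sum);
        }
    }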





Question 7:

Are there alternatives to the Translator Model for fulfilling the requirements of service UI (R1, R2, and R3 in the Translator Model Specification)?

Yes. There are two other choices.

First choice: define an existing API (like Swing) as the default UI API and require all UI code to be written against this default API. Different machines can then interpret the default API freely.

This approach has two drawbacks:

(1) Each client machine may need to carry a rather large API (Swing, for example, is several megabytes).

(2) There may be many technical problems in the implementation. For example, could a machine with text-mode output successfully interpret and run all the Swing APIs (supposing Swing is selected as the default API)? That could be very difficult.





Second choice: the service generates the UI on the fly, based on the IO devices available on the client machine.

This approach is not easy either. At least, I cannot figure out how to implement it.






Question 8:

Why not follow today's UI programming model instead of creating a new one?

Because today's UI programming model requires that the UI code be part of the program.



Saying that the UI code is part of a program DOES NOT mean that the UI code is entangled with other code, or that the view is not separated from the model (in MVC terms). "UI code is part of the program" means that the program side supplies the UI code, and that this code is tied to one particular kind of API (like Swing).
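
For instance, under today's model the transfer service itself would carry UI code written against one particular API. A small illustrative example (my own, not from the specification), tied to Swing:

    import javax.swing.JOptionPane;

    // Today's model: the UI code belongs to the program and is tied to
    // one particular UI API (Swing here). A client machine without
    // Swing, or without a graphical display at all, cannot run it.
    public class TransferDialog {
        public static void main(String[] args) {
            String amount = JOptionPane.showInputDialog("How much to transfer?");
            String from   = JOptionPane.showInputDialog("From which account?");
            String to     = JOptionPane.showInputDialog("To which account?");
            System.out.println("Transfer " + amount
                    + " from " + from + " to " + to);
        }
    }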



Applying this model to tomorrow's environment implies that each service must carry many different pieces of UI code (because many different UIs are needed). The number of combinations of IO devices, APIs, and API variations could be enormous. Whether or not this approach is really implementable, it is not a satisfactory answer, IMO.





Question 9:

Why introduce Human Active Request?

Because the human's position in the traditional human-machine interface architecture is too passive.



Though today's GUIs are colorful and cool, they are not really user friendly [Dertouzos 2]. A text-mode machine that understands natural human language (via keyboard input) is friendlier than today's GUIs! In fact, some of today's office suites (word processor, spreadsheet, etc.) are "feature-rich" enough that a whole course could be taught on them! If someone says "today's UI is very user friendly and interactive", that is a claim I cannot readily agree with.



Today, a user can only choose among the options provided by a program. What if the user does not understand the meaning or function of an option? What if the user cannot find the option he wants? God bless him.



On the other hand, if humans can make requests actively, the situation could be improved. The options in each UI could be greatly reduced, and big menus could be omitted. Users could ask for a new option whenever they like, simply by making an active request (how a request is made depends on the machine).





Another question we should think about: how can a service understand a human's request?



An obvious answer is to code the service in such a way that it is clever enough to understand the human's request.



Is this good? No. This approach forces every service to be very intelligent, and such AI technology may not exist, or may not be mature, right now.



To overcome this problem, the class ActiveResponse is introduced. The idea is that the service side tells the client machine the current possible responses to the human's requests (if any), and lets the client machine decide which response to select. If a client machine is clever enough, it can collect the human's request first, then ask the service for an ActiveResponse object, and finally select a response using its own intelligence. If a client machine is stupid, it may wait for the user to click a particular button, then ask for the ActiveResponse object and show the list of responses (that is, the list of other available options) to the user, letting the user pick one.
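
A minimal sketch of the idea, assuming a hypothetical shape for ActiveResponse (the actual class is defined in the Translator Model Specification; the method names below are my own assumptions):

    // Hypothetical sketch only; not the specification's real interface.
    interface ActiveResponse {
        String[] descriptions();  // current possible responses, human-readable
        void select(int index);   // tell the service which response was chosen
    }

    class RequestHandler {
        // A clever client machine collects the human's request first,
        // then matches it against the possible responses by itself.
        static void handle(String humanRequest, ActiveResponse r) {
            String[] options = r.descriptions();
            for (int i = 0; i < options.length; i++) {
                if (options[i].equalsIgnoreCase(humanRequest)) {
                    r.select(i);
                    return;
                }
            }
            // A less clever machine would instead show the whole list of
            // options to the user and let the user pick one.
        }
    }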





Question 10:

Why is there no help system in the Translator Model?

The Translator Model is designed in such a way that each UI "in its entirety offers a maximum of help" [Gritzman Kluge Lovett]. That is why there is no standalone help-system module in the model.





Question 11:

Can a menubar be implemented in the Translator Model?

In short, no.



Even if the class PickOne, which will be added to the Translator Model Specification (see the progress file), is considered, the answer is still the same. A PickOne object in a UI cannot serve the purpose of a menubar, because the translator decides when to terminate a UI. Even if the user picks a choice from a PickOne list, the UI may not end at once; hence a menubar cannot be built.



However, the Human Active Request should be able to serve a function similar to that of a menubar.





Question 12:

Is it true that the existing presentation and question classes (as specified by the Translator Model Specification) are enough for all UIs?

If the classes PickOne and PickSome are taken into consideration, the answer should be yes; at least, I hope so. The Translator Model actually models communication between humans. The types of presentation and question are designed to cover all the possibilities in such communication.





Question 13:

What about image, sound and video?

Image, sound, and video are not considered in the current design.



The big question is how to handle pre-formatted information (generally speaking, rich text, including image, sound, and video) in the Translator Model. Since the model allows (and forces) the client machine to generate the UI by itself, handling pre-formatted content could be rather difficult.



This question may be left for discussion.





Question 14:

It is observed in demo3 that the service side does not ask a question like "OK or Cancel"; it is the translator's job to determine whether the human has finished all input. So what if the human does not answer all the questions?

Well, imagine you are talking to another person through a translator. If you ask a question and your listener does not reply, what will you do?



Ask again, of course!
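
In code, the translator's job could look like the loop below; a minimal sketch, assuming a hypothetical Question type rather than the specification's actual interfaces:

    import java.util.List;

    // The service never sends an "OK or Cancel" question; the translator
    // itself keeps asking until every question has an answer.
    class Demo3Translator {
        interface Question {
            boolean answered();
            void askViaLocalIO();  // render with whatever IO the machine has
        }

        void run(List<Question> questions) {
            boolean done;
            do {
                done = true;
                for (Question q : questions) {
                    if (!q.answered()) {
                        q.askViaLocalIO();  // ask again, like a human translator
                        done = false;
                    }
                }
            } while (!done);
        }
    }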





Question 15:

Is it true that the Translator Model is no more than a kind of form UI?

Not exactly.



It is true that a client machine may generate UIs that look like forms, but the model itself is not a form architecture.



The Translator Model introduces things that a form-based system may not, or could not, have:

  • It encourages human-machine communication to "be aimed toward a specific goal you have in mind. It can't be loose, as in social chitchat. It's more like a Ping-Pong game." [Walton]

  • Human active requests.

  • It guarantees that the UI of a program or service can run on different kinds of machines with different IO devices.

  • It leaves room for adaptive UI.





References

Dertouzos 1:

Dertouzos, Michael L., What Will Be, Chapter 12, Section 2.



Dertouzos 2:

Dertouzos, Michael L., What Will Be, Chapter 12, Section 3.



Gritzman Kluge Lovett:

Gritzman, Michael, Anders Kluge, and Hilde Lovett, Task Orientation in User Interface Design.



Walton:

Walton, Donald, Are You Communicating?, 1983.