Summary
ICEfaces is an open-source JSF library supported by ICEsoft. The latest ICEfaces release includes support for asynchronous, non-blocking I/O, portlet-style deployment of ICEfaces components, and several new JSF components. ICEsoft's Ken Fyten discusses the latest features in an interview with Artima.
ICEfaces is a popular open-source JSF library supported by ICEsoft. ICEsoft released ICEfaces 1.7 this week, with new features that include support for asynchronous, non-blocking I/O in app servers that offer that capability (such as GlassFish, Tomcat, and Jetty), portlet-style deployment, and many new JSF components. In an interview with Artima, Ken Fyten, ICEsoft's VP of technology, shares details about the latest ICEfaces features.
ICEfaces' main benefit, according to Fyten, is that it takes away a lot of complexity involved in developing rich-client Web applications, mainly by exploiting and extending the JSF programming model:
ICEfaces is an extension to standard JavaServer Faces (JSF) that's part of the Java EE stack. ICEfaces provides a framework and a set of components that allow people to easily develop rich Internet applications using JSF without having to develop the things you normally associate with [rich-client applications], such as JavaScript and Ajax.
We abstract all the complexity away, or at least a lot of it, so that people can get the benefit of developing rich Internet applications in Java, without having to become experts in JavaScript and browser idiosyncrasies and Ajax techniques. With ICEfaces, you get rich components with behaviors that [users] are starting to expect, such as menus, drag and drop, data presentation, things like that.
The most visible new features in ICEfaces 1.7 are rich JSF components:
1.7 is a broad-based release, and extends ICEfaces in a lot of different areas... The thing most people see and focus on is the component set we provide for ICEfaces. Even though you can use other components with ICEfaces, we provide a wealth of components right out of the box.
On that front, we managed to add a substantial number of new components. In some cases, new components were requested by our user community, and in that way, we're filling out some gaps in our component suite. We also did a lot more in terms of enhancing the existing components, to refine them, to provide capabilities people find useful.
Some of the higher-profile new components include a rich-text editor that allows you to write HTML or rich text in the page and save it back to the [server]. We have a new context menu that allows you to have a right-click capability over a component and present a menu of your own.
A new Google map component is also part of this release. That component actually has six sub-components that help you accomplish the various capabilities you can do with Google maps. We also offer a new media player for those who wish to put video or audio into their applications...
One of the most interesting aspects of ICEfaces is its support for server-initiated updates to component state, similar to Comet. In order to support such updates efficiently, ICEfaces 1.7 takes advantage of the non-blocking, asynchronous I/O support provided by several application servers:
There are also a lot of new things behind the scenes. One area is support for third-party asynchronous, or non-blocking, I/O technologies. We have a fairly unique capability in what we call server-push or server-initiated rendering: By default, ICEfaces runs in an asynchronous connection mode. That allows the application to push out updates to the browser asynchronously, without the browser having to poll or the user having to initiate that in some way.
That opens up a whole bunch of capabilities in terms of community applications, such as when you have multiple people viewing the same content, and you want state changes to be visible to all the users. ICEfaces does that out of the box...
We've supported that kind of server-push before, and the market started to catch up to the idea that people wanted this [asynchronous connectivity] capability. Certain app servers have begun to support this as well. GlassFish has an add-in called Grizzly, Jetty has continuations (they were the first ones to do it), and Tomcat 6 also has a non-blocking I/O option.
With this release, we are able to take advantage of all of those tools now and leverage the non-blocking I/O on various app servers. It allows the server's resource utilization to be much more efficient.
The reason so much work is happening around the non-blocking I/O capabilities is that under normal, blocking I/O, app servers typically associate threads with connections. Because asynchronous connections rely on long-lived connections, with blocking I/O you end up dedicating a thread to each connection. Even though a connection may be idle most of the time, you tie up a thread per connection. As a result, the number of threads your server needs to support increases. And then you have to consider allocating memory to those threads, and other issues as well.
What non-blocking I/O helps with is that it shares the threads, and releases them, so that you don't have to have a lot of idle threads taking up resources.
We can now leverage the asynchronous I/O capabilities of the various app servers. ICEfaces also has its own asynchronous HTTP server that does the same thing in an app server-neutral manner. If you're using ICEfaces with an app server that doesn't provide [asynchronous I/O], you can still use this server to accomplish the same thing.
The programming model for asynchronous I/O and server push follows the typical JSF programming model. The component on a page is wired to a backing bean on the server that holds the visual state of the component. Updates to that state get pushed out very elegantly to the client. There is not a lot of wiring or mental overhead required to accomplish that. It's very similar to developing a thick-client application.
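To make that model concrete, here is a minimal sketch of a backing bean that shares a click counter with every browser viewing the page. The bean name, the render-group name, and the ice: tags in the comments are hypothetical, and the SessionRenderer helper reflects our reading of the group-rendering API introduced with ICEfaces 1.7; treat the package and method names as assumptions to verify against the ICEfaces documentation rather than as official ICEsoft sample code.

import javax.faces.event.ActionEvent;

// Assumed ICEfaces 1.7 group-rendering helper; package and method names are our best
// recollection of the API, not confirmed by the article.
import com.icesoft.faces.async.render.SessionRenderer;

// Hypothetical session-scoped backing bean. The page would bind to it in the usual
// JSF way, for example:
//   <ice:outputText value="#{clickCounter.count}"/>
//   <ice:commandButton value="Click" actionListener="#{clickCounter.increment}"/>
public class ClickCounter {
    private static int count; // shared across all sessions for this demo

    public ClickCounter() {
        // Join a named render group; any session in the group can be re-rendered on demand.
        SessionRenderer.addCurrentSession("counter-watchers");
    }

    public int getCount() {
        return count;
    }

    public void increment(ActionEvent event) {
        synchronized (ClickCounter.class) {
            count++;
        }
        // Server-initiated rendering: push the updated state to every browser in the
        // group, with no client-side polling.
        SessionRenderer.render("counter-watchers");
    }
}

The bean would still be declared as a session-scoped managed bean in faces-config.xml, since ICEfaces 1.7 targets JSF 1.x. The point is the one Fyten makes: the push is a single call against the same backing bean the page is already wired to.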
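The thread-per-connection contrast Fyten draws a few paragraphs above is also easy to see in plain Java. The sketch below is not ICEfaces or app-server code; it is a generic, self-contained echo server (the port number and echo behavior are arbitrary) built on the standard java.nio Selector API, showing how a single thread can service many connections so that idle connections no longer each pin a thread the way they do under blocking I/O.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// One thread multiplexes many connections. Under blocking I/O each connection would
// hold a thread even while idle; here an idle connection costs only a selector registration.
public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    // New connection: register it and go back to waiting; no dedicated thread.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {
                        client.close(); // peer closed the connection
                    } else {
                        buffer.flip();
                        client.write(buffer); // echo on the shared thread (simplified: ignores partial writes)
                    }
                }
            }
            selector.selectedKeys().clear();
        }
    }
}

Grizzly, Jetty's continuations, and Tomcat 6's NIO connector build far more sophisticated machinery on this same underlying idea, and ICEfaces 1.7 plugs into whichever of them the app server provides.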
Another new area offered in ICEfaces 1.7 is support for portlet-style deployment of ICEfaces components:
A lot of effort went into this release to better support portlets in general. The portlet specifications out there today were not designed around Ajax applications. There are unique technical challenges in making Ajax work well with portlets. You have all those components pushing Ajax to the same HTML page in the browser. Some of those components, as with our components, use Ajax connectivity to accomplish what they do. To make things behave so that all those components can be intermixed, and so that things still work, is a technical challenge.
We augmented what we call our JavaScript Bridge to address that issue. Our JavaScript bridge is a small amount of JavaScript we push into the browser that actually manages the connections back to the server and supports the components in terms of synchronizing state changes. That gives you rich incremental updates by sending only the components that are changing.
A lot of the challenges we tackled in the latest release had to do with being able to support multiple portlets. Suppose you've got two or three ICEfaces portlets on a page. Because those portlets are actually in the same viewport of the browser, they need to share the connection management state so you don't hit the two-connection limit in Ajax. The two-connection limit is a restriction in most browsers that allows at most two simultaneous connections to the same host. If you have several portlets all talking to the same server, two connections aren't enough.
In the latest release, the Bridge is written so that if you have more than one ICEfaces portlet on a page, the portlets share a connection back to the server. You're able to multiplex across the single connection even though you're updating several viewports from that connection.
What do you think of the latest ICEfaces features?