The Artima Developer Community

Agile Buzz Forum
Looking back at the SiteMesh HTML parser

Joe Walnes

Posts: 151
Nickname: jwalnes1
Registered: Aug, 2003

Joe Walnes, "The Developers' Coach" from ThoughtWorks
Looking back at the SiteMesh HTML parser
Posted: Oct 6, 2004 12:57 PM

This post originated from an RSS feed registered with Agile Buzz by Joe Walnes.
Original Post: Looking back at the SiteMesh HTML parser
Feed Title: Joe's New Jelly
Feed URL: http://joe.truemesh.com/blog/index.rdf
Feed Description: The musings of a ThoughtWorker obsessed with Agile, XP, maintainability, Java, .NET, Ruby and OpenSource. Mmm'kay?

Before talking about how the new SiteMesh HTML processor works (to be released in SiteMesh 3), I thought I'd write a bit about how the current parser has evolved since its first attempt in 1999 - purely in the interest of nostalgia.

The original version used a bunch of regular expressions to extract the necessary chunks of text from the document. This was easy to get running, but very error-prone, as the matches had no context about where they were in the document. For example, a <title> element in a <head> block is very important to SiteMesh, but <title> tags sometimes appear elsewhere too, such as in a comment, <script> or <xml> block.
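
To see why, here's a small illustration (a reconstruction for the sake of the story, written against Java's java.util.regex API - not the original 1999 code): a context-free regex happily returns the first <title> it finds, even when that title is sitting inside a comment.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class NaiveTitleExtractor {

        // The regex has no idea *where* in the document its match occurs.
        private static final Pattern TITLE = Pattern.compile(
                "<title>(.*?)</title>",
                Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

        public static String extractTitle(String html) {
            Matcher m = TITLE.matcher(html);
            return m.find() ? m.group(1) : null;
        }

        public static void main(String[] args) {
            String page = "<html>"
                    + "<!-- <title>Draft title, please ignore</title> -->"
                    + "<head><title>The Real Title</title></head>"
                    + "<body>...</body></html>";

            // Prints the title from inside the comment, not the real one.
            System.out.println(extractTitle(page));
        }
    }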

This was dumped in favour of a DOM-based parser, which initially used JTidy to convert HTML to XHTML so it could be traversed as a standard DOM tree. Much nicer, but very slooow. Too slow, so I switched to OpenXML, an XML parser that was tolerant of nasty HTML, which gave a slight boost to performance. I was much happier with OpenXML - even though it still added a fair amount of overhead and rewrote bits of HTML that I didn't want it to.

Annoyingly, not long after that, the OpenXML project merged with the IBM XML4J parser project, rebranded itself as the mighty Apache Xerces and promptly dropped support for HTML parsing. So now I was dependent on a library that no longer existed.

By this time, SiteMesh had been open-sourced, and along came Victor Salaman, who was the third user to discover it (after Mike Cannon-Brookes and Joseph Ottinger). He saw the potential but hated the parser. About three hours later, he'd produced his own version that used low-level string manipulation. It wasn't pretty, but it went like the clappers - twelve times faster than the OpenXML one, with the bonus feature of not rewriting great chunks of the document. This brought SiteMesh into the mainstream, as it was now ready for use on high-traffic sites, and 1.0 was released.

This parser really is the core of SiteMesh. It's been our friend thanks to its speed and reliability. It's been our enemy because of its awkwardness to understand and change. For a couple of years it remained virtually untouched, except when we occasionally poked at it from afar with a long pointy stick for the odd change. Three years later, Chris Miller took the plunge and gave its guts an overhaul - making it six times faster! Very brave.

Despite its awkwardness, it proudly lived on and is still the primary ingredient of SiteMesh today. It's even been ported to VB.Net!

I've kept my eye on other HTML parsers, such as HotSAX, NekoHTML and TagSoup, always with the intention of implementing an easier-to-maintain parser, but I just couldn't get the performance anywhere near what Victor and Chris achieved.

The problem is that most HTML parsers try to represent an HTML document as a tree of nodes, like XML. This makes sense, as that's what HTML is meant to be; however, to do this, every single tag in the document must be analysed and balanced accordingly. This is hard, error-prone and adds a lot of overhead.
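
For contrast, here's a minimal sketch of the flat-scanning style - in the spirit of the low-level string manipulation Victor used, though this is purely my illustration and nothing like the actual SiteMesh code. It walks the characters once, keeps just enough state to skip comments, and pulls out the <title> without ever building a tree:

    public class FlatTitleScan {

        // One pass over the raw characters: skip comments, grab the title,
        // never build a node tree or rewrite the document.
        public static String extractTitle(String html) {
            String lower = html.toLowerCase();
            int i = 0;
            while (i < lower.length()) {
                if (lower.startsWith("<!--", i)) {
                    int end = lower.indexOf("-->", i);
                    i = (end == -1) ? lower.length() : end + 3; // skip comment
                } else if (lower.startsWith("<title>", i)) {
                    int end = lower.indexOf("</title>", i);
                    return (end == -1) ? null : html.substring(i + 7, end);
                } else {
                    i++;
                }
            }
            return null;
        }

        public static void main(String[] args) {
            String page = "<html>"
                    + "<!-- <title>Draft title, please ignore</title> -->"
                    + "<head><title>The Real Title</title></head>"
                    + "<body>...</body></html>";

            // Prints "The Real Title" - the comment is skipped, no tree built.
            System.out.println(extractTitle(page));
        }
    }

A real parser would also need to skip <script> and <xml> blocks, but the shape of the approach is the same: a straight scan with a handful of special cases, rather than a fully balanced tree.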

There's another approach, though. The new parser focusses on ease of use and the ability to customize, without compromising on performance and robustness. I hope you'll like it...

Read: Looking back at the SiteMesh HTML parser
