This post originated from an RSS feed registered with Agile Buzz
by James Robertson.
Original Post: The value of Smalltalk
Feed Title: Cincom Smalltalk Blog - Smalltalk with Rants
Feed URL: http://www.cincomsmalltalk.com/rssBlog/rssBlogView.xml
Feed Description: James Robertson comments on Cincom Smalltalk, the Smalltalk development community, and IT trends and issues in general.
Niall Ross is talking about the value of Smalltalk - in the context of JP Morgan's experience with VisualWorks and GemStone in the Kapital project. What is Kapital? Kapital is a risk and value management system. It deals with complex financial products and figuring out their actual value so that buying and selling can be done profitably - in other words, you don't want to offer a product whose value statement is "take my money". It's used in three ways:
batch jobs - run overnight, all night, every night. This derives a map of possible risks
interactive - the traders manage their books, value trades, etc
Kapital has a team of 30 developers and 500 end users across New York, London, and Tokyo. Ultimately, it's the enabler of $Large revenues for the investment bank. Niall had to remove the actual number :)
So why does it matter that this is done in Smalltalk? Kapital is a very hard problem. The issue here is simply delivering any system at all. The domain requires a meta-model in order to actually define the financial systems being modelled:
All objects can value themselves
All objects can walk their graph to explain themselves
domain models are in VisualWorks only; GemStone acts as a memory extension
It took 1.5 years to get the models right. The reason Smalltalk fits for this is that meta-modelling is so easy in Smalltalk. The domain in this business is rapidly changing and unfixed - which makes it very hard to deal with in more static systems (e.g., Java). Kapital survived by delivering value early. Smalltalk enables meta-modelling because the meta model of Smalltalk is available - everything is an object, and there are few reserved words. Nothing gets in the way. For instance - the lack of (static) typing removes the obstacles that would otherwise stand between the developers and the models.
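Kapital itself is Smalltalk, and the talk gives no code, but the two meta-model properties above - every object can value itself, and every object can walk its graph to explain itself - can be sketched in Python. All class and method names here are hypothetical illustrations, not Kapital's actual design.

```python
# Illustrative sketch only: two meta-model properties described in the talk -
# every domain object can value itself, and can walk its object graph to
# explain where its value comes from. Names are invented for illustration.

class Instrument:
    """Base class: every domain object can value and explain itself."""

    def components(self):
        """Child objects in the valuation graph (none by default)."""
        return []

    def value(self):
        raise NotImplementedError

    def explain(self, depth=0):
        """Walk the graph, reporting each node's contribution to the value."""
        lines = ["  " * depth + f"{type(self).__name__}: {self.value():.2f}"]
        for child in self.components():
            lines.extend(child.explain(depth + 1))
        return lines

class CashFlow(Instrument):
    """A leaf node: a single discounted cash flow."""
    def __init__(self, amount, discount_factor):
        self.amount = amount
        self.discount_factor = discount_factor

    def value(self):
        return self.amount * self.discount_factor

class Swap(Instrument):
    """A composite product valued as the sum of its legs."""
    def __init__(self, *legs):
        self.legs = list(legs)

    def components(self):
        return self.legs

    def value(self):
        return sum(leg.value() for leg in self.legs)

swap = Swap(CashFlow(100, 0.97), CashFlow(-100, 0.95))
print("\n".join(swap.explain()))
```

New product types slot in by defining `value` and `components`; nothing else in the system needs to know about them - which is the flexibility the rapidly changing domain demands.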
Kapital survived the PPS/ObjectShare meltdown, and got past the GemStone/Brokat problem as well. Management was nervous, but the traders needed the value that the system delivered - there was too much money at stake to stop development for an N month/year migration effort, during which no updates would happen in the existing system. Smalltalk allows the developers to work very closely with the traders. Example:
New financial product (can't say what) was introduced
Every client of every investment bank asked for it
For a (longish) period, only JPM could offer it, because they were the only bank that could actually handle the new product with their software systems
Result - JPM gained 100% of the business during that period, which helped drive new client relationships
Competitors got into this with expensive staff increases and dodgy spreadsheets - Kapital managed to expand with small amounts of new code
Rapid delivery has distinct value in this space. Another benefit is scalability. The Kapital system (over 10,000 classes) is delivered to traders as an unstripped image (i.e., a development image prepared for application use). This means that problems unique to the production runtime can be found, debugged, and fixed in place. Performance has never been an issue they could not overcome - Smalltalk has allowed them to find the real bottlenecks with full tool support.
Re-engineering is far, far easier because everything is available and accessible. An example:
Kapital started with 200 financial time series objects (curves). Now has 70,000
retrieving their keys (descriptors) began to slow an important UI operation. Users were unhappy
re-engineered to use lazy synchronization for this
Re-engineering this core part of the product was easier because there are no hidden bits, no final classes, etc - everything is open
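The talk doesn't show how the lazy synchronization was done, but the general pattern is easy to sketch: rather than re-reading all 70,000 curve descriptors on every UI request, cache the key set and refresh it only when the store has actually changed. This Python fragment is a hypothetical illustration of that pattern, not Kapital's code.

```python
# Hypothetical sketch of "lazy synchronization" for curve keys: the UI-facing
# view caches the descriptor set and refreshes it only when the store reports
# a newer version, instead of fetching eagerly on every read.

class CurveStore:
    """Stands in for the persistent store of time-series curves."""
    def __init__(self):
        self._curves = {}
        self.version = 0

    def add(self, key, curve):
        self._curves[key] = curve
        self.version += 1          # any write invalidates cached key sets

    def keys(self):
        return list(self._curves)

class LazyKeyCache:
    """UI-facing view that synchronizes with the store only when stale."""
    def __init__(self, store):
        self.store = store
        self._keys = None
        self._seen_version = -1
        self.refreshes = 0         # counts actual bulk fetches, for inspection

    def keys(self):
        if self._seen_version != self.store.version:
            self._keys = self.store.keys()        # one bulk fetch
            self._seen_version = self.store.version
            self.refreshes += 1
        return self._keys

store = CurveStore()
store.add("USD-LIBOR-3M", object())
store.add("GBP-SONIA", object())
cache = LazyKeyCache(store)
cache.keys(); cache.keys(); cache.keys()   # repeated UI reads
print(cache.refreshes)                     # only the first read hits the store
```

With 70,000 curves, the repeated reads that were slowing the UI become cheap returns of the cached list; the cost is paid once per actual change.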
Another thing that is easier is data migration. The domain model changes over time, and this has to be managed. Data is lazily migrated as it's loaded from the persistent store, and then saved in the new state as necessary. They don't need explicit schema migrations that stop operations when new revs come out - it all happens automatically. Developers can run the latest codebase against copies of the production database without having to upgrade everything, and data upgrade on release takes less time.
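The mechanics of lazy, on-load migration can be sketched as follows: each stored record carries the schema version it was written under, and is upgraded one step at a time as it is read. The versions, fields, and upgrade steps below are invented for illustration; Kapital's actual scheme is not described in the talk.

```python
# Hedged sketch of lazy schema migration on load: a record written under an
# old schema version is upgraded step by step, at read time, to the current
# version. All version numbers and fields here are invented.

CURRENT_VERSION = 3

# Per-step upgrade functions, each taking a record from version n to n + 1.
MIGRATIONS = {
    1: lambda rec: {**rec, "currency": "USD", "_v": 2},   # v1 lacked currency
    2: lambda rec: {**rec, "notional": float(rec["notional"]), "_v": 3},
}

def load(record):
    """Migrate a record up to the current schema as it is loaded."""
    while record["_v"] < CURRENT_VERSION:
        record = MIGRATIONS[record["_v"]](record)
    return record            # caller may save it back in the new state

old = {"_v": 1, "notional": "1000000"}
print(load(old))
```

Because migration happens per record at load time, no stop-the-world schema conversion is needed: old and new data coexist in the store, and records already at the current version pass through untouched.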
What would they like to do better?
They would like to have performant, meta-enabled collection classes
Would like to more easily find and get rid of dead code. In a 90 MB image, they think they might have 20 MB of it
Same problem with dead data
Like the rest of us, they would like to enforce better coding standards