The Artima Developer Community

Java Buzz Forum
Preon Encoding Roadmap

0 replies on 1 page.

Wilfred Springer

Posts: 176
Nickname: springerw
Registered: Sep, 2006

Wilfred Springer is a Software Architect at Xebia
Preon Encoding Roadmap Posted: Oct 13, 2009 1:29 PM

This post originated from an RSS feed registered with Java Buzz by Wilfred Springer.
Original Post: Preon Encoding Roadmap
Feed Title: Distributed Reflections of the Third Kind
Feed URL: http://blog.flotsam.nl/feeds/posts/default/-/Java
Feed Description: Anything coming to my mind having to do with Java

In my previous post, I highlighted some of the challenges and questions to be answered before Preon can be made to support encoding as well. The main problem is not so much that it is hard to understand how the data should be encoded. That's all pretty clear. The meta data gathered by Preon provides sufficient detail to make that fairly easy to do. No, the real problem is in preserving consistency.

If you really try to imagine the ultimate goal, then it's immediately clear that it will be close to impossible to get there in one step. So what do you do? You break things up into phases. That's what I tried to do tonight:



The picture above depicts the different stages for getting closer to where I want Preon to be. Read it from the bottom up.

Phase 1: Writing all data to a stream

Currently, the Codec only defines a decode operation. We clearly need an encode operation as well. The encode operation is going to take the object passed in and write it to the output channel. Period. By adding this operation, you will be able to encode data; however, when you actually make changes to the data, you're basically on your own.
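
To make the idea concrete, here is a minimal sketch of what such an operation could look like. The interface name, parameter types and signatures below are assumptions for illustration only, not Preon's actual Codec API:

    import java.io.IOException;
    import java.io.OutputStream;

    /**
     * Hypothetical sketch only: Preon's real Codec, BitBuffer and Resolver
     * types are not reproduced here; the names and signatures are
     * placeholders for illustration.
     */
    public interface EncodingCodec<T> {

        /** The existing kind of responsibility: build an instance from the encoded input. */
        T decode(byte[] source) throws IOException;

        /**
         * The proposed addition: take the object passed in and write it to
         * the output, driven by the same metadata Preon gathered for decoding.
         */
        void encode(T value, OutputStream out) throws IOException;
    }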

Here are some of the complications that could occur if you change the data:
  1. If Preon loaded an instance of its own LazyLoadingList, then trying to modify that list is going to throw exceptions.
  2. If you replace the value of an attribute that is used in Limbo expressions, there's not only a chance that you write corrupted data; there's also a chance that you will not be able to continue reading the data at all. Remember, Preon decodes lazily. It might not even have read the data that you are about to write. By changing the attributes that Preon uses in calculating the starting point of a section of data to read, you might get into trouble. (See the sketch below this list for an example of such a dependency.)
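
To make the second complication concrete, here is a sketch of a mapped class in the style of Preon's annotations. The annotation names, their attributes and the import package are quoted from memory and may not match the released API exactly; treat it as an illustration only.

    import java.util.List;

    // Annotation names, attributes and package quoted from memory - check
    // them against the Preon release you actually use.
    import nl.flotsam.preon.annotation.BoundList;
    import nl.flotsam.preon.annotation.BoundNumber;

    // The number of elements decoded into 'records' is driven by a Limbo
    // expression referring to 'numberOfRecords'. If client code later changes
    // 'numberOfRecords' so that it no longer matches records.size(), anything
    // written back out is corrupt, and lazy decoding that still depends on
    // the old value may fail as well.
    public class RecordFile {

        @BoundNumber(size = "32")
        private int numberOfRecords;

        @BoundList(size = "numberOfRecords", type = Record.class)
        private List<Record> records;

        public static class Record {

            @BoundNumber(size = "16")
            private int value;
        }
    }
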
Phase 2: Binding to public accessors

We need to be able to tell when code outside of Preon is making changes to an object that was loaded using Preon, because if that happens, we can no longer afford to drop a cached version of that instance. However, Preon currently binds only to fields, not to bean-type accessors. Tracking changes to those private fields is going to be close to impossible, but it becomes feasible if Preon binds to the accessor methods rather than the fields. So we will probably need that feature to be there.
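
Just to illustrate the mechanism (this is not Preon API), a bean whose accessors Preon binds to can observe a change and flag itself, something a direct write to a private field never could:

    // Hypothetical sketch: if Preon binds to the accessor methods, a setter
    // can record that client code mutated the instance, telling Preon it must
    // keep (rather than drop) its cached representation of that object.
    public class TrackedHeader {

        private int length;
        private boolean dirty;   // flipped when client code changes the bean

        public int getLength() {
            return length;
        }

        public void setLength(int length) {
            this.length = length;
            this.dirty = true;   // change observed through the accessor
        }

        public boolean isDirty() {
            return dirty;
        }
    }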

Phase 3: Copy on change

Maybe this is not the right term. In some cases, all we need to do is make sure that we hold on to a cached copy indefinitely, until the data is persisted. In other cases, we will need to make sure that we actually replace the entire cached copy of an existing object with something else.

Phase 4: Consistency checks while writing

As I said, making sure that we preserve consistency over the entire file is going to be one of the biggest challenges. The previous step has made sure that we can actually change the file and write it again, but it's not going to guarantee that whatever is going to be written is consistent. For that, we need something else. This phase is about adding a feature that will check consistency while writing.

(Consequently, data early in the file will always prevail over data later on in the file. If the file first contains an integer denoting the size of a list that follows, and that value no longer matches the actual size of the list when the list needs to be written, the list either needs to be truncated or grown, or an exception will have to be thrown. This phase is about making sure an exception is thrown.)
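
A minimal sketch of the kind of check this phase would add; how Preon would actually wire it into the writing process is left open:

    import java.util.List;

    // Hypothetical sketch, not Preon API: data earlier in the file (the
    // declared size) prevails, so a mismatch with the actual list is rejected
    // before anything gets written.
    public final class ConsistencyChecks {

        public static void checkListSize(int declaredSize, List<?> list) {
            if (declaredSize != list.size()) {
                throw new IllegalStateException(
                        "Declared size " + declaredSize
                        + " does not match actual list size " + list.size()
                        + "; refusing to write inconsistent data.");
            }
        }
    }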

Phase 5: Rewrite only if required

In many cases, it's not going to be required to first load data into an object and then write it to output again. If the encoded object did not change, then we can just stream it from the original source (in this case, the BitBuffer). This is - hopefully - going to be an optimization that pays off big time.
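
A rough sketch of the idea, assuming the dirty tracking from the earlier phases is in place; none of the names below are Preon API:

    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical sketch: if the decoded object was never modified, copy the
    // original bytes straight through instead of re-encoding them.
    public final class PassThroughWriter {

        public static void write(byte[] originalBytes, boolean dirty,
                                 Object value, OutputStream out) throws IOException {
            if (!dirty) {
                out.write(originalBytes);   // unchanged: stream from the original source
            } else {
                reencode(value, out);       // changed: fall back to a full encode
            }
        }

        // Placeholder for the real encode path sketched in phase 1.
        private static void reencode(Object value, OutputStream out) throws IOException {
            throw new UnsupportedOperationException("not part of this sketch");
        }
    }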

Phase 6: Autocorrecting

In phase 4, I already said that there will be cases in which you ideally want the list of elements to grow or shrink if the attribute that denotes the number of items in the list is updated. This phase is about considering solutions like these. It will probably be quite hard, if not impossible, but it's worth taking into consideration.
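
A very rough sketch of the direction, again not Preon API; the genuinely hard part is deciding what a newly grown element should contain:

    import java.util.List;

    // Hypothetical sketch: instead of throwing as in phase 4, grow or shrink
    // the list so that it matches the updated size attribute before writing.
    public final class AutoCorrect {

        public static void resizeToDeclaredSize(List<Object> list, int declaredSize) {
            while (list.size() > declaredSize) {
                list.remove(list.size() - 1);   // shrink: drop trailing elements
            }
            while (list.size() < declaredSize) {
                list.add(null);                 // grow: what to add here is the open question
            }
        }
    }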

Phase 7, 8, 9, 10, ...

Oh man, if only I had the time.

Read: Preon Encoding Roadmap
