Currently, the Codec only defines a decode operation. We clearly need an encode operation as well. The encode operation is going to take the object passed in and write it to the output channel. Period. (A sketch of what that could look like follows the list below.) By adding this operation, you will be able to encode data; however, when you actually make changes to the data, you're basically on your own:
- If Preon loaded an instance of its own LazyLoadingList, then trying to modify that list is going to throw exceptions.
- If you replace the value of an attribute that is used in Limbo expressions, there's not only a chance that you will write corrupted data, but also a chance that you will not even be able to continue reading the data. Remember, Preon decodes lazily: it might not even have read the data that you are about to write. By changing the attributes that Preon uses to calculate the starting point of a section of data to read, you might get yourself into trouble.
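To make the first point concrete, here is a minimal sketch of what the extended Codec interface could look like. The decode signature is Preon's existing one; the encode signature, the BitChannel type, and the exact exception are assumptions on my part, not a finalized API.

```java
// Preon's BitBuffer, Resolver, Builder and DecodingException are assumed
// to be in scope; BitChannel is a hypothetical counterpart for output.
public interface Codec<T> {

    // Existing operation: read an instance of T from the buffer.
    T decode(BitBuffer buffer, Resolver resolver, Builder builder)
            throws DecodingException;

    // New operation: take the object passed in and write it to the
    // output channel.
    void encode(T value, BitChannel channel, Resolver resolver)
            throws java.io.IOException;
}
```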
Phase 2: Binding to public accessors
We need to be able to understand when code outside of Preon is making changes to an object that was loaded by Preon, because if it does, we can no longer afford to drop the cached version of that instance. However, Preon currently binds to fields only, not to bean-type accessors. Tracking changes to those private fields directly is going to be close to impossible, but it becomes possible if Preon binds to the accessor methods rather than the fields. So we will probably need that feature to be there.
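To illustrate the idea: once Preon binds to accessors, it could hand out a generated subclass (or proxy) whose setters record that the object was changed outside of Preon. None of this is existing Preon machinery; all names below are invented.

```java
// The plain bean, as the user would declare it.
public class Header {

    private int numberOfRecords;

    public int getNumberOfRecords() {
        return numberOfRecords;
    }

    public void setNumberOfRecords(int numberOfRecords) {
        this.numberOfRecords = numberOfRecords;
    }
}

// What a generated change-tracking subclass might conceptually look like.
class TrackingHeader extends Header {

    private boolean dirty = false;

    @Override
    public void setNumberOfRecords(int numberOfRecords) {
        dirty = true; // change made outside of Preon: keep the cached instance
        super.setNumberOfRecords(numberOfRecords);
    }

    public boolean isDirty() {
        return dirty;
    }
}
```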
Phase 3: Copy on change
Maybe this is not the right term. In some cases, all we need to do is make sure that we hold on to a cached copy until the data is persisted. In other cases, we will need to make sure that we actually replace an existing object entirely with something else.
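A minimal sketch of what copy on change could mean for something like the lazy-loading list, with invented names: reads keep hitting the decoded view, and the first write replaces it with a plain mutable copy that is held on to until the data is persisted.

```java
import java.util.ArrayList;
import java.util.List;

class CopyOnChangeList<E> {

    private final List<E> decoded; // lazy, read-only view backed by the source
    private List<E> copy;          // materialized on the first modification

    CopyOnChangeList(List<E> decoded) {
        this.decoded = decoded;
    }

    E get(int index) {
        return (copy != null ? copy : decoded).get(index);
    }

    void set(int index, E value) {
        if (copy == null) {
            // First change: replace the decoded view by a mutable copy.
            copy = new ArrayList<E>(decoded);
        }
        copy.set(index, value);
    }

    boolean isModified() {
        return copy != null;
    }
}
```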
Phase 4: Consistency checks while writing
As I said, making sure that we preserve consistency over the entire file is going to be one of the biggest challenges. The previous phase made sure that we can actually change the file and write it again, but it doesn't guarantee that whatever gets written is consistent. For that, we need something else. This phase is about adding a feature that checks consistency while writing.
(Consequently, data early in the file will always prevail over data later in the file. If the file first contains an integer denoting the size of the list that follows, and that value differs from the actual size of the list when the list needs to be written, the list will either need to be truncated or grown, or an exception will have to be thrown. This phase is about making sure an exception is thrown.)
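A sketch of that check, with made-up names: since the declared size was decoded early in the file, it prevails, and encoding has to fail when the list no longer matches it rather than write corrupt data.

```java
import java.util.List;

class ConsistencyCheck {

    static void checkListSize(int declaredSize, List<?> list) {
        if (declaredSize != list.size()) {
            throw new IllegalStateException(
                    "Declared size " + declaredSize
                    + " does not match actual list size " + list.size()
                    + "; refusing to write inconsistent data.");
        }
    }
}
```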
Phase 5: Rewrite only if required
In many cases, it's not going to be required to first load data into an object and then write it to output again. If the encoded object did not change, then we can just stream the data straight from the original source. (In this case, the BitBuffer.) This is - hopefully - going to be an optimization that pays off big time.
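Roughly, and assuming we remember which slice of the original source an object was decoded from plus the change tracking from the earlier phases, the optimization could look like this. Everything here is illustrative, not Preon API.

```java
import java.nio.ByteBuffer;

class RewriteIfRequired {

    static void write(ByteBuffer source, int offset, int length,
                      boolean modified, ByteBuffer target) {
        if (modified) {
            // Changed: only now do we pay for a full re-encode.
            // (the actual re-encoding of the object would go here)
        } else {
            // Unchanged: stream the bytes straight from the original source.
            ByteBuffer slice = source.duplicate();
            slice.position(offset);
            slice.limit(offset + length);
            target.put(slice);
        }
    }
}
```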
Phase 6: Autocorrecting
In phase 4, I already said that there will be cases in which you ideally want the list of elements to grow or shrink when the attribute denoting the number of items in the list is updated. This phase is about considering solutions like these. It will probably be quite hard, if not downright impossible, but it's worth taking into consideration.
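An illustrative, made-up example of what autocorrection would mean: just before encoding, the size attribute is derived from the list it describes, so the two can never disagree in the output, instead of throwing as in phase 4. Whether Preon can do this in general is exactly the open question.

```java
import java.util.ArrayList;
import java.util.List;

class FileHeader {

    int numberOfItems;
    List<String> items = new ArrayList<String>();

    void autocorrect() {
        // Let the dependent attribute follow the data it refers to,
        // instead of throwing when they diverge (the phase 4 behaviour).
        numberOfItems = items.size();
    }
}
```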
Phase 7, 8, 9, 10, ...
Oh man, if only I had the time.