Java Buzz Forum
Big [censored] Caching

Brian McCallister

Posts: 1282
Nickname: frums
Registered: Sep, 2003

Brian McCallister is JustaProgrammer who thinks too much.
Big [censored] Caching Posted: Nov 9, 2004 12:53 AM

This post originated from an RSS feed registered with Java Buzz by Brian McCallister.
Original Post: Big [censored] Caching
Feed Title: Waste of Time
Feed URL: http://kasparov.skife.org/blog/index.rss
Feed Description: A simple waste of time and weblog experiment

Reading about LiveJournal's cache development and memcached has had me thinking about caching, a lot, for a couple months.

I recently had the pleasure of implementing a fun system designed around the idea that, in most cases, local caches will hold the data you will need. If you can provide server affinity (not terribly difficult to do, though annoying; see Pound or httpd), then you will, for the most part, have the set of data needed by the current user in the local cache on that node. Then all you need to do is replicate dirty messages. JMS rocks for this, and voilà.
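
For the dirty-message replication, a minimal sketch of what I mean, using plain JMS pub/sub. The JNDI names and the "Entity:id" message format are made up for the sketch; wire them to whatever your provider wants:

    import javax.jms.*;
    import javax.naming.InitialContext;

    // Broadcast "dirty" notifications so every node can evict the stale
    // entry from its local cache.
    public class CacheDirtyPublisher {
        private final TopicConnection connection;
        private final TopicSession session;
        private final TopicPublisher publisher;

        public CacheDirtyPublisher() throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory factory =
                (TopicConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Topic topic = (Topic) ctx.lookup("jms/CacheDirtyTopic");
            connection = factory.createTopicConnection();
            session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            publisher = session.createPublisher(topic);
        }

        // Tell every node that Foo[id] changed; each node evicts it locally.
        public void publishDirty(String entityName, long id) throws JMSException {
            TextMessage msg = session.createTextMessage(entityName + ":" + id);
            publisher.publish(msg);
        }
    }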

I have been trying to reconcile the (extremely powerful!) caching systems described in the memcached/LiveJournal presentation with resource-level (OJB, Hibernate, etc.) caching, and have been having a rough time. Yes, you can do interesting things with cache regions and tweaking the mapping / lazy-loading / etc., but in the end you will probably hit the database often enough to swamp it under a huge load.

So, take a cue from memcached and put caching at the application level. You can still hide it from other services (decorate your DAO), and that is probably a good idea. The problem is that the state of the art in Java is fine-grained domain models, and I like this, a lot. It makes figuring out what to cache, and how/when to dirty cached things, much trickier though.
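
By "decorate your DAO" I mean something like this sketch. FooDao and Foo are hypothetical stand-ins, and the synchronized map is just a placeholder for memcached or whatever cache you actually use:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // A caching decorator that hides the cache behind the same DAO
    // interface the rest of the services already use.
    public class CachingFooDao implements FooDao {
        private final FooDao delegate;
        private final Map cache = Collections.synchronizedMap(new HashMap());

        public CachingFooDao(FooDao delegate) {
            this.delegate = delegate;
        }

        public Foo findById(Long id) {
            Foo foo = (Foo) cache.get(id);   // cheap cache hit
            if (foo == null) {
                foo = delegate.findById(id); // expensive database hit
                cache.put(id, foo);
            }
            return foo;
        }

        public void save(Foo foo) {
            delegate.save(foo);
            cache.remove(foo.getId());       // dirty the cached copy
        }
    }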

A simple example is a one-to-many collection on a Foo. You cache the Foo, and either strip the collection or risk having the things in it be dirty. Every O/R mapping tool out there handles this, but none of them does it well in a really highly replicated environment. In short, you wind up relying on the database being horizontally scalable, which is way more expensive than the cache and web server being horizontally scalable.

The solution came to me in two parts. One: Eric Evans struck again. You want to carefully control the entities that people are allowed to query for. Queries are the expensive part, not the retrieval. If you know the PK you can retrieve it from the cache; finding the PK is what hurts, and that is the query. So we take Evans's aggregates, and their root elements, and build around querying for aggregates. Now there is only a small(er) number of things you can query for. More importantly, there are object graphs (aggregates) you can place into the cache as a single cache entry.
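
Concretely, something like this sketch, with a hypothetical Order aggregate; Cache, OrderDao, and loadWithLineItems are stand-ins:

    // The aggregate root (Order) and everything inside its boundary (the
    // line items) go into the cache as one entry, keyed by the root's PK.
    // Retrieval by PK never touches the database on a hit.
    public class OrderRepository {
        private final Cache cache;   // facade over memcached, a map, whatever
        private final OrderDao dao;

        public OrderRepository(Cache cache, OrderDao dao) {
            this.cache = cache;
            this.dao = dao;
        }

        public Order findByPk(Long pk) {
            String key = "Order:" + pk;
            Order order = (Order) cache.get(key);
            if (order == null) {
                order = dao.loadWithLineItems(pk); // the one real query
                cache.put(key, order);             // whole graph, one entry
            }
            return order;
        }
    }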

The second part is having a means of dirtying the cache which is aware of aggregate boundaries. If you can map an entity to the set of aggregates it can be participating in (i.e., a set of cache roots), you can find all instances of the entity in the cache. It seems like this might be hard (out of arbitrarily many object graphs, how do you know which ones hold Foo[id=727]?), but it isn't that hard (I hope).

You have the O/R mapping metadata (hopefully soon to be standardized in JSR 220), which tells you what CAN hold a reference, and what the relationship between the instance and the aggregate root actually is. Knowing the relationship, you can collect, in one query, the PKs of all the aggregate roots which hold a reference to Foo[id=727]. This query may join seventeen, or even fifty, tables, but it is deterministically constructable (yes, databases can handle fifty-table joins; I met a Hibernate query doing just that the other day and am still shaking my head). It does put tons of load on the db, though (fifty-table joins are expensive). Luckily, you are caching the heck out of the thing, so actual reads against the write database only happen as data is pushed out to the read databases, from which it soon gets pulled into cache.
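
As a deliberately tiny sketch of the constructed query, suppose the metadata says a Foo is only reachable from an Order through an order_line row. The table and column names are made up; in real life the SQL is generated from the O/R metadata and may join many more tables:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    // Collect the PKs of every aggregate root that holds a reference to
    // Foo[id]. These PKs are exactly the cache keys that need dirtying.
    public class AggregateRootFinder {
        private final DataSource dataSource;

        public AggregateRootFinder(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public List findRootPksHolding(long fooId) throws SQLException {
            String sql = "SELECT DISTINCT o.id FROM orders o "
                       + "JOIN order_line ol ON ol.order_id = o.id "
                       + "WHERE ol.foo_id = ?";
            List pks = new ArrayList();
            Connection con = dataSource.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(sql);
                ps.setLong(1, fooId);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    pks.add(new Long(rs.getLong(1)));
                }
            } finally {
                con.close();
            }
            return pks;
        }
    }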

Now, the other approach is to not query for the aggregate roots, but to work from the O/R layer, where the aggregate root is already in memory. This doesn't tell you the relation between this root and other roots which can contain a given entity, but I bet for most models that isn't terribly difficult to figure out. This, added to the O/R metadata, drastically reduces the search space for the aggregate root keys.

Now that we can determine the aggregate roots, which are the keys in the cache, we can dirty the correct ones =)
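
Tying the two sketches above together (names as before, all hypothetical):

    import java.util.Iterator;
    import java.util.List;

    // When a Foo changes, find the aggregate roots that can contain it and
    // broadcast a dirty message for each, so every node evicts that entry.
    public class AggregateDirtier {
        private final AggregateRootFinder finder;
        private final CacheDirtyPublisher publisher;

        public AggregateDirtier(AggregateRootFinder finder,
                                CacheDirtyPublisher publisher) {
            this.finder = finder;
            this.publisher = publisher;
        }

        public void fooChanged(long fooId) throws Exception {
            List rootPks = finder.findRootPksHolding(fooId);
            for (Iterator i = rootPks.iterator(); i.hasNext();) {
                Long pk = (Long) i.next();
                publisher.publishDirty("Order", pk.longValue());
            }
        }
    }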

Because aggregate roots are really just row/PK information, you can dirty them from anywhere, including your Ruby batch processes.

The weak point in this chain is the claim that aggregate root keys can be determined from relationship metadata and dirty-instance identity with sufficiently less load on the (write-only) database that it beats the network traffic of fine-grained, flat-space caching. I am not totally convinced yet (need to do the proofs, I guess), though it feels right, and I am really, really willing to bet that it holds.

Read: Big [censored] Caching
