Remember your college introduction to data structures and algorithms course? Remember how you spent time working out the complexity of various algorithms in big-O notation? There was one very important assumption baked into all of those calculations: that the cost of accessing any address in memory was constant.
With today's multi-tiered CPU cache architectures, it has become increasingly obvious (to those who care) that this assumption no longer holds. A full cache miss on a modern 3GHz+ Pentium 4 can cost you on the order of 1,000 operations' worth of time while you wait for the data to show up at the CPU.
I don't have anything like the background to really comment on this, but it looks like the sort of thing that would knock over a lot of venerable assumptions.