Last time I discussed Xtreams, I was lamenting some performance issues we were having with substreams. Martin and I have just finished rewriting the broken parts and all the tests pass again. So here is a post I delayed once before that I can now finish.
Reading the contents of a file off disk that is encoded in UTF-8: my changes file is 6,613,356 bytes, give or take. How many characters are in the file, though? And how fast can we find out? Here is the classic Streams version:
| stream |
stream := ('changes.cha' asFilename withEncoding: #utf8) readStream.
[stream upToEnd size] ensure: [stream close].
This yields a result of 6,583,057 characters and took 2.845 seconds to run. So how do we achieve the same result using Xtreams?
| stream |
stream := 'changes.cha' asFilename reading encoding: #utf8.
[stream rest size] ensure: [stream close].
This yields the same character count and ran in 3.427 seconds. Now imagine that the file on disk is stored using the default platform string encoding. In this case, the classic Streams code becomes:
'changes.cha' asFilename contentsOfEntireFile size
This ran in 1.035 seconds. The Xtreams version can be smarter, since it can use the primitives that already exist to read using the platform encoding:
| stream |
stream := 'changes.cha' asFilename reading contentsSpecies: String.
[stream rest size] ensure: [stream close].
This ran in 0.551 seconds. Now let's say we want to read each line and count every line that starts with a < character (which has a code point of 60).
| stream count line |
stream := 'changes.cha' asFilename readStream binary.
count := 0.
[stream atEnd] whileFalse:
	[line := stream upTo: 13.	"13 = Character cr's code point"
	(line notEmpty and: [line first = 60]) ifTrue: [count := count + 1]].
This ran in 0.61 seconds and returned 46,388. The Xtreams version looks like this:
| stream count |
stream := 'changes.cha' asFilename reading.
count := 0.
"each substream is one line, with the terminating 13 included"
(stream ending: 13 inclusive: true) do:
	[:substream | substream get = 60 ifTrue: [count := count + 1]]
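One caveat: unlike the classic version, which guards with notEmpty, this one assumes every line has at least one character; on an empty line, substream get has nothing to answer. A guarded variant, as a sketch that assumes Xtreams' Incomplete exception is what gets raised when a substream has nothing left to read:

(stream ending: 13 inclusive: true) do:
	[:substream |
	[substream get = 60 ifTrue: [count := count + 1]]
		on: Incomplete do: [:ex | "an empty line; nothing to test"]]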
The unguarded version runs in 0.402 seconds. At this point Xtreams starts to pull ahead of classic Streams in simplicity. For example, what if you want every line in a file?
(('somefile.txt' asFilename reading encoding: #utf8) ending: Character cr) collect: #rest
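Note that collect: is an Xtreams transform, so the expression above yields a stream of line strings rather than a ready-made collection; you pull lines out of it like any other stream. A minimal consumption sketch (the file name and the Transcript output are just for illustration):

| lines |
lines := (('somefile.txt' asFilename reading encoding: #utf8)
	ending: Character cr) collect: #rest.
[lines do: [:line | Transcript show: line; cr]] ensure: [lines close]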
You can use this technique to iterate over sections split by newlines and write out a transformed stream (see the sketch at the end of this post). You can take it a step further and look for specific content as well:
| stream |
stream := 'changes.cha' asFilename reading.
((stream encoding: #utf8) ending: 'class') collect: [:substream | stream position]
The above gave me each position in the file where the word 'class' occurs. This ran in 3.188 seconds for me.
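Finally, here is the transformed-stream idea mentioned earlier: read a file section by section and write each section out, transformed, to another file. This is a minimal sketch that assumes the writing side of Xtreams mirrors the reading side (asFilename writing, with write: appending a collection); the file names and the uppercasing transform are just for illustration:

| input output |
input := 'somefile.txt' asFilename reading encoding: #utf8.
output := 'upcased.txt' asFilename writing encoding: #utf8.
[(input ending: Character cr inclusive: true) do:
	[:substream | output write: substream rest asUppercase]]
	ensure: [input close. output close]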