Work done since my previous post: Compression in ventisrv.
As you might know by now, ventisrv stores blocks from 1 byte to 56KB in size on disk. These blocks are immutable: they cannot be (re)moved or modified. The typical block size is 8KB for normal data blocks, and usually smaller for directory entries and pointer blocks (but that’s outside the scope of this post). Now venti from Plan 9, and the newer venti from Plan 9 From User Space, try to compress a block before writing it to disk, and store it compressed only if compression makes it smaller. This saves disk space for many blocks.
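A minimal sketch of that store decision, in Python with zlib’s deflate standing in for the compressor (the function name and return shape are mine, not ventisrv’s):

```python
import zlib

def store_block(data: bytes) -> tuple[bytes, bool]:
    """Return (payload, compressed_flag): keep the deflated form only
    when it actually saves space, otherwise store the block raw."""
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return compressed, True
    return data, False

# Repetitive data compresses well; a tiny or random block does not,
# because the deflate framing alone adds several bytes of overhead.
payload, was_compressed = store_block(b"hello " * 100)
```

Note that very small blocks (venti accepts blocks down to 1 byte) will essentially always be stored raw, since the compressed framing exceeds the data itself.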
Ventisrv at first did not have the ability to compress blocks, but always stored them ‘raw’. With the changes of this week, ventisrv has an option to enable compression. It isn’t on by default because the compression speed limits write throughput: compression is expensive (more so in Limbo than in C). Also, if you enable this, make sure you have the just-in-time compiler enabled as well.
Ventisrv handles compression a bit differently from the other venti’s. First, ventisrv uses the deflate/inflate compression algorithms (the ones used by gzip), whereas the other venti’s use whack, an algorithm that doesn’t have much documentation but seems to be favourable at least when compressing smaller blocks. Whack isn’t available for Inferno though, and deflate/inflate is: an easy choice.
The second difference is that the other venti’s write a header and the compressed payload for each block, whereas ventisrv gathers multiple venti blocks, compresses their payloads together, and writes the headers first, followed by the compressed payload (representing data for multiple blocks). The idea is that compressing more data in one go gives the compressor a larger history buffer, allowing for better compression. A quick test on the data in the venti I have been using for backups for a few months shows this really does increase the compression ratio: gzip on the entire data file resulted in a compressed file of ~68% of the original, compressing up to 64KB of blocks at a time resulted in ~70%, and compressing each block separately in ~79%. The downside to compressing multiple blocks in one go is that to read a single block, they all have to be decompressed (at least up to the one sought), and all the compressed data has to be read from disk to do that. Thus, a maximum of ~64KB of raw/uncompressed data is compressed into one unit. This ensures we don’t have to read from disk or (de)compress too much data for each block.
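The effect of a shared history buffer is easy to demonstrate. A hedged illustration with zlib (the sample data is invented; real venti blocks won’t be this similar, so the gap will be smaller than shown here):

```python
import zlib

# Hypothetical sample: eight similar ~8KB blocks, the common case for
# venti data blocks that come from the same file tree.
blocks = [(b"some fairly repetitive file contents %d " % i) * 200
          for i in range(8)]

# Per-block: each block is compressed with a fresh, empty history window.
per_block = sum(len(zlib.compress(b)) for b in blocks)

# Batched: concatenate up to ~64KB of blocks and compress in one go, so
# later blocks can reference matches found in earlier ones.
batched = len(zlib.compress(b"".join(blocks)))
```

For data with any cross-block redundancy, `batched` comes out smaller than `per_block`, which is the same effect as the ~68% vs ~79% figures above.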
The file format has been “extended” with a new header for compressed blocks. This leaves data files written by previous versions of ventisrv valid. Note that this code is pretty fresh, may have bugs and hasn’t been tested much, so use it with care (and please report bugs back to me).
There is always room for improvement:
- Currently, all blocks are compressed. However, pointer blocks contain random-looking data (the scores) and are often not very compressible. Already compressed data (e.g. a .tgz) will not compress to a smaller block either: perhaps there is some sort of cheap way to detect the entropy of a block.
- Multiple blocks are compressed to the same buffer. If the end result is too large, the blocks are written without compression. When the last block added to the compression buffer is not compressible, this may cause earlier, compressible blocks to be stored raw as well.
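The cheap entropy detection mentioned in the first point could be as simple as a byte histogram. A hypothetical sketch (the function names and the 7.5 bits/byte threshold are my own choices, not anything in ventisrv):

```python
import math
from collections import Counter

def byte_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte; near 8.0 means the block is
    effectively incompressible (random scores, already-compressed data)."""
    if not block:
        return 0.0
    n = len(block)
    counts = Counter(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def worth_compressing(block: bytes, threshold: float = 7.5) -> bool:
    """Pre-check: skip deflate entirely when the byte distribution is
    already close to uniform."""
    return byte_entropy(block) < threshold
```

This costs one pass over the block plus a 256-entry histogram, far cheaper than running deflate only to throw the result away.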
But well, these are mostly details, and the benefit of addressing them is not obvious.
Another small thing has changed in ventisrv: support for specifying read-only connections. Specifying -r addr makes ventisrv listen on addr and disallow write and sync messages from clients that connected to that address. With -w addr, writable connections can be listened for.
I’ve also started on some data fingerprinting code, but that is not working yet, so nothing to show. That’s it for now!