This is an implementation of the Lempel-Ziv-Welch (LZW) general-purpose data compression algorithm. It is targeted at embedded applications that require high-speed compression or decompression where lots of RAM for large dictionaries might not be available.

This is a streaming compressor: the data is not divided into blocks, and no context information such as dictionaries or Huffman tables is sent ahead of the compressed data (except for one byte to signal the maximum bit depth). This limits the maximum possible compression ratio compared to algorithms that significantly preprocess the data, but with the help of some enhancements to the LZW algorithm (described below) it is able to compress better than the Unix "compress" utility (which is also LZW-based).

Depending on the maximum symbol size selected, the implementation requires from 2,368 to 335,616 bytes of RAM for decoding (and about half again as much for encoding). I have used this in several projects for storing compressed firmware images, and once I even coded the decompressor in Z-80 assembly language for speed!
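To illustrate the core scheme this library is built on, here is a minimal sketch of LZW encoding and decoding. It is written in Python for clarity rather than in the embedded C/assembly the project targets, and it omits the features discussed above: the variable-width bit packing behind the "maximum bit depth" byte, the fixed-size dictionaries that bound RAM use, and the enhancements this implementation adds. The function names are illustrative, not the library's API.

```python
def lzw_encode(data: bytes) -> list[int]:
    """Encode bytes into a list of LZW codes (no bit packing)."""
    # Dictionary starts with all 256 single-byte strings.
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for b in data:
        wc = w + bytes([b])
        if wc in dictionary:
            w = wc                      # grow the current match
        else:
            out.append(dictionary[w])   # emit code for longest match
            dictionary[wc] = len(dictionary)  # add new string
            w = bytes([b])
    if w:
        out.append(dictionary[w])
    return out


def lzw_decode(codes: list[int]) -> bytes:
    """Rebuild the dictionary on the fly; no tables travel with the data."""
    dictionary = {i: bytes([i]) for i in range(256)}
    it = iter(codes)
    w = dictionary[next(it)]
    out = bytearray(w)
    for k in it:
        if k in dictionary:
            entry = dictionary[k]
        elif k == len(dictionary):
            entry = w + w[:1]           # the KwKwK special case
        else:
            raise ValueError("corrupt LZW stream")
        out.extend(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return bytes(out)
```

Note how the decoder reconstructs the same dictionary the encoder built, one code behind; this is what makes the streaming, no-table-upfront design possible.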