SYNOPSIS
bzip2 [-cdfkLstvVz] [-1 ... -9] [file ...]
bunzip2 [-fkLsvV] [file ...]
bz2cat [-s] [file ...]
bzip2recover file
DESCRIPTION
The bzip2 utility compresses files using the Burrows-Wheeler block-sorting text compression algorithm, and Huffman coding. Compression is generally considerably better than that achieved by more conventional LZ77/LZ78-based compressors, and approaches the performance of the PPM family of statistical compressors.
The command-line options are deliberately very similar to those of gzip, but they are not identical.
bzip2 expects a list of file names to accompany the command-line options. Each specified file is replaced by a compressed version of itself, which has the same file name as the original with .bz2 appended. Each compressed file has the same modification date and permissions as the corresponding original, so that these properties can be correctly restored at decompression time. File name handling is naive in the sense that there is no mechanism for preserving original file names, permissions and dates in file systems which lack these concepts, or have serious file name length restrictions, such as MS-DOS.
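For example (the file name here is purely illustrative), the following command replaces myfile.txt with myfile.txt.bz2, preserving the original's modification date and permissions:

   bzip2 myfile.txt

Running bunzip2 myfile.txt.bz2 afterwards reverses the operation.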
The bzip2 and bunzip2 utilities, by default, do not overwrite existing files. If you want this to happen, specify the -f option.
When no file names are specified, bzip2 compresses data read from standard input and writes the result to standard output. In this case, bzip2 declines to write compressed output to a terminal, as this would be entirely incomprehensible and, therefore, pointless.
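For instance (assuming a directory named mydir), a common pattern is to pipe an archive through bzip2 so that the compressed data goes to a file rather than to a terminal:

   tar cf - mydir | bzip2 > mydir.tar.bz2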
The bunzip2 utility (or bzip2 -d) decompresses all specified files.
The bunzip2 utility correctly decompresses a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding uncompressed files. Integrity testing (-t) of concatenated compressed files is also supported.
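A minimal illustration of this, with invented file names (using -k so the input files are kept):

   bzip2 -k part1 part2                 # produces part1.bz2 and part2.bz2
   cat part1.bz2 part2.bz2 > both.bz2   # concatenate the compressed files
   bunzip2 both.bz2                     # the resulting file both is part1 followed by part2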
You can also compress or decompress files to the standard output by using the -c option. The bz2cat utility (or bzip2 -dc) decompresses all specified files to the standard output.
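For example, to view a compressed file without writing an uncompressed copy to disk (the file name is invented):

   bz2cat notes.txt.bz2 | more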
Compression is always performed, even if the compressed file is slightly larger than the original. Files of less than about one hundred bytes tend to get larger, since the compression mechanism has a constant overhead in the region of 50 bytes. Random data (including the output of most file compressors) is coded at about 8.05 bits per byte, giving an expansion of around 0.5%.
As a self-check for your protection, bzip2 uses 32-bit CRCs to make sure that the decompressed version of a file is identical to the original. This guards against corruption of the compressed data, and against undetected bugs in bzip2 (hopefully very unlikely). The chances of data corruption going undetected are microscopic, about one chance in four billion for each file processed. Be aware, though, that the check occurs upon decompression, so it can only tell you that something is wrong. It can't help you recover the original uncompressed data. You can use bzip2recover to try to recover data from damaged files.
Memory Management
The bzip2 utility compresses large files in blocks.
The block size affects both the
compression ratio achieved, and the amount of memory needed both for
compression and decompression. The -1 through -9 options select a block size of 100k through 900k (the default), respectively; see Options below. The memory requirements can be estimated as:

   Compression:   400k + ( 7 x block size )
   Decompression: 100k + ( 4 x block size ), or
                  100k + ( 2.5 x block size )
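To make these formulas concrete: with the -5 option (a 500k block size), they give roughly 400k + 7 x 500k = 3900k for compression, 100k + 4 x 500k = 2100k for ordinary decompression, and 100k + 2.5 x 500k = 1350k for decompression with -s, which matches the table below.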
Larger block sizes give rapidly diminishing marginal returns; most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression-time by the choice of block size.
For files compressed with the default 900k block size,
bunzip2 will require about 3700 kbytes to decompress.
To support decompression of any file on a 4 megabyte machine,
bunzip2 has an option to decompress using approximately
half this amount of memory, about 2300 kbytes. Decompression speed is
also halved, so you should use this option only where necessary.
The relevant option is -s.
In general, try to use the largest block size memory constraints allow, since that maximises the compression achieved. Compression and decompression speed are virtually unaffected by block size.
Another significant point applies to files which fit in a single
block — that means most files you'd encounter using a large
block size. The amount of real memory touched is proportional
to the size of the file, since the file is smaller than a block.
For example, compressing a file 20,000 bytes long with the -9 option causes the compressor to allocate around 6700 kbytes of memory, but to touch only 400k + 20000 x 7 = 540 kbytes of it. Similarly, the decompressor allocates 3700 kbytes but touches only 100k + 20000 x 4 = 180 kbytes.
Here is a table which summarises the maximum memory usage for different block sizes. Also recorded is the total compressed size for 14 files of the Calgary Text Compression Corpus totalling 3,141,622 bytes. This column gives some feel for how compression varies with block size. These figures tend to understate the advantage of larger block sizes for larger files, since the Corpus is dominated by smaller files.
           Compress   Decompress   Decompress    Corpus
   Option  usage      usage        -s usage      Size

     -1     1100k        500k         350k       914704
     -2     1800k        900k         600k       877703
     -3     2500k       1300k         850k       860338
     -4     3200k       1700k        1100k       846899
     -5     3900k       2100k        1350k       845160
     -6     4600k       2500k        1600k       838626
     -7     5400k       2900k        1850k       834096
     -8     6000k       3300k        2100k       828642
     -9     6700k       3700k        2350k       828642
Recovering Data From Damaged Files
The bzip2 utility compresses files in blocks, usually 900kbytes long. Each block is handled independently. If a media or transmission error causes a multi-block .bz2 file to become damaged, it may be possible to recover data from the undamaged blocks in the file.
The compressed representation of each block is delimited by a 48-bit pattern, which makes it possible to find the block boundaries with reasonable certainty. Each block also carries its own 32-bit CRC, so damaged blocks can be distinguished from undamaged ones.
The bzip2recover utility
is a simple program whose purpose is to search for
blocks in .bz2 files, and write each block out into
its own .bz2 file. You can then use bzip2 -d to decompress the resulting files.
The bzip2recover utility takes a single argument, the name of the damaged file, and writes a number of files rec0001file.bz2, rec0002file.bz2, and so forth, containing the extracted blocks. The output file names are designed so that wildcards can be used in subsequent processing. For example,
bzip2 -dc rec*file.bz2 > recovered_data
processes the files in the correct order.
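Before reassembling, you can run an integrity test on each extracted piece and set aside any that report errors, for example:

   bzip2 -tv rec*file.bz2

Only the data in the damaged blocks is then lost.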
The bzip2recover utility should be of most use dealing with large .bz2 files, as these will contain many blocks. It is clearly futile to use it on damaged single-block files, since a damaged block cannot be recovered. If you wish to minimize any potential data loss through media or transmission errors, you might consider compressing with a smaller block size.
Options
-c --stdout
    compresses or decompresses to standard output. -c decompresses multiple files to standard output, but only compresses a single file to standard output.

-d --decompress
    forces decompression. bzip2, bunzip2 and bz2cat are really the same program, and the decision about which action to take is made on the basis of the name used. This option overrides that mechanism, and forces bzip2 to decompress.
-f --force
    forces the overwriting of output files. Normally, bzip2 does not overwrite existing output files.
-k --keep
    keeps (that is, does not delete) input files during compression or decompression.
-L --license, -V --version
    displays the software version, license terms and conditions.
--repetitive-best
    is the opposite of --repetitive-fast; tries a lot harder before resorting to randomization.

--repetitive-fast
    bzip2 injects some small pseudo-random variations into very repetitive blocks to limit worst-case performance during compression. If sorting runs into difficulties, the block is randomized, and sorting is restarted. Very roughly, bzip2 persists for three times as long as a well-behaved input would take before resorting to randomization. This option makes it give up much sooner.
-s --small
    reduces memory usage for compression, decompression and testing. Files are decompressed and tested using a modified algorithm which only requires 2.5 bytes per block byte. This means any file can be decompressed in 2300k of memory, albeit at about half the normal speed.

    During compression, -s selects a block size of 200k, which limits memory use to around the same figure, at the expense of your compression ratio. In short, if your machine is low on memory (8 megabytes or less), use -s for everything. See Memory Management above.

-t --test
    checks the integrity of the specified files, but does not decompress them. This option actually performs a trial decompression and throws away the result.
-v --verbose
    uses verbose mode and shows the compression ratio for each file processed. Further -v options on the command line increase the verbosity level, displaying lots of information which is primarily of interest for diagnostic purposes.

-z --compress
    is the complement to -d. It forces compression, regardless of the name used to invoke the utility.

-#
    sets the block size to be used when compressing. # can be a single digit from 1 to 9, representing block sizes of 100k, 200k, ..., 900k. This option has no effect when decompressing. See Memory Management above.
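For example, to compress with a 200k block size, keep the input file, and report the compression ratio (the file name is invented):

   bzip2 -2 -k -v data.log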
DIAGNOSTICS
Possible exit status values are:
0    Successful completion.

1    An environmental problem occurred (for example, file not found, invalid options, I/O errors, and so on).

2    A compressed file was corrupt.

3    An internal consistency error (for example, a bug) occurred which caused bzip2 to panic.
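In a script, the exit status can be tested directly; for example, this reports archives that fail the integrity check (the file name is invented):

   bzip2 -t archive.bz2 || echo "archive.bz2 is corrupt or unreadable" >&2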
PERFORMANCE NOTES
The sorting phase of compression gathers together similar strings
in the file. Because of this, files containing very long
runs of repeated symbols, like aabaabaabaab ... (repeated
several hundred times) may compress extraordinarily slowly.
You can use the --repetitive-fast option to make bzip2 give up sorting and fall back to randomization much sooner in such cases (see Options above).
Such pathological cases seem rare in practice, appearing mostly in
artificially-constructed test files, and in low-level disk images.
It may be inadvisable to use bzip2 to compress the latter.
If you do get a file which causes severe slowness in compression,
try making the block size as small as possible, with the -1 option.
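If such a file turns up, you can combine the smallest block size with the early give-up option, for example (the file name is invented):

   bzip2 -1 --repetitive-fast disk.img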
The bzip2 utility usually allocates several megabytes of memory to operate in, and then charges all over it in a fairly random fashion. This means that performance, both for compressing and decompressing, is largely determined by the speed at which your machine can service cache misses. Because of this, small changes to the code to reduce the miss rate have been observed to give disproportionately large performance improvements. bzip2 will most likely perform best on machines with very large caches.
CAVEATS
-
I/O error messages are not as helpful as they could be. The bzip2 utility tries hard to detect I/O errors and exit cleanly, but the details of what the problem is sometimes seem rather misleading.
-
This reference page pertains to version 1.0.5 of bzip2. Compressed data created by this version is entirely forwards and backwards compatible with the previous public release, version 0.1pl2, but with the following exception: 1.0.5 can correctly decompress multiple concatenated compressed files. 0.1pl2 cannot do this; it will stop after decompressing just the first file in the stream.
-
Wildcard expansion for Windows systems is flaky.
-
The bzip2recover utility uses 32-bit integers to represent bit positions in compressed files, so it cannot handle compressed files more than 512 megabytes long. This could easily be fixed.
AUTHOR
Julian Seward, jseward@acm.org.
http://www.muraroa.demon.co.uk
The ideas embodied in bzip2 are due to (at least) the following people: Michael Burrows and David Wheeler (for the block sorting transformation), David Wheeler (again, for the Huffman coder), Peter Fenwick (for the structured coding model in the original bzip, and many refinements), and Alistair Moffat, Radford Neal and Ian Witten (for the arithmetic coder in the original bzip). I am much indebted for their help, support and advice. Christian von Roques encouraged me to look for faster sorting algorithms, so as to speed up compression. Bela Lubkin encouraged me to improve the worst-case compression performance. Many people sent patches, helped with portability problems, lent machines, gave advice and were generally helpful.
AVAILABILITY
PTC MKS Toolkit for Power Users
PTC MKS Toolkit for System Administrators
PTC MKS Toolkit for Developers
PTC MKS Toolkit for Interoperability
PTC MKS Toolkit for Professional Developers
PTC MKS Toolkit for Professional Developers 64-Bit Edition
PTC MKS Toolkit for Enterprise Developers
PTC MKS Toolkit for Enterprise Developers 64-Bit Edition
SEE ALSO
- Commands:
- bzdiff, bzgrep, bzmore, compress, gzip, mkszip, pack, uncompress, unpack, unzip, zcat, zip, zipinfo
MKS Toolkit Backup and Tape Handling Solutions Guide
PTC MKS Toolkit 10.4 Documentation Build 39.