.SH NAME
bzip2, bunzip2 \- a block-sorting file compressor, v1.0.6
.br
bzcat \- decompresses files to stdout
.br
bzip2recover \- recovers data from damaged bzip2 files
.SH SYNOPSIS
.B bzip2
.RB [ " \-cdfkqstvzVL123456789 " ]
.I "filenames \&..."
.SH DESCRIPTION
bzip2 compresses files using the Burrows-Wheeler block sorting
text compression algorithm, and Huffman coding.  Compression is
generally considerably better than that achieved by more conventional
LZ77/LZ78-based compressors, and approaches the performance of the PPM
family of statistical compressors.
The command-line options are deliberately very similar to those of
GNU gzip, but they are not identical.
bzip2 expects a list of file names to accompany the command-line
flags.  Each file is replaced by a compressed version of itself, with
the name "original_name.bz2".  Each compressed file has the same
modification date, permissions, and, when possible, ownership as the
corresponding original, so that these properties can be correctly
restored at decompression time.  File name handling is naive in the
sense that there is no mechanism for preserving original file names,
permissions, ownerships or dates in filesystems which lack these
concepts, or have serious file name length restrictions, such as
MS-DOS.

bzip2 will by default not overwrite existing files.  If you want this
to happen, specify the \-f flag.
If no file names are specified, bzip2 compresses from standard input
to standard output.  In this case, bzip2 will decline to write
compressed output to a terminal, as this would be entirely
incomprehensible and therefore pointless.
bunzip2 (or bzip2 \-d) decompresses all specified files.  Files which
were not created by bzip2 will be detected and ignored, and a warning
issued.  bzip2 attempts to guess the filename for the decompressed
file from that of the compressed file as follows:

       filename.bz2    becomes   filename
       filename.bz     becomes   filename
       filename.tbz2   becomes   filename.tar
       filename.tbz    becomes   filename.tar
       anyothername    becomes   anyothername.out
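The renaming rules above can be sketched as a small shell function.
This is purely illustrative (guess_name is a hypothetical helper, not
part of bzip2):

```shell
# guess_name: mimic bunzip2's output-name guessing (illustrative only)
guess_name() {
    case "$1" in
        *.tbz2) printf '%s\n' "${1%.tbz2}.tar" ;;   # .tbz2 -> .tar
        *.tbz)  printf '%s\n' "${1%.tbz}.tar" ;;    # .tbz  -> .tar
        *.bz2)  printf '%s\n' "${1%.bz2}" ;;        # strip .bz2
        *.bz)   printf '%s\n' "${1%.bz}" ;;         # strip .bz
        *)      printf '%s\n' "$1.out" ;;           # unknown: append .out
    esac
}

guess_name archive.tbz2   # archive.tar
guess_name notes.bz2      # notes
guess_name mystery        # mystery.out
```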
If the file does not end in one of the recognised endings, .bz2, .bz,
.tbz2 or .tbz, bzip2 complains that it cannot guess the name of the
original file, and uses the original name with .out appended.
As with compression, supplying no filenames causes decompression from
standard input to standard output.
bunzip2 will correctly decompress a file which is the concatenation of
two or more compressed files.  The result is the concatenation of the
corresponding uncompressed files.  Integrity testing (\-t) of
concatenated compressed files is also supported.
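For example, a stream built by concatenating two separately compressed
files decompresses to the concatenation of the originals (the file
names here are illustrative):

```shell
cd "$(mktemp -d)"                       # work in a scratch directory
printf 'hello\n' | bzip2 >  both.bz2    # first compressed file
printf 'world\n' | bzip2 >> both.bz2    # second one appended to it
bunzip2 -c both.bz2                     # prints hello, then world
```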
You can also compress or decompress files to the standard output by
giving the \-c flag.  Multiple files may be compressed and
decompressed like this.  The resulting outputs are fed sequentially to
stdout.  Compression of multiple files in this manner generates a
stream containing multiple compressed file representations.  Such a
stream can be decompressed correctly only by bzip2 version 0.9.0 or
later.  Earlier versions of bzip2 will stop after decompressing the
first file in the stream.
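A sketch of the same multi-stream behaviour driven from \-c (file
names are illustrative):

```shell
cd "$(mktemp -d)"
printf 'one\n' > a.txt
printf 'two\n' > b.txt
bzip2 -c a.txt b.txt > both.bz2   # one stream, two compressed members
bunzip2 -c both.bz2               # a 0.9.0-or-later bzip2 prints both
```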
bzcat (or bzip2 \-dc) decompresses all specified files to the standard
output.
bzip2 will read arguments from the environment variables BZIP2 and
BZIP, in that order, and will process them before any arguments read
from the command line.  This gives a convenient way to supply default
arguments.
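For instance, a default block size can be placed in BZIP2; since
environment arguments are processed before the command line, a flag
given on the command line afterwards takes effect instead (the file
names are illustrative):

```shell
# Behaves as if "bzip2 -1 big.file" had been typed
BZIP2=-1 bzip2 big.file

# Command-line flags are processed after the environment,
# so this one compresses with a 900k block size
BZIP2=-1 bzip2 -9 another.file
```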
Compression is always performed, even if the compressed file is
slightly larger than the original.  Files of less than about one
hundred bytes tend to get larger, since the compression mechanism has
a constant overhead in the region of 50 bytes.  Random data (including
the output of most file compressors) is coded at about 8.05 bits per
byte, giving an expansion of around 0.5%.
As a self-check for your protection, bzip2 uses 32-bit CRCs to make
sure that the decompressed version of a file is identical to the
original.  This guards against corruption of the compressed data, and
against undetected bugs in bzip2 (hopefully very unlikely).  The
chance of data corruption going undetected is microscopic, about one
chance in four billion for each file processed.  Be aware, though,
that the check occurs upon decompression, so it can only tell you that
something is wrong.  It can't help you recover the original
uncompressed data.  You can use bzip2recover to try to recover data
from damaged files.
Return values: 0 for a normal exit, 1 for environmental problems (file
not found, invalid flags, I/O errors, &c), 2 to indicate a corrupt
compressed file, 3 for an internal consistency error (eg, bug) which
caused bzip2 to panic.
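In a script, the exit status can be used to tell a corrupt archive
apart from other failures (the file name is illustrative):

```shell
if bzip2 -t archive.bz2 2>/dev/null; then
    echo "archive OK"
else
    status=$?                       # bzip2's exit status
    if [ "$status" -eq 2 ]; then
        echo "archive is corrupt"
    else
        echo "other problem (status $status)"
    fi
fi
```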
.SH OPTIONS
.B \-c --stdout
Compress or decompress to standard output.
.B \-d --decompress
Force decompression.  bzip2, bunzip2 and bzcat are really the same
program, and the decision about what actions to take is done on the
basis of which name is used.  This flag overrides that mechanism, and
forces bzip2 to decompress.
.B \-z --compress
The complement to \-d: forces compression, regardless of the
invocation name.
.B \-t --test
Check integrity of the specified file(s), but don't decompress them.
This really performs a trial decompression and throws away the result.
.B \-f --force
Force overwrite of output files.  Normally, bzip2 will not overwrite
existing output files.  Also forces bzip2 to break hard links to
files, which it otherwise wouldn't do.

bzip2 normally declines to decompress files which don't have the
correct magic header bytes.  If forced (\-f), however, it will pass
such files through unmodified.  This is how GNU gzip behaves.
.B \-k --keep
Keep (don't delete) input files during compression or decompression.
.B \-s --small
Reduce memory usage, for compression, decompression and testing.
Files are decompressed and tested using a modified algorithm which
only requires 2.5 bytes per block byte.  This means any file can be
decompressed in 2300k of memory, albeit at about half the normal
speed.

During compression, \-s selects a block size of 200k, which limits
memory use to around the same figure, at the expense of your
compression ratio.  In short, if your machine is low on memory (8
megabytes or less), use \-s for everything.  See MEMORY MANAGEMENT
below.
.B \-q --quiet
Suppress non-essential warning messages.  Messages pertaining to I/O
errors and other critical events will not be suppressed.
.B \-v --verbose
Verbose mode -- show the compression ratio for each file processed.
Further \-v's increase the verbosity level, spewing out lots of
information which is primarily of interest for diagnostic purposes.
.B \-L --license -V --version
Display the software version, license terms and conditions.
.B \-1 (or \-\-fast) to \-9 (or \-\-best)
Set the block size to 100 k, 200 k ... 900 k when compressing.  Has no
effect when decompressing.  See MEMORY MANAGEMENT below.  The \-\-fast
and \-\-best aliases are primarily for GNU gzip compatibility.  In
particular, \-\-fast doesn't make things significantly faster, and
\-\-best merely selects the default behaviour.
.B \-\-
Treats all subsequent arguments as file names, even if they start with
a dash.  This is so you can handle files with names beginning with a
dash, for example: bzip2 \-\- \-myfilename.
.B \-\-repetitive-fast \-\-repetitive-best
These flags are redundant in versions 0.9.5 and above.  They provided
some coarse control over the behaviour of the sorting algorithm in
earlier versions, which was sometimes useful.  0.9.5 and above have an
improved algorithm which renders these flags irrelevant.
.SH MEMORY MANAGEMENT
bzip2 compresses large files in blocks.  The block size affects both
the compression ratio achieved, and the amount of memory needed for
compression and decompression.  The flags \-1 through \-9 specify the
block size to be 100,000 bytes through 900,000 bytes (the default)
respectively.  At decompression time, the block size used for
compression is read from the header of the compressed file, and
bunzip2 then allocates itself just enough memory to decompress the
file.  Since block sizes are stored in compressed files, it follows
that the flags \-1 to \-9 are irrelevant to and so ignored during
decompression.
Compression and decompression requirements, in bytes, can be estimated
as:

       Compression:   400k + ( 8 x block size )

       Decompression: 100k + ( 4 x block size ), or
                      100k + ( 2.5 x block size )
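Plugging the default 900k block size into these formulas reproduces
the figures quoted elsewhere in this page:

```shell
block=900000
echo $(( 400000 + 8 * block ))        # compression:           7600000 bytes
echo $(( 100000 + 4 * block ))        # decompression:         3700000 bytes
echo $(( 100000 + block * 5 / 2 ))    # decompression with -s: 2350000 bytes
```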
Larger block sizes give rapidly diminishing marginal returns.  Most of
the compression comes from the first two or three hundred k of block
size, a fact worth bearing in mind when using bzip2 on small machines.
It is also important to appreciate that the decompression memory
requirement is set at compression time by the choice of block size.
For files compressed with the default 900k block size, bunzip2 will
require about 3700 kbytes to decompress.  To support decompression of
any file on a 4 megabyte machine, bunzip2 has an option to decompress
using approximately half this amount of memory, about 2300 kbytes.
Decompression speed is also halved, so you should use this option only
where necessary.  The relevant flag is \-s.
In general, try and use the largest block size memory constraints
allow, since that maximises the compression achieved.  Compression and
decompression speed are virtually unaffected by block size.
Another significant point applies to files which fit in a single block
-- that means most files you'd encounter using a large block size.
The amount of real memory touched is proportional to the size of the
file, since the file is smaller than a block.  For example,
compressing a file 20,000 bytes long with the flag \-9 will cause the
compressor to allocate around 7600k of memory, but only touch 400k +
20000 * 8 = 560 kbytes of it.  Similarly, the decompressor will
allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.
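The arithmetic for that 20,000-byte example, spelled out:

```shell
size=20000
echo $(( 400000 + 8 * size ))    # compression touches   560000 bytes
echo $(( 100000 + 4 * size ))    # decompression touches 180000 bytes
```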
Here is a table which summarises the maximum memory usage for
different block sizes.  Also recorded is the total compressed size for
14 files of the Calgary Text Compression Corpus totalling 3,141,622
bytes.  This column gives some feel for how compression varies with
block size.  These figures tend to understate the advantage of larger
block sizes for larger files, since the Corpus is dominated by smaller
files.
           Compress   Decompress   Decompress   Corpus
    Flag     usage      usage       -s usage     Size

     -1      1200k       500k         350k      914704
     -2      2000k       900k         600k      877703
     -3      2800k      1300k         850k      860338
     -4      3600k      1700k        1100k      846899
     -5      4400k      2100k        1350k      845160
     -6      5200k      2500k        1600k      838626
     -7      6100k      2900k        1850k      834096
     -8      6800k      3300k        2100k      828642
     -9      7600k      3700k        2350k      828642
.SH RECOVERING DATA FROM DAMAGED FILES
bzip2 compresses files in blocks, usually 900kbytes long.  Each block
is handled independently.  If a media or transmission error causes a
multi-block .bz2 file to become damaged, it may be possible to recover
data from the undamaged blocks in the file.
The compressed representation of each block is delimited by a 48-bit
pattern, which makes it possible to find the block boundaries with
reasonable certainty.  Each block also carries its own 32-bit CRC, so
damaged blocks can be distinguished from undamaged ones.
bzip2recover is a simple program whose purpose is to search for blocks
in .bz2 files, and write each block out into its own .bz2 file.  You
can then use bzip2 \-t to test the integrity of the resulting files,
and decompress those which are undamaged.
bzip2recover takes a single argument, the name of the damaged file,
and writes a number of files "rec00001file.bz2", "rec00002file.bz2",
etc, containing the extracted blocks.  The output filenames are
designed so that the use of wildcards in subsequent processing -- for
example, "bzip2 -dc rec*file.bz2 > recovered_data" -- processes the
files in the correct order.
bzip2recover should be of most use dealing with large .bz2 files, as
these will contain many blocks.  It is clearly futile to use it on
damaged single-block files, since a damaged block cannot be recovered.
If you wish to minimise any potential data loss through media or
transmission errors, you might consider compressing with a smaller
block size.
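A typical recovery session might look like this (the file name is
illustrative; on an undamaged file every extracted block simply tests
clean):

```shell
bzip2recover damaged.bz2                     # writes rec00001damaged.bz2, ...
bzip2 -t rec*damaged.bz2                     # test each extracted block
bzip2 -dc rec*damaged.bz2 > recovered_data   # shell globbing keeps the order
```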
.SH PERFORMANCE NOTES
The sorting phase of compression gathers together similar strings in
the file.  Because of this, files containing very long runs of
repeated symbols, like "aabaabaabaab ..." (repeated several hundred
times) may compress more slowly than normal.  Versions 0.9.5 and above
fare much better than previous versions in this respect.  The ratio
between worst-case and average-case compression time is in the region
of 10:1.  For previous versions, this figure was more like 100:1.  You
can use the \-vvvv option to monitor progress in great detail, if you
want.

Decompression speed is unaffected by these phenomena.
bzip2 usually allocates several megabytes of memory to operate in, and
then charges all over it in a fairly random fashion.  This means that
performance, both for compressing and decompressing, is largely
determined by the speed at which your machine can service cache
misses.  Because of this, small changes to the code to reduce the miss
rate have been observed to give disproportionately large performance
improvements.  bzip2 will therefore perform best on machines with very
large caches.
.SH CAVEATS
I/O error messages are not as helpful as they could be.  bzip2 tries
hard to detect I/O errors and exit cleanly, but the details of what
the problem is sometimes seem rather misleading.
This manual page pertains to version 1.0.6 of bzip2.  Compressed data
created by this version is entirely forwards and backwards compatible
with the previous public releases, versions 0.1pl2, 0.9.0, 0.9.5,
1.0.0, 1.0.1, 1.0.2 and above, but with the following exception: 0.9.0
and above can correctly decompress multiple concatenated compressed
files.  0.1pl2 cannot do this; it will stop after decompressing just
the first file in the stream.
bzip2recover versions prior to 1.0.2 used 32-bit integers to represent
bit positions in compressed files, so they could not handle compressed
files more than 512 megabytes long.  Versions 1.0.2 and above use
64-bit ints on some platforms which support them (GNU supported
targets, and Windows).  To establish whether or not bzip2recover was
built with such a limitation, run it without arguments.  In any event
you can build yourself an unlimited version if you can recompile it
with MaybeUInt64 set to be an unsigned 64-bit integer.
.SH AUTHOR
Julian Seward, jseward@bzip.org.
.SH ACKNOWLEDGEMENTS
The ideas embodied in bzip2 are due to (at least) the following
people: Michael Burrows and David Wheeler (for the block sorting
transformation), David Wheeler (again, for the Huffman coder), Peter
Fenwick (for the structured coding model in the original bzip, and
many refinements), and Alistair Moffat, Radford Neal and Ian Witten
(for the arithmetic coder in the original bzip).  I am much indebted
for their help, support and advice.  See the manual in the source
distribution for pointers to sources of documentation.  Christian von
Roques encouraged me to look for faster sorting algorithms, so as to
speed up compression.  Bela Lubkin encouraged me to improve the
worst-case compression performance.  Donna Robinson XMLised the
documentation.  The bz* scripts are derived from those of GNU gzip.
Many people sent patches, helped with portability problems, lent
machines, gave advice and were generally helpful.