yet another unzip library for node

yauzl


yet another unzip library for node. For zipping, see yazl.

Design principles:

  • Follow the spec. Don't scan for local file headers. Read the central directory for file metadata. (see No Streaming Unzip API).
  • Don't block the JavaScript thread. Use and provide async APIs.
  • Keep memory usage under control. Don't attempt to buffer entire files in RAM at once.
  • Never crash (if used properly). Don't let malformed zip files bring down client applications that are trying to catch errors.
  • Catch unsafe file names. See validateFileName().

Usage

var yauzl = require("yauzl");

yauzl.open("path/to/file.zip", {lazyEntries: true}, function(err, zipfile) {
  if (err) throw err;
  zipfile.readEntry();
  zipfile.on("entry", function(entry) {
    if (/\/$/.test(entry.fileName)) {
      // Directory file names end with '/'.
      // Note that entries for directories themselves are optional.
      // An entry's fileName implicitly requires its parent directories to exist.
      zipfile.readEntry();
    } else {
      // file entry
      zipfile.openReadStream(entry, function(err, readStream) {
        if (err) throw err;
        readStream.on("end", function() {
          zipfile.readEntry();
        });
        readStream.pipe(somewhere);
      });
    }
  });
});

See also examples/ for more usage examples.

API

The default for every optional callback parameter is:

function defaultCallback(err) {
  if (err) throw err;
}

open(path, [options], [callback])

Calls fs.open(path, "r") and reads the fd effectively the same as fromFd() would.

options may be omitted or null. The defaults are {autoClose: true, lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.

autoClose is effectively equivalent to:

zipfile.once("end", function() {
  zipfile.close();
});

lazyEntries indicates that entries should be read only when readEntry() is called. If lazyEntries is false, entry events will be emitted as fast as possible to allow pipe()ing file data from all entries in parallel. This is not recommended, as it can lead to out of control memory usage for zip files with many entries. See issue #22. If lazyEntries is true, an entry or end event will be emitted in response to each call to readEntry(). This allows processing of one entry at a time, and will keep memory usage under control for zip files with many entries.
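
With lazyEntries: true, each readEntry() call yields exactly one entry or end event, which makes it easy to wrap iteration in a promise. The following is a minimal sketch; nextEntry is a hypothetical helper name, not part of yauzl's API:

```javascript
// Hypothetical helper: resolves with the next Entry, or null at end of archive.
// Assumes the zipfile was opened with {lazyEntries: true}.
function nextEntry(zipfile) {
  return new Promise(function(resolve, reject) {
    function cleanup() {
      zipfile.removeListener("entry", onEntry);
      zipfile.removeListener("end", onEnd);
      zipfile.removeListener("error", onError);
    }
    function onEntry(entry) { cleanup(); resolve(entry); }
    function onEnd() { cleanup(); resolve(null); }
    function onError(err) { cleanup(); reject(err); }
    zipfile.on("entry", onEntry);
    zipfile.on("end", onEnd);
    zipfile.on("error", onError);
    zipfile.readEntry();
  });
}
```

Calling nextEntry() in a loop until it resolves null processes one entry at a time and keeps memory usage flat, as described above.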

decodeStrings is true by default and causes yauzl to decode strings with CP437 or UTF-8, as required by the spec. The exact effects of turning this option off are:

  • zipfile.comment, entry.fileName, and entry.fileComment will be Buffer objects instead of Strings.
  • Any Info-ZIP Unicode Path Extra Field will be ignored. See extraFields.
  • Automatic file name validation will not be performed. See validateFileName().

validateEntrySizes is true by default and ensures that an entry's reported uncompressed size matches its actual uncompressed size. This check happens as early as possible: either before emitting each "entry" event (for entries with no compression), or while piping the readStream obtained from openReadStream(). See openReadStream() for more information on defending against zip bomb attacks.

When strictFileNames is false (the default) and decodeStrings is true, all backslash (\) characters in each entry.fileName are replaced with forward slashes (/). The spec forbids file names with backslashes, but Microsoft's System.IO.Compression.ZipFile class in .NET versions 4.5.0 until 4.6.1 creates non-conformant zipfiles with backslashes in file names. strictFileNames is false by default so that clients can read these non-conformant zipfiles without knowing about this Microsoft-specific bug. When strictFileNames is true and decodeStrings is true, entries with backslashes in their file names will result in an error. See validateFileName(). When decodeStrings is false, strictFileNames has no effect.

The callback is given the arguments (err, zipfile). An err is provided if the End of Central Directory Record cannot be found, or if its metadata appears malformed. This kind of error usually indicates that this is not a zip file. Otherwise, zipfile is an instance of ZipFile.

fromFd(fd, [options], [callback])

Reads from the fd, which is presumed to be an open .zip file. Note that random access is required by the zip file specification, so the fd cannot be an open socket or any other fd that does not support random access.

options may be omitted or null. The defaults are {autoClose: false, lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.

See open() for the meaning of the options and callback.

fromBuffer(buffer, [options], [callback])

Like fromFd(), but reads from a RAM buffer instead of an open file. buffer is a Buffer.

If a ZipFile is acquired from this method, it will never emit the close event, and calling close() is not necessary.

options may be omitted or null. The defaults are {lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.

See open() for the meaning of the options and callback. The autoClose option is ignored for this method.

fromRandomAccessReader(reader, totalSize, [options], [callback])

This method of reading a zip file allows clients to implement their own back-end file system. For example, a client might translate read calls into network requests.

The reader parameter must be an instance of a subclass of RandomAccessReader that implements the required methods. The totalSize is a Number and indicates the total file size of the zip file.

options may be omitted or null. The defaults are {autoClose: true, lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.

See open() for the meaning of the options and callback.

dosDateTimeToDate(date, time)

Converts MS-DOS date and time data into a JavaScript Date object. Each parameter is a Number treated as an unsigned 16-bit integer. Note that this format does not support timezones, so the returned object will use the local timezone.
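
The packed layout, per the spec, stores year/month/day in the date word and hour/minute/second in the time word. Here is a standalone sketch of the decoding; it mirrors the spec's bit layout and is not necessarily yauzl's exact implementation:

```javascript
// Decode MS-DOS packed date/time words into a JavaScript Date (local time).
// date bits: 15-9 years since 1980, 8-5 month (1-12), 4-0 day (1-31)
// time bits: 15-11 hours, 10-5 minutes, 4-0 seconds divided by 2
function dosDateTimeToDateSketch(date, time) {
  var day = date & 0x1f;
  var month = (date >> 5) & 0xf;          // 1-12
  var year = ((date >> 9) & 0x7f) + 1980;
  var second = (time & 0x1f) * 2;         // note the 2-second resolution
  var minute = (time >> 5) & 0x3f;
  var hour = (time >> 11) & 0x1f;
  return new Date(year, month - 1, day, hour, minute, second);
}
```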

validateFileName(fileName)

Returns null if fileName is valid, or a String error message otherwise. If fileName starts with "/" or matches /^[A-Za-z]:\//, or if it contains ".." path segments or backslash ("\") characters, this function returns an error message appropriate for use like this:

var errorMessage = yauzl.validateFileName(fileName);
if (errorMessage != null) throw new Error(errorMessage);

This function is automatically run for each entry, as long as decodeStrings is true. See open(), strictFileNames, and Event: "entry" for more information.
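
A standalone sketch of the checks described above (yauzl's actual implementation and error messages may differ):

```javascript
// Returns an error message String, or null if fileName looks safe to extract.
function validateFileNameSketch(fileName) {
  if (fileName.indexOf("\\") !== -1) {
    return "invalid characters in fileName: " + fileName;
  }
  if (/^[A-Za-z]:/.test(fileName) || /^\//.test(fileName)) {
    return "absolute path: " + fileName;
  }
  if (fileName.split("/").indexOf("..") !== -1) {
    return "invalid relative path: " + fileName;
  }
  return null;
}
```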

Class: ZipFile

The constructor for the class is not part of the public API. Use open(), fromFd(), fromBuffer(), or fromRandomAccessReader() instead.

Event: "entry"

Callback gets (entry), which is an Entry. See open() and readEntry() for when this event is emitted.

If decodeStrings is true, entries emitted via this event have already passed file name validation. See validateFileName() and open() for more information.

If validateEntrySizes is true and this entry's compressionMethod is 0 (stored without compression), this entry has already passed entry size validation. See open() for more information.

Event: "end"

Emitted after the last entry event has been emitted. See open() and readEntry() for more info on when this event is emitted.

Event: "close"

Emitted after the fd is actually closed. This is after calling close() (or after the end event when autoClose is true), and after all stream pipelines created from openReadStream() have finished reading data from the fd.

If this ZipFile was acquired from fromRandomAccessReader(), the "fd" in the previous paragraph refers to the RandomAccessReader implemented by the client.

If this ZipFile was acquired from fromBuffer(), this event is never emitted.

Event: "error"

Emitted in the case of errors with reading the zip file. (Note that other errors can be emitted from the streams created from openReadStream() as well.) After this event has been emitted, no further entry, end, or error events will be emitted, but the close event may still be emitted.

readEntry()

Causes this ZipFile to emit an entry or end event (or an error event). This method must only be called when this ZipFile was created with the lazyEntries option set to true (see open()). When this ZipFile was created with the lazyEntries option set to true, entry and end events are only ever emitted in response to this method call.

The event that is emitted in response to this method will not be emitted until after this method has returned, so it is safe to call this method before attaching event listeners.

After calling this method, calling this method again before the response event has been emitted will cause undefined behavior. Calling this method after the end event has been emitted will cause undefined behavior. Calling this method after calling close() will cause undefined behavior.

openReadStream(entry, [options], callback)

entry must be an Entry object from this ZipFile. callback gets (err, readStream), where readStream is a Readable Stream that provides the file data for this entry. If this zipfile is already closed (see close()), the callback will receive an err.

options may be omitted or null, and has the following defaults:

{
  decompress: entry.isCompressed() ? true : null,
  decrypt: null,
  start: 0,                  // actually the default is null, see below
  end: entry.compressedSize, // actually the default is null, see below
}

If the entry is compressed (with a supported compression method), and the decompress option is true (or omitted), the read stream provides the decompressed data. Omitting the decompress option is what most clients should do.

The decompress option must be null (or omitted) when the entry is not compressed (see isCompressed()), and either true (or omitted) or false when the entry is compressed. Specifying decompress: false for a compressed entry causes the read stream to provide the raw compressed file data without going through a zlib inflate transform.

If the entry is encrypted (see isEncrypted()), clients may want to avoid calling openReadStream() on the entry entirely. Alternatively, clients may call openReadStream() for encrypted entries and specify decrypt: false. If the entry is also compressed, clients must also specify decompress: false. Specifying decrypt: false for an encrypted entry causes the read stream to provide the raw, still-encrypted file data. (This data includes the 12-byte header described in the spec.)

The decrypt option must be null (or omitted) for non-encrypted entries, and false for encrypted entries. Omitting the decrypt option (or specifying it as null) for an encrypted entry will result in the callback receiving an err. This default behavior is so that clients not accounting for encrypted files aren't surprised by bogus file data.

The start (inclusive) and end (exclusive) options are byte offsets into this entry's file data, and can be used to obtain part of an entry's file data rather than the whole thing. If either option is specified and non-null, the raw-data options described above must also be used. Specifying {start: 0, end: entry.compressedSize} yields the complete file and is effectively the default for these options, but note that unlike omitting the options entirely, specifying any non-null start or end still triggers the requirement to request the file's raw data.
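
That requirement can be packaged in a small helper. The following is a sketch, and openRawRange is a hypothetical name, not part of yauzl's API: when a byte range is requested, it always passes decompress: false for compressed entries and decrypt: false for encrypted ones, so the range applies to the raw stored bytes.

```javascript
// Hypothetical helper: open a read stream for bytes [start, end) of an
// entry's raw (stored) data, supplying the options yauzl requires.
function openRawRange(zipfile, entry, start, end, callback) {
  var options = {start: start, end: end};
  // start/end address the raw file data, so the raw-data options are mandatory:
  if (entry.isCompressed()) options.decompress = false;
  if (entry.isEncrypted()) options.decrypt = false;
  zipfile.openReadStream(entry, options, callback);
}
```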

It's possible for the readStream provided to the callback to emit errors for several reasons. For example, if zlib cannot decompress the data, the zlib error will be emitted from the readStream. Two more error cases (when validateEntrySizes is true) are if the decompressed data has too many or too few actual bytes compared to the reported byte count from the entry's uncompressedSize field. yauzl notices this false information and emits an error from the readStream after some number of bytes have already been piped through the stream.

This check allows clients to trust the uncompressedSize field in Entry objects. Guarding against zip bomb attacks can be accomplished by doing some heuristic checks on the size metadata and then watching out for the above errors. Such heuristics are outside the scope of this library, but enforcing the uncompressedSize is implemented here as a security feature.
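
One such heuristic, sketched below (not part of yauzl): sum the reported uncompressedSize fields before extracting anything and refuse archives over a budget; yauzl's size enforcement then guarantees the reported sizes are honest during extraction.

```javascript
// Hypothetical pre-extraction check: reject archives whose entries claim
// more total uncompressed data than we are willing to write out.
function exceedsSizeBudget(entries, maxTotalBytes) {
  var total = 0;
  for (var i = 0; i < entries.length; i++) {
    total += entries[i].uncompressedSize;
    if (total > maxTotalBytes) return true;
  }
  return false;
}
```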

It is possible to destroy the readStream before it has piped all of its data. To do this, call readStream.destroy(). You must unpipe() the readStream from any destination before calling readStream.destroy(). If this zipfile was created using fromRandomAccessReader(), the RandomAccessReader implementation must provide readable streams that implement a .destroy() method (see randomAccessReader._readStreamForRange()) in order for calls to readStream.destroy() to work in this context.

close()

Causes all future calls to openReadStream() to fail, and closes the fd, if any, after all streams created by openReadStream() have emitted their end events.

If the autoClose option is set to true (see open()), this function is called automatically in response to this object's end event.

If the lazyEntries option is set to false (see open()) and this object's end event has not been emitted yet, this function causes undefined behavior. If the lazyEntries option is set to true, you can call this function instead of calling readEntry() to abort reading the entries of a zipfile.

It is safe to call this function multiple times; after the first call, successive calls have no effect. This includes situations where the autoClose option effectively calls this function for you.

If close() is never called, then the zipfile is "kept open". For zipfiles created with fromFd(), this will leave the fd open, which may be desirable. For zipfiles created with open(), this will leave the underlying fd open, thereby "leaking" it, which is probably undesirable. For zipfiles created with fromRandomAccessReader(), the reader's close() method will never be called. For zipfiles created with fromBuffer(), the close() function has no effect whether called or not.

Regardless of how this ZipFile was created, there are no resources other than those listed above that require cleanup from this function. This means it may be desirable to never call close() in some usecases.

isOpen

Boolean. true until close() is called; then it's false.

entryCount

Number. Total number of central directory records.

comment

String. Always decoded with CP437 per the spec.

If decodeStrings is false (see open()), this field is the undecoded Buffer instead of a decoded String.

Class: Entry

Objects of this class represent Central Directory Records. Refer to the zipfile specification for more details about these fields.

These fields are of type Number:

  • versionMadeBy
  • versionNeededToExtract
  • generalPurposeBitFlag
  • compressionMethod
  • lastModFileTime (MS-DOS format, see getLastModDateTime)
  • lastModFileDate (MS-DOS format, see getLastModDateTime)
  • crc32
  • compressedSize
  • uncompressedSize
  • fileNameLength (bytes)
  • extraFieldLength (bytes)
  • fileCommentLength (bytes)
  • internalFileAttributes
  • externalFileAttributes
  • relativeOffsetOfLocalHeader

fileName

String. Following the spec, the bytes for the file name are decoded with UTF-8 if generalPurposeBitFlag & 0x800, otherwise with CP437. Alternatively, this field may be populated from the Info-ZIP Unicode Path Extra Field (see extraFields).

This field is automatically validated by validateFileName() before yauzl emits an "entry" event. If this field would contain unsafe characters, yauzl emits an error instead of an entry.

If decodeStrings is false (see open()), this field is the undecoded Buffer instead of a decoded String. Therefore, generalPurposeBitFlag and any Info-ZIP Unicode Path Extra Field are ignored. Furthermore, no automatic file name validation is performed for this file name.

extraFields

Array with each entry in the form {id: id, data: data}, where id is a Number and data is a Buffer.

This library looks for and reads the ZIP64 Extended Information Extra Field (0x0001) in order to support ZIP64 format zip files.

This library also looks for and reads the Info-ZIP Unicode Path Extra Field (0x7075) in order to support some zipfiles that use it instead of General Purpose Bit 11 to convey UTF-8 file names. When the field is identified and verified to be reliable (see the zipfile spec), the file name in this field is stored in the fileName property, and the file name in the central directory record for this entry is ignored. Note that when decodeStrings is false, all Info-ZIP Unicode Path Extra Fields are ignored.

None of the other fields are considered significant by this library. Fields that this library reads are left unaltered in the extraFields array.

fileComment

String decoded with the charset indicated by generalPurposeBitFlag & 0x800 as with the fileName. (The Info-ZIP Unicode Path Extra Field has no effect on the charset used for this field.)

If decodeStrings is false (see open()), this field is the undecoded Buffer instead of a decoded String.

Prior to yauzl version 2.7.0, this field was erroneously documented as comment instead of fileComment. For compatibility with any code that uses the field name comment, yauzl creates an alias field named comment which is identical to fileComment.

getLastModDate()

Effectively implemented as:

return dosDateTimeToDate(this.lastModFileDate, this.lastModFileTime);

isEncrypted()

Returns whether this entry is encrypted with "Traditional Encryption". Effectively implemented as:

return (this.generalPurposeBitFlag & 0x1) !== 0;

See openReadStream() for the implications of this value.

Note that "Strong Encryption" is not supported, and will result in an "error" event emitted from the ZipFile.

isCompressed()

Effectively implemented as:

return this.compressionMethod === 8;

See openReadStream() for the implications of this value.

Class: RandomAccessReader

This class is meant to be subclassed by clients and instantiated for the fromRandomAccessReader() function.

An example implementation can be found in test/test.js.

randomAccessReader._readStreamForRange(start, end)

Subclasses must implement this method.

start and end are Numbers and indicate byte offsets from the start of the file. end is exclusive, so _readStreamForRange(0x1000, 0x2000) would indicate to read 0x1000 bytes. end - start will always be at least 1.

This method should return a readable stream which will be pipe()ed into another stream. It is expected that the readable stream will provide data in several chunks if necessary. If the readable stream provides too many or too few bytes, an error will be emitted. (Note that validateEntrySizes has no effect on this check, because this is a low-level API that should behave correctly regardless of the contents of the file.) Any errors emitted on the readable stream will be handled and re-emitted on the client-visible stream (returned from zipfile.openReadStream()) or provided as the err argument to the appropriate callback (for example, for fromRandomAccessReader()).

The returned stream must implement a method .destroy() if you call readStream.destroy() on streams you get from openReadStream(). If you never call readStream.destroy(), then streams returned from this method do not need to implement a method .destroy(). .destroy() should abort any streaming that is in progress and clean up any associated resources. .destroy() will only be called after the stream has been unpipe()d from its destination.

Note that the stream returned from this method might not be the same object that is provided by openReadStream(). The stream returned from this method might be pipe()d through one or more filter streams (for example, a zlib inflate stream).

randomAccessReader.read(buffer, offset, length, position, callback)

Subclasses may implement this method. The default implementation uses createReadStream() to fill the buffer.

This method should behave like fs.read().

randomAccessReader.close(callback)

Subclasses may implement this method. The default implementation is effectively setImmediate(callback);.

callback takes parameters (err).

This method is called once all streams returned from _readStreamForRange() have ended, and no more _readStreamForRange() or read() requests will be issued to this object.

How to Avoid Crashing

When a malformed zipfile is encountered, the default behavior is to crash (throw an exception). If you want to handle errors more gracefully than this, be sure to do the following:

  • Provide callback parameters where they are allowed, and check the err parameter.
  • Attach a listener for the error event on any ZipFile object you get from open(), fromFd(), fromBuffer(), or fromRandomAccessReader().
  • Attach a listener for the error event on any stream you get from openReadStream().

Minor version updates to yauzl will not add any additional requirements to this list.

Limitations

No Streaming Unzip API

Due to the design of the .zip file format, it's impossible to interpret a .zip file from start to finish (such as from a readable stream) without sacrificing correctness. The Central Directory, which is the authority on the contents of the .zip file, is at the end of a .zip file, not the beginning. A streaming API would need to either buffer the entire .zip file to get to the Central Directory before interpreting anything (defeating the purpose of a streaming interface), or rely on the Local File Headers which are interspersed through the .zip file. However, the Local File Headers are explicitly denounced in the spec as being unreliable copies of the Central Directory, so trusting them would be a violation of the spec.

Any library that offers a streaming unzip API must make one of the above two compromises, which makes the library either dishonest or nonconformant (usually the latter). This library insists on correctness and adherence to the spec, and so does not offer a streaming API.

Here is a way to create a spec-conformant .zip file using the zip command line program (Info-ZIP) available in most unix-like environments, that is (nearly) impossible to parse correctly with a streaming parser:

$ echo -ne '\x50\x4b\x07\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' > file.txt
$ zip -q0 - file.txt | cat > out.zip

This .zip file contains a single file entry that uses General Purpose Bit 3, which means the Local File Header doesn't know the size of the file. Any streaming parser that encounters this situation will either immediately fail, or attempt to search for the Data Descriptor after the file's contents. The file's contents are a sequence of 16 bytes crafted to exactly mimic a valid Data Descriptor for an empty file, which will fool any parser that gets this far into thinking that the file is empty rather than containing 16 bytes. What follows the file's real contents is the file's real Data Descriptor, which will likely cause some kind of signature mismatch error for a streaming parser (if one hasn't occurred already).

By using General Purpose Bit 3 (and compression method 0), it's possible to create arbitrarily ambiguous .zip files that distract parsers with file contents that contain apparently valid .zip file metadata.

Limited ZIP64 Support

For ZIP64, only zip files smaller than 8PiB are supported, not the full 16EiB range that a 64-bit integer should be able to index. This is due to the JavaScript Number type being an IEEE 754 double precision float.

The Node.js fs module probably has this same limitation.

ZIP64 Extensible Data Sector Is Ignored

The spec does not allow zip file creators to put arbitrary data here, but rather reserves its use for PKWARE and mentions something about Z390. This doesn't seem useful to expose in this library, so it is ignored.

No Multi-Disk Archive Support

This library does not support multi-disk zip files. The multi-disk fields in the zipfile spec were intended for a zip file to span multiple floppy disks, which probably never happens now. If the "number of this disk" field in the End of Central Directory Record is not 0, the open(), fromFd(), fromBuffer(), or fromRandomAccessReader() callback will receive an err. By extension the following zip file fields are ignored by this library and not provided to clients:

  • Disk where central directory starts
  • Number of central directory records on this disk
  • Disk number where file starts

Limited Encryption Handling

You can detect when a file entry is encrypted with "Traditional Encryption" via isEncrypted(), but yauzl will not help you decrypt it. See openReadStream().

If a zip file contains file entries encrypted with "Strong Encryption", yauzl emits an error.

If the central directory is encrypted or compressed, yauzl emits an error.

Local File Headers Are Ignored

Many unzip libraries mistakenly read the Local File Header data in zip files. This data is officially defined to be redundant with the Central Directory information, and is not to be trusted. Aside from checking the signature, yauzl ignores the content of the Local File Header.

No CRC-32 Checking

This library provides the crc32 field of Entry objects read from the Central Directory. However, this field is not used for anything in this library.

versionNeededToExtract Is Ignored

The field versionNeededToExtract is ignored, because this library doesn't support the complete zip file spec at any version.

No Support For Obscure Compression Methods

Regarding the compressionMethod field of Entry objects, only method 0 (stored with no compression) and method 8 (deflated) are supported. Any of the other 15 official methods will cause the openReadStream() callback to receive an err.

Data Descriptors Are Ignored

There may or may not be Data Descriptor sections in a zip file. This library provides no support for finding or interpreting them.

Archive Extra Data Record Is Ignored

There may or may not be an Archive Extra Data Record section in a zip file. This library provides no support for finding or interpreting it.

No Language Encoding Flag Support

Zip files officially support charset encodings other than CP437 and UTF-8, but the zip file spec does not specify how it works. This library makes no attempt to interpret the Language Encoding Flag.

Change History

  • 2.10.0
    • Added support for non-conformant zipfiles created by Microsoft, and added option strictFileNames to disable the workaround. issue #66, issue #88
  • 2.9.2
    • Removed tools/hexdump-zip.js and tools/hex2bin.js. Those tools are now located here: thejoshwolfe/hexdump-zip and thejoshwolfe/hex2bin
    • Worked around performance problem with zlib when using fromBuffer() and readStream.destroy() for large compressed files. issue #87
  • 2.9.1
    • Removed console.log() accidentally introduced in 2.9.0. issue #64
  • 2.9.0
    • Throw an exception if readEntry() is called without lazyEntries:true. Previously this caused undefined behavior. issue #63
  • 2.8.0
    • Added option validateEntrySizes. issue #53
    • Added examples/promises.js
    • Added ability to read raw file data via decompress and decrypt options. issue #11, issue #38, pull #39
    • Added start and end options to openReadStream(). issue #38
  • 2.7.0
    • Added option decodeStrings. issue #42
    • Fixed documentation for entry.fileComment and added compatibility alias. issue #47
  • 2.6.0
    • Support Info-ZIP Unicode Path Extra Field, used by WinRAR for Chinese file names. issue #33
  • 2.5.0
    • Ignore malformed Extra Field that is common in Android .apk files. issue #31
  • 2.4.3
    • Fix crash when parsing malformed Extra Field buffers. issue #31
  • 2.4.2
    • Remove .npmignore and .travis.yml from npm package.
  • 2.4.1
    • Fix error handling.
  • 2.4.0
  • 2.3.1
    • Documentation updates.
  • 2.3.0
    • Check that uncompressedSize is correct, or else emit an error. issue #13
  • 2.2.1
    • Update dependencies.
  • 2.2.0
    • Update dependencies.
  • 2.1.0
    • Remove dependency on iconv.
  • 2.0.3
    • Fix crash when trying to read a 0-byte file.
  • 2.0.2
    • Fix event behavior after errors.
  • 2.0.1
    • Fix bug with using iconv.
  • 2.0.0
    • Initial release.
Comments
  • Support a promise paradigm

    Support a promise paradigm

    Before

        yauzl.open('file.zip', (err, zipfile) => {
          if (err) throw err
          zipfile.on('entry', (entry) => {
            zipfile.openReadStream(entry, (err, readStream) => {
              if (err) throw err
              readStream.pipe(somewhere)
            })
          })
        })
    

    After

        const zipfile = await yauzl.open('file.zip')
        zipfile.on('entry', async (entry) => {
          const readStream = await zipfile.openReadStream(entry)
          readStream.pipe(somewhere)
        })
    

    If yauzl.open and zipfile.openReadStream return Promises if no callbacks are provided, it would simplify code for people who choose to use async/await. We could remove all if (err) throw err because unhandled errors would be thrown automatically. And we don't have as deep nesting of the code, which makes things hard to read unless functions are extracted which may otherwise not have needed to be extracted.

    Of course, if callbacks are provided then no Promise is returned and code behaves as it currently does, making this change backwards-compatible.

    Node v7, which will be released soon, will make native async/await available behind flags. And Node v8 will make them available without any flags. This feature would give the option to write simpler code for anyone using those versions of Node, or any version of Node with Babel (as I'm currently doing it).

    enhancement 
    opened by rightaway 13
  • usage demo comes error:

    usage demo comes error: "TypeError: dest.on is not a function"

    I run the usage demo:

    const yauzl = require("yauzl");
    
    yauzl.open("./OMStarPS_npi.capacity_11.5.0.zip", {lazyEntries: true}, function(err, zipfile) {
      if (err) throw err;
      zipfile.readEntry();
      zipfile.on("entry", function(entry) {
        if (/\/$/.test(entry.fileName)) {
          // Directory file names end with '/'.
          // Note that entries for directories themselves are optional.
          // An entry's fileName implicitly requires its parent directories to exist.
          zipfile.readEntry();
        } else {
          // file entry
          zipfile.openReadStream(entry, function(err, readStream) {
            if (err) throw err;
            readStream.on("end", function() {
              zipfile.readEntry();
            });
            readStream.pipe('./');
          });
        }
      });
    });
    

    comes error:

    _stream_readable.js:501
      dest.on('unpipe', onunpipe);
           ^
    
    TypeError: dest.on is not a function
        at AssertByteCountStream.Readable.pipe (_stream_readable.js:501:8)
        at D:\node\www\test\test-unzip\yauzl.js:19:20
        at D:\node\www\test\test-unzip\node_modules\yauzl\index.js:575:7
        at D:\node\www\test\test-unzip\node_modules\yauzl\index.js:631:5
        at D:\node\www\test\test-unzip\node_modules\fd-slicer\index.js:32:7
        at FSReqWrap.wrapper [as oncomplete] (fs.js:682:17)
    

    I just wanna unzip a zip file.

    opened by bi-kai 12
  • non-fs API

    non-fs API

    hey I have a weird request. I wrote this https://github.com/maxogden/punzip for a somewhat common but annoying use case: given a large zip on a server, only extract a single file from it, as a stream, without downloading the whole thing.

    here's more detail on the use case: https://gist.github.com/maxogden/11a85ae12074fed0b9f6

    the cool thing is that it totally works! I can mount a 500MB zip, point yauzl at it, and my code translates yauzl's calls into HTTP range requests like this:

      mount-url requested +542ms 514105344-514170879 received 65536 bytes
      mount-url requested +173ms 514170880-514172204 received 1325 bytes
    

    those were yauzl getting the entry table at the end of the file (I think).

    unfortunately I had to use FUSE to make it compatible with yauzl. It would be nice, though, if I could give yauzl a function like `getBytes(offset, length)` and it would use that as the data source rather than a file descriptor or a path to a file.

    I'm open to any suggestions or ideas you might have for this use case!

    enhancement 
    opened by maxogden 11
  • Zip bomb prevention?

    Zip bomb prevention?

    Hi,

    How is one supposed to abort processing of zip entry / file while processing entries?

    Some background: I want to prevent a zip bomb from hogging CPU/memory resources, and would like to check the actual, cumulative uncompressed size while uncompressing an entry. For that, I implemented my own Writable stream which raises an error (through the callback) when it gets too much data. I then catch this error and currently call .close() on the readStream I got in yauzl's entry callback.

    However, this seems to trigger a bug in node's zlib implementation (I tried both 0.10.28 and 0.12.2) and aborts the execution:

    Assertion failed: (ctx->mode_ != NONE && "already finalized"), function Write, file ../src/node_zlib.cc, line 147.
    Abort trap: 6
    

    While I could theoretically patch my way around this, I naturally wouldn't want to fork both zlib.js and your library. So is there some other way I can cleanly abort the processing of an entry / the entire zip file, without excessive CPU or memory usage?

    Full sample code available at https://github.com/timotm/node-zip-bomb

    opened by timotm 11
  • Yauzl throws uncatchable error

    Yauzl throws uncatchable error

    yauzl throws this error that I seem to be unable to catch or prevent:

     events.js:183
           throw er; // Unhandled 'error' event
          ^
     Error: invalid characters in fileName: some name\, de something something 5.6.vcf
         at /app/node_modules/yauzl/index.js:417:70
         at /app/node_modules/yauzl/index.js:622:5
         at /app/node_modules/fd-slicer/index.js:32:7
         at FSReqWrap.wrapper [as oncomplete] (fs.js:658:17)
    

    Am I doing something wrong? This started happening a few weeks ago, on very few files. Sanitizing file names doesn't seem to solve it.

    opened by PanMan 10
  • Can't unzip .zip files created by OS X 10.12's native zipper

    Can't unzip .zip files created by OS X 10.12's native zipper

    Error

    On Mac OS X 10.12.6 the native zip utility (right-click "compress folder") creates a zip file that gives this error:

    extra field length exceeds extra field buffer size
    

    Not that I don't believe it's possible for Apple to make a mistake, but my guess is that the underlying zip utility is probably the same Unix / Darwin zip that has been in use for a very long time and that it's statistically more likely to be a problem with this library.

    For reference, here is the failing zip file (just a couple of test files for a website):

    Example Zip File

    I've attached fails-to-parse.zip for the test case.

    The contents are as follows:

    .
    ├── css
    │   ├── custom.css
    │   ├── plugins
    │   │   └── bootstrap.css.min
    │   └── styles.css
    ├── index-three.html
    ├── index-two.html
    ├── index.html
    └── js
        ├── app.js
        ├── config.js
        └── custom.js
    

    I have also zipped files that do unzip just fine. My guess is that there's some sort of off-by-one byte alignment issue.

    Test Case

    And a test case (based on the example):

    mkdir -p /tmp/yauzl-test
    pushd /tmp/yauzl-test
    npm install yauzl@latest
    touch test.js
    wget -c https://github.com/thejoshwolfe/yauzl/files/1420073/fails-to-parse.zip
    # => copy code below into test.js <= #
    node test.js
    

    test.js:

    'use strict';
    
    var yauzl = require('yauzl');
    
    yauzl.open('./fails-to-parse.zip', function (err, zipfile) {
      if (err) throw err;
    
      zipfile.readEntry();
      zipfile.on('entry', function (entry) {
        if (/\/$/.test(entry.fileName)) {
          zipfile.readEntry();
          return;
        }
    
        zipfile.openReadStream(entry, function(err, readStream) {
          if (err) throw err;
    
          readStream.on("end", function() {
            zipfile.readEntry();
          });
    
          readStream.on("data", function (data) {
            console.log("data of length", data.length);
          });
        });
      });
      zipfile.on('error', function (err) {
        throw err;
      });
      zipfile.on('end', function () {
        console.log("all entries read and processed");
      });
    });
    
    opened by coolaj86 10
  • test suite failure

    test suite failure

    npm install && npm test fails for me:

    test/success/linux-info-zip.zip(buffer): pass
    
    /Users/maxogden/src/js/yauzl/test/test.js:55
                  throw new Error(messagePrefix + "not supposed to exist");
                        ^
    Error: test/success/unicode.zip(fd): Turmion Kätilöt/: not supposed to exist
        at /Users/maxogden/src/js/yauzl/test/test.js:55:21
        at pendGo (/Users/maxogden/src/js/yauzl/node_modules/pend/index.js:30:3)
        at Pend.go (/Users/maxogden/src/js/yauzl/node_modules/pend/index.js:13:5)
        at ZipFile.<anonymous> (/Users/maxogden/src/js/yauzl/test/test.js:51:27)
        at ZipFile.EventEmitter.emit (events.js:95:17)
        at /Users/maxogden/src/js/yauzl/index.js:237:12
        at /Users/maxogden/src/js/yauzl/index.js:329:5
        at /Users/maxogden/src/js/yauzl/node_modules/fd-slicer/index.js:28:7
        at Object.wrapper [as oncomplete] (fs.js:454:17)
    npm ERR! Test failed.  See above for more details.
    npm ERR! not ok code 0
    

    This happens for a bunch of the zip files in the success/ folder. If I delete them, then npm test finally passes.

    Could you set up Travis CI on this repo? npm install -g travisjs && travisjs init

    opened by maxogden 10
  • ENOENT

    ENOENT

    events.js:85
          throw er; // Unhandled 'error' event
          ^
    Error: ENOENT, open 'TestZip/001.png'
        at Error (native)
    

    I get this when running the example script on any file. I don't know if it's really an error, but I'm using the example script exactly as written, changing only the path to my .zip file.

    opened by foxpjustin 9
  • Invalid comment length

    Invalid comment length

    I'm having trouble unzipping a zip file uploaded by a user. The zip opens fine in any other unzip software I've tried. The error I'm getting is:

    Error: invalid comment length. expected: 12298. found: 0
        at /usr/src/app/node_modules/yauzl/index.js:125:25
        at /usr/src/app/node_modules/yauzl/index.js:539:5
        at /usr/src/app/node_modules/fd-slicer/index.js:32:7
        at FSReqWrap.wrapper [as oncomplete] (fs.js:681:17)
    

    If I comment out line 125 of index.js where the error is thrown, the file does seem to unzip properly. Any thoughts?
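The check that fires here can be sketched in a few lines (a simplification for illustration, not yauzl's actual code): the end-of-central-directory (EOCD) record stores a comment length at byte offset 20, and it must match the number of bytes that actually follow the 22-byte record. Trailing garbage after the zip, or a wrong length field, breaks that equality:

```javascript
// Throws when the EOCD's stored comment length disagrees with the number
// of bytes that actually trail the 22-byte record.
function checkCommentLength(buffer, eocdrIndex) {
  var reportedLength = buffer.readUInt16LE(eocdrIndex + 20);
  var expectedLength = buffer.length - (eocdrIndex + 22);
  if (reportedLength !== expectedLength) {
    throw new Error("invalid comment length. expected: " + expectedLength +
                    ". found: " + reportedLength);
  }
}

// A minimal EOCD record with a zero-length comment passes:
var eocdr = Buffer.alloc(22);
eocdr.writeUInt32LE(0x06054b50, 0); // "PK\x05\x06" signature
checkCommentLength(eocdr, 0);       // ok

// Trailing garbage after the record fails:
try {
  checkCommentLength(Buffer.concat([eocdr, Buffer.alloc(3)]), 0);
} catch (err) {
  console.log(err.message); // invalid comment length. expected: 3. found: 0
}
```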

    opened by dtjohnson 8
  • option to emit raw string buffers instead of decoded strings

    option to emit raw string buffers instead of decoded strings

    I'm using version 2.6.0 FWIW, node 6.

    ± node debug bin.js foo.zip
    < Debugger listening on [::]:5858
    connecting to 127.0.0.1:5858 ... ok
    break in bin.js:2
      1
    > 2 'use strict'
      3 const extractExec = require('./')
      4 const fs = require('fs')
    c
    break in index.js:46
     44     // TODO: what if we get multiple plists?
     45     const plist = plists[0]
    >46     debugger
     47     getExecStream(fd, plist.CFBundleExecutable, (err, entry, exec) => {
     48       debugger
    c
    break in index.js:19
     17     zip.on('entry', function onentry (entry) {
     18       if ((/XXXThing.*app\/XXXThing-.*/i).test(entry.fileName)) {
    >19         debugger;
     20       }
     21       if (!isOurExec(entry, execname)) { return }
    repl
    Press Ctrl + C to leave debug repl
    > entry.fileName
    'Payload/XXXThing-╬▓.app/XXXThing-╬▓'
    > execname
    'XXXThing-β'
    
    

    As you can see, execname is right, but entry.fileName is not correctly decoded as UTF-8 AFAICT.

    enhancement 
    opened by dweinstein 8
  • RangeError in buffer.js due to error at index.js:283

    RangeError in buffer.js due to error at index.js:283

    I was getting the following stack trace while unzipping a file:

    buffer.js:620
      throw new RangeError('index out of range');
      ^
    
    RangeError: index out of range
        at checkOffset (buffer.js:620:11)
        at Buffer.readUInt16LE (buffer.js:666:5)
        at /usr/local/lib/node_modules/yauzl/index.js:286:41
        at /usr/local/lib/node_modules/yauzl/index.js:474:5
        at /usr/local/lib/node_modules/yauzl/node_modules/fd-slicer/index.js:32:7
        at FSReqWrap.wrapper [as oncomplete]
    

    I tracked it down to line 283:

    while (i < extraFieldBuffer.length) {
    

    which should instead be at least:

    while (i + 4 < extraFieldBuffer.length) {
    

    to avoid attempting to read past the end of the buffer at lines 284 and 285.

    However, I think you should add another check before line 289 to ensure the extraFieldBuffer.copy does not fail due to an invalid size field in the zip file. That is, if it is invalid, you should throw a descriptive exception rather than letting it be handled by another RangeCheck error (which doesn't explain the problem very well to the casual user of a damaged zip file).

    (Note that there appears to be no easy way to trap this exception since it occurs within FSReqWrap wrapper).
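A defensive version of the loop the report describes might look like this (illustrative names, not yauzl's internals): check that both the 4-byte record header and the declared data fit inside the buffer before reading either.

```javascript
// Parse a zip extra field as [id:u16le][size:u16le][data:size] records,
// bounds-checking both the header and the data before touching them.
function parseExtraFields(extraFieldBuffer) {
  var fields = [];
  var i = 0;
  while (i + 4 <= extraFieldBuffer.length) {
    var headerId = extraFieldBuffer.readUInt16LE(i);
    var dataSize = extraFieldBuffer.readUInt16LE(i + 2);
    var dataEnd = i + 4 + dataSize;
    if (dataEnd > extraFieldBuffer.length) {
      throw new Error("extra field length exceeds extra field buffer size");
    }
    fields.push({ id: headerId, data: extraFieldBuffer.slice(i + 4, dataEnd) });
    i = dataEnd;
  }
  return fields;
}

// One well-formed record: id 0x0001 with 2 bytes of data.
var good = Buffer.from([0x01, 0x00, 0x02, 0x00, 0xaa, 0xbb]);
console.log(parseExtraFields(good).length); // 1
```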

    bug 
    opened by downwa 8
  • What file types are supported?

    What file types are supported?

    Hello there, I'm looking for a list of the supported file types that this library can extract from. Obviously zip files are supported, but how about the following:

    • 7z
    • ar
    • bz2
    • gz
    • lz
    • lzh
    • rar
    • tar
    • zst

    I am looking to combine this with the file-type library and those are all the archive types that can be detected by the library, so I just want to ensure that my code covers all of the overlap between your library and their file types.

    Thanks in advance!

    opened by ajmeese7 0
  • Avoid overriding endpointStream.destroy and instead override _destroy so that errors correctly bubble up

    Avoid overriding endpointStream.destroy and instead override _destroy so that errors correctly bubble up

    Avoided overriding endpointStream.destroy and instead override _destroy as recommended in the stream doc.

    https://nodejs.org/api/stream.html#writabledestroyerror

    Implementors should not override this method, but instead implement [writable._destroy()](https://nodejs.org/api/stream.html#writable_destroyerr-callback).
    

    With this change, I was able to capture entry size validation errors, which I could not without the change.

    opened by hideaki 0
  • Unzipping nested folders from archive

    Unzipping nested folders from archive

    I made a stackoverflow post here to describe the situation, but thought it would be best to share here to get an answer.

    TL;DR: I need to unzip archives while keeping the subfolders. Is that possible with yauzl?

    opened by konradgodel 0
  • use bigint

    use bigint

    I haven't looked into any issues or the code; I just discovered this package.

    But I read this:

    Limited ZIP64 Support: For ZIP64, only zip files smaller than 8PiB are supported, not the full 16EiB range that a 64-bit integer should be able to index. This is due to the JavaScript Number type being an IEEE 754 double precision float.

    The Node.js fs module probably has this same limitation.

    You should be using BigInts to handle large numbers (and please no bn.js or other polyfill, just use native BigInts instead).
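The precision cliff is easy to demonstrate: Number is exact only up to 2^53, while native BigInt (Node 10.4+) represents any 64-bit offset exactly:

```javascript
// At 2^60 the gap between adjacent doubles is 128, so byte offsets
// past 8PiB can no longer be represented exactly as Numbers.
var offset = 2n ** 60n + 1n;
console.log(Number(offset) === Number(offset) + 1); // true: precision lost
console.log(offset + 1n - 1n === offset);           // true: BigInt stays exact
```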

    opened by jimmywarting 0
  • how to handle different values for extraFieldLength in local file header vs central directory file header

    how to handle different values for extraFieldLength in local file header vs central directory file header

    I'm trying to extract the data offset byte ranges for each entry in a zipfile. The files I'm working with seem to have different values for the extraFieldLength in the central directory file header vs the local file header. I've noticed that in the readme, you state that the local file headers are ignored except for checking the signature, but that doesn't seem exactly right. When creating a readStream for an entry, this library (correctly) uses the values of extraFieldLength and fileNameLength from the local file header to calculate the localFileHeaderEnd.

    Do you know why the value for extraFieldLength would differ between these two locations, and why the local file header would be the correct value? What was your reason for using the value from the local file header? I also need to use the correct value to extract the true data ranges for each entry, but it seems I can't rely on the extraFieldLength that is emitted for the entries generated by readEntry(). I'm considering forking your excellent lib so I can add an extra function to get at the correct offsets, but if you've got a better idea I'd love to hear it!

    Thank you!
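For reference, the data offset arithmetic described above can be sketched as follows, with field offsets taken from the zip spec (APPNOTE section 4.3.7); this is an illustration, not yauzl's code:

```javascript
// The entry's compressed data starts after the fixed 30-byte local file
// header plus the name and extra field lengths stored IN THAT header
// (which may differ from the central directory's copies).
function entryDataOffset(localHeader, localHeaderOffset) {
  var fileNameLength = localHeader.readUInt16LE(26);
  var extraFieldLength = localHeader.readUInt16LE(28);
  return localHeaderOffset + 30 + fileNameLength + extraFieldLength;
}

// Minimal local header: 8-char name, 11 bytes of extra field data.
var header = Buffer.alloc(30);
header.writeUInt32LE(0x04034b50, 0); // "PK\x03\x04" signature
header.writeUInt16LE(8, 26);
header.writeUInt16LE(11, 28);
console.log(entryDataOffset(header, 100)); // 149
```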

    opened by bennettrogers 0
Owner
Josh Wolfe