start nodejs bot

2021-02-11 02:38:15 +01:00
parent 968105f35d
commit 6d84704b1f
1814 changed files with 303741 additions and 0 deletions

node_modules/node-pre-gyp/CHANGELOG.md generated vendored Normal file

@@ -0,0 +1,457 @@
# node-pre-gyp changelog
## 0.17.0
- Got travis + appveyor green again
- Added support for more node versions
## 0.16.0
- Added Node 15 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/520)
## 0.15.0
- Bump dependency on `mkdirp` from `^0.5.1` to `^0.5.3` (https://github.com/mapbox/node-pre-gyp/pull/492)
- Bump dependency on `needle` from `^2.2.1` to `^2.5.0` (https://github.com/mapbox/node-pre-gyp/pull/502)
- Added Node 14 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/501)
## 0.14.0
- Defer modules requires in napi.js (https://github.com/mapbox/node-pre-gyp/pull/434)
- Bump dependency on `tar` from `^4` to `^4.4.2` (https://github.com/mapbox/node-pre-gyp/pull/454)
- Support extracting compiled binary from local offline mirror (https://github.com/mapbox/node-pre-gyp/pull/459)
- Added Node 13 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/483)
## 0.13.0
- Added Node 12 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/449)
## 0.12.0
- Fixed double-build problem with node v10 (https://github.com/mapbox/node-pre-gyp/pull/428)
- Added node 11 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/422)
## 0.11.0
- Fixed double-install problem with node v10
- Significant N-API improvements (https://github.com/mapbox/node-pre-gyp/pull/405)
## 0.10.3
- Now will use `request` over `needle` if request is installed. By default `needle` is used for `https`. This should unbreak proxy support that regressed in v0.9.0
## 0.10.2
- Fixed rc/deep-extent security vulnerability
- Fixed broken reinstall script due to incorrectly named get_best_napi_version
## 0.10.1
- Fix needle error event (@medns)
## 0.10.0
- Allow for a single-level module path when packing @allenluce (https://github.com/mapbox/node-pre-gyp/pull/371)
- Log warnings instead of errors when falling back @xzyfer (https://github.com/mapbox/node-pre-gyp/pull/366)
- Add Node.js v10 support to tests (https://github.com/mapbox/node-pre-gyp/pull/372)
- Remove retire.js from CI (https://github.com/mapbox/node-pre-gyp/pull/372)
- Remove support for Node.js v4 due to [EOL on April 30th, 2018](https://github.com/nodejs/Release/blob/7dd52354049cae99eed0e9fe01345b0722a86fde/schedule.json#L14)
- Update appveyor tests to install default NPM version instead of NPM v2.x for all Windows builds (https://github.com/mapbox/node-pre-gyp/pull/375)
## 0.9.1
- Fixed regression (in v0.9.0) with support for http redirects @allenluce (https://github.com/mapbox/node-pre-gyp/pull/361)
## 0.9.0
- Switched from using `request` to `needle` to reduce size of module deps (https://github.com/mapbox/node-pre-gyp/pull/350)
## 0.8.0
- N-API support (@inspiredware)
## 0.7.1
- Upgraded to tar v4.x
## 0.7.0
- Updated request and hawk (#347)
- Dropped node v0.10.x support
## 0.6.40
- Improved error reporting if an install fails
## 0.6.39
- Support for node v9
- Support for versioning on `{libc}` to allow binaries to work on non-glibc linux systems like alpine linux
## 0.6.38
- Maintaining compatibility (for v0.6.x series) with node v0.10.x
## 0.6.37
- Solved one part of #276: we now deduce the node ABI from the major version for node >= 2 even when it is not stored in the abi_crosswalk.json
- Fixed docs to avoid mentioning the deprecated and dangerous `prepublish` in package.json (#291)
- Add new node versions to crosswalk
- Ported tests to use tape instead of mocha
- Got appveyor tests passing by downgrading npm and node-gyp
## 0.6.36
- Removed the running of `testbinary` during install. Because this was regressed for so long, it is too dangerous to re-enable by default. Developers needing validation can call `node-pre-gyp testbinary` directly.
- Fixed regression in v0.6.35 for electron installs (now skipping binary validation which is not yet supported for electron)
## 0.6.35
- No longer recommending `npm ls` in `prepublish` (#291)
- Fixed testbinary command (#283) @szdavid92
## 0.6.34
- Added new node versions to crosswalk, including v8
- Upgraded deps to latest versions, started using `^` instead of `~` for all deps.
## 0.6.33
- Improved support for yarn
## 0.6.32
- Honor npm configuration for CA bundles (@heikkipora)
- Add node-pre-gyp and npm versions to user agent (@addaleax)
- Updated various deps
- Add known node version for v7.x
## 0.6.31
- Updated various deps
## 0.6.30
- Update to npmlog@4.x and semver@5.3.x
- Add known node version for v6.5.0
## 0.6.29
- Add known node versions for v0.10.45, v0.12.14, v4.4.4, v5.11.1, and v6.1.0
## 0.6.28
- Now more verbose when remote binaries are not available. This is needed since npm is increasingly more quiet by default
and users need to know why builds are falling back to source compiles that might then error out.
## 0.6.27
- Add known node version for node v6
- Stopped bundling dependencies
- Documented method for module authors to avoid bundling node-pre-gyp
- See https://github.com/mapbox/node-pre-gyp/tree/master#configuring for details
## 0.6.26
- Skip validation for nw runtime (https://github.com/mapbox/node-pre-gyp/pull/181) via @fleg
## 0.6.25
- Improved support for auto-detection of electron runtime in `node-pre-gyp.find()`
- Pull request from @enlight - https://github.com/mapbox/node-pre-gyp/pull/187
- Add known node version for 4.4.1 and 5.9.1
## 0.6.24
- Add known node version for 5.8.0, 5.9.0, and 4.4.0.
## 0.6.23
- Add known node version for 0.10.43, 0.12.11, 4.3.2, and 5.7.1.
## 0.6.22
- Add known node version for 4.3.1, and 5.7.0.
## 0.6.21
- Add known node version for 0.10.42, 0.12.10, 4.3.0, and 5.6.0.
## 0.6.20
- Add known node version for 4.2.5, 4.2.6, 5.4.0, 5.4.1,and 5.5.0.
## 0.6.19
- Add known node version for 4.2.4
## 0.6.18
- Add new known node versions for 0.10.x, 0.12.x, 4.x, and 5.x
## 0.6.17
- Re-tagged to fix packaging problem of `Error: Cannot find module 'isarray'`
## 0.6.16
- Added known version in crosswalk for 5.1.0.
## 0.6.15
- Upgraded tar-pack (https://github.com/mapbox/node-pre-gyp/issues/182)
- Support custom binary hosting mirror (https://github.com/mapbox/node-pre-gyp/pull/170)
- Added known version in crosswalk for 4.2.2.
## 0.6.14
- Added node 5.x version
## 0.6.13
- Added more known node 4.x versions
## 0.6.12
- Added support for [Electron](http://electron.atom.io/). Just pass the `--runtime=electron` flag when building/installing. Thanks @zcbenz
## 0.6.11
- Added known node and io.js versions including more 3.x and 4.x versions
## 0.6.10
- Added known node and io.js versions including 3.x and 4.x versions
- Upgraded `tar` dep
## 0.6.9
- Upgraded `rc` dep
- Updated known io.js version: v2.4.0
## 0.6.8
- Upgraded `semver` and `rimraf` deps
- Updated known node and io.js versions
## 0.6.7
- Fixed `node_abi` versions for io.js 1.1.x -> 1.8.x (should be 43, but was stored as 42) (refs https://github.com/iojs/build/issues/94)
## 0.6.6
- Updated with known io.js 2.0.0 version
## 0.6.5
- Now respecting `npm_config_node_gyp` (https://github.com/npm/npm/pull/4887)
- Updated to semver@4.3.2
- Updated known node v0.12.x versions and io.js 1.x versions.
## 0.6.4
- Improved support for `io.js` (@fengmk2)
- Test coverage improvements (@mikemorris)
- Fixed support for `--dist-url` that regressed in 0.6.3
## 0.6.3
- Added support for passing raw options to node-gyp using `--` separator. Flags passed after
the `--` to `node-pre-gyp configure` will be passed directly to gyp while flags passed
after the `--` will be passed directly to make/visual studio.
- Added `node-pre-gyp configure` command to be able to call `node-gyp configure` directly
- Fix issue with require validation not working on windows 7 (@edgarsilva)
## 0.6.2
- Support for io.js >= v1.0.2
- Deferred require of `request` and `tar` to help speed up command line usage of `node-pre-gyp`.
## 0.6.1
- Fixed bundled `tar` version
## 0.6.0
- BREAKING: node odd releases like v0.11.x now use `major.minor.patch` for `{node_abi}` instead of `NODE_MODULE_VERSION` (#124)
- Added support for `toolset` option in versioning. By default is an empty string but `--toolset` can be passed to publish or install to select alternative binaries that target a custom toolset like C++11. For example to target Visual Studio 2014 modules like node-sqlite3 use `--toolset=v140`.
- Added support for `--no-rollback` option to request that a failed binary test does not remove the binary module and instead leaves it in place.
- Added support for `--update-binary` option to request an existing binary be re-installed and the check for a valid local module be skipped.
- Added support for passing build options from `npm` through `node-pre-gyp` to `node-gyp`: `--nodedir`, `--disturl`, `--python`, and `--msvs_version`
## 0.5.31
- Added support for deducing node_abi for node.js runtime from previous release if the series is even
- Added support for --target=0.10.33
## 0.5.30
- Repackaged with latest bundled deps
## 0.5.29
- Added support for semver `build`.
- Fixed support for downloading from urls that include `+`.
## 0.5.28
- Now reporting unix style paths only in reveal command
## 0.5.27
- Fixed support for auto-detecting s3 bucket name when it contains `.` - @taavo
- Fixed support for installing when path contains a `'` - @halfdan
- Ported tests to mocha
## 0.5.26
- Fix node-webkit support when `--target` option is not provided
## 0.5.25
- Fix bundling of deps
## 0.5.24
- Updated ABI crosswalk to include node v0.10.30 and v0.10.31
## 0.5.23
- Added `reveal` command. Pass no options to get all versioning data as json. Pass a second arg to grab a single versioned property value
- Added support for `--silent` (shortcut for `--loglevel=silent`)
## 0.5.22
- Fixed node-webkit versioning name (NOTE: node-webkit support still experimental)
## 0.5.21
- New package to fix `shasum check failed` error with v0.5.20
## 0.5.20
- Now versioning node-webkit binaries based on major.minor.patch - assuming no compatible ABI across versions (#90)
## 0.5.19
- Updated to know about more node-webkit releases
## 0.5.18
- Updated to know about more node-webkit releases
## 0.5.17
- Updated to know about node v0.10.29 release
## 0.5.16
- Now supporting all aws-sdk configuration parameters (http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-configuring.html) (#86)
## 0.5.15
- Fixed installation of windows packages sub directories on unix systems (#84)
## 0.5.14
- Finished support for cross building using `--target_platform` option (#82)
- Now skipping binary validation on install if target arch/platform do not match the host.
- Removed multi-arch validating for OS X since it required a FAT node.js binary
## 0.5.13
- Fix problem in 0.5.12 whereby the wrong versions of mkdirp and semver were bundled.
## 0.5.12
- Improved support for node-webkit (@Mithgol)
## 0.5.11
- Updated target versions listing
## 0.5.10
- Fixed handling of `-debug` flag passed directly to node-pre-gyp (#72)
- Added optional second arg to `node_pre_gyp.find` to customize the default versioning options used to locate the runtime binary
- Failed install due to `testbinary` check failure no longer leaves behind binary (#70)
## 0.5.9
- Fixed regression in `testbinary` command causing installs to fail on windows with 0.5.7 (#60)
## 0.5.8
- Started bundling deps
## 0.5.7
- Fixed the `testbinary` check, which is used to determine whether to re-download or source compile, to work even in complex dependency situations (#63)
- Exposed the internal `testbinary` command in node-pre-gyp command line tool
- Fixed minor bug so that `fallback_to_build` option is always respected
## 0.5.6
- Added support for versioning on the `name` value in `package.json` (#57).
- Moved to using streams for reading tarball when publishing (#52)
## 0.5.5
- Improved binary validation that also now works with node-webkit (@Mithgol)
- Upgraded test apps to work with node v0.11.x
- Improved test coverage
## 0.5.4
- No longer depends on external install of node-gyp for compiling builds.
## 0.5.3
- Reverted fix for debian/nodejs since it broke windows (#45)
## 0.5.2
- Support for debian systems where the node binary is named `nodejs` (#45)
- Added `bin/node-pre-gyp.cmd` to be able to run the command on windows locally (npm creates a `.cmd` shim automatically when globally installed)
- Updated abi-crosswalk with node v0.10.26 entry.
## 0.5.1
- Various minor bug fixes, several improving windows support for publishing.
## 0.5.0
- Changed property names in `binary` object: now required are `module_name`, `module_path`, and `host`.
- Now `module_path` supports versioning, which allows developers to opt-in to using a versioned install path (#18).
- Added `remote_path` which also supports versioning.
- Changed `remote_uri` to `host`.
## 0.4.2
- Added support for `--target` flag to request cross-compile against a specific node/node-webkit version.
- Added preliminary support for node-webkit
- Fixed support for `--target_arch` option being respected in all cases.
## 0.4.1
- Fixed exception when only stderr is available in binary test (@bendi / #31)
## 0.4.0
- Enforce only `https:` based remote publishing access.
- Added `node-pre-gyp info` command to display listing of published binaries
- Added support for changing the directory node-pre-gyp should build in with the `-C/--directory` option.
- Added support for S3 prefixes.
## 0.3.1
- Added `unpublish` command.
- Fixed module path construction in tests.
- Added ability to disable falling back to build behavior via `npm install --fallback-to-build=false` which overrides the setting in a dependency's package.json `install` target.
## 0.3.0
- Support for packaging all files in `module_path` directory - see `app4` for example
- Added `testpackage` command.
- Changed `clean` command to only delete `.node` not entire `build` directory since node-gyp will handle that.
- `.node` modules must be in a folder of their own since tar-pack will remove everything when it unpacks.

node_modules/node-pre-gyp/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c), Mapbox
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of node-pre-gyp nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

node_modules/node-pre-gyp/README.md generated vendored Normal file

@@ -0,0 +1,690 @@
# node-pre-gyp
#### node-pre-gyp makes it easy to publish and install Node.js C++ addons from binaries
[![Build Status](https://travis-ci.com/mapbox/node-pre-gyp.svg?branch=master)](https://travis-ci.com/mapbox/node-pre-gyp)
[![Build status](https://ci.appveyor.com/api/projects/status/3nxewb425y83c0gv)](https://ci.appveyor.com/project/Mapbox/node-pre-gyp)
`node-pre-gyp` stands between [npm](https://github.com/npm/npm) and [node-gyp](https://github.com/Tootallnate/node-gyp) and offers a cross-platform method of binary deployment.
### Features
- A command line tool called `node-pre-gyp` that can install your package's C++ module from a binary.
- A variety of developer targeted commands for packaging, testing, and publishing binaries.
- A JavaScript module that can dynamically require your installed binary: `require('node-pre-gyp').find`
For a hello world example of a module packaged with `node-pre-gyp` see <https://github.com/springmeyer/node-addon-example> and [the wiki](https://github.com/mapbox/node-pre-gyp/wiki/Modules-using-node-pre-gyp) for real world examples.
## Credits
- The module is modeled after [node-gyp](https://github.com/Tootallnate/node-gyp) by [@Tootallnate](https://github.com/Tootallnate)
- Motivation for initial development came from [@ErisDS](https://github.com/ErisDS) and the [Ghost Project](https://github.com/TryGhost/Ghost).
- Development is sponsored by [Mapbox](https://www.mapbox.com/)
## FAQ
See the [Frequently Asked Questions](https://github.com/mapbox/node-pre-gyp/wiki/FAQ).
## Depends
- Node.js >= v6.x
## Install
`node-pre-gyp` is designed to be installed as a local dependency of your Node.js C++ addon and accessed like:
./node_modules/.bin/node-pre-gyp --help
But you can also install it globally:
npm install node-pre-gyp -g
## Usage
### Commands
View all possible commands:
node-pre-gyp --help
- clean - Remove the entire folder containing the compiled .node module
- install - Install pre-built binary for module
- reinstall - Run "clean" and "install" at once
- build - Compile the module by dispatching to node-gyp or nw-gyp
- rebuild - Run "clean" and "build" at once
- package - Pack binary into tarball
- testpackage - Test that the staged package is valid
- publish - Publish pre-built binary
- unpublish - Unpublish pre-built binary
- info - Fetch info on published binaries
You can also chain commands:
node-pre-gyp clean build unpublish publish info
### Options
Options include:
- `-C/--directory`: run the command in this directory
- `--build-from-source`: build from source instead of using pre-built binary
- `--update-binary`: reinstall by replacing previously installed local binary with remote binary
- `--runtime=node-webkit`: customize the runtime: `node`, `electron` and `node-webkit` are the valid options
- `--fallback-to-build`: fallback to building from source if pre-built binary is not available
- `--target=0.4.0`: Pass the target node or node-webkit version to compile against
- `--target_arch=ia32`: Pass the target arch and override the host `arch`. Valid values are `ia32`, `x64`, or `arm`.
- `--target_platform=win32`: Pass the target platform and override the host `platform`. Valid values are `linux`, `darwin`, `win32`, `sunos`, `freebsd`, `openbsd`, and `aix`.
Both `--build-from-source` and `--fallback-to-build` can be passed alone or they can provide values. You can pass `--fallback-to-build=false` to override the option as declared in package.json. In addition to being able to pass `--build-from-source` you can also pass `--build-from-source=myapp` where `myapp` is the name of your module.
For example: `npm install --build-from-source=myapp`. This is useful if:
- `myapp` is referenced in the package.json of a larger app and therefore `myapp` is being installed as a dependency with `npm install`.
- The larger app also depends on other modules installed with `node-pre-gyp`
- You only want to trigger a source compile for `myapp` and the other modules.
### Configuring
This is a guide to configuring your module to use node-pre-gyp.
#### 1) Add new entries to your `package.json`
- Add `node-pre-gyp` to `dependencies`
- Add `aws-sdk` as a `devDependency`
- Add a custom `install` script
- Declare a `binary` object
This looks like:
```js
"dependencies" : {
"node-pre-gyp": "0.6.x"
},
"devDependencies": {
"aws-sdk": "2.x"
}
"scripts": {
"install": "node-pre-gyp install --fallback-to-build"
},
"binary": {
"module_name": "your_module",
"module_path": "./lib/binding/",
"host": "https://your_module.s3-us-west-1.amazonaws.com"
}
```
For a full example see [node-addon-examples's package.json](https://github.com/springmeyer/node-addon-example/blob/master/package.json).
Let's break this down:
- Dependencies need to list `node-pre-gyp`
- Your devDependencies should list `aws-sdk` so that you can run `node-pre-gyp publish` locally or on a CI system. We recommend using `devDependencies` only since `aws-sdk` is large and not needed for `node-pre-gyp install` since it only uses http to fetch binaries
- Your `scripts` section should override the `install` target with `"install": "node-pre-gyp install --fallback-to-build"`. This allows node-pre-gyp to be used instead of the default npm behavior of always source compiling with `node-gyp` directly.
- Your package.json should contain a `binary` section describing key properties you provide to allow node-pre-gyp to package optimally. They are detailed below.
Note: in the past we recommended putting `node-pre-gyp` in the `bundledDependencies`, but we no longer recommend this. In the past there were npm bugs (with node versions 0.10.x) that could lead to node-pre-gyp not being available at the right time during install (unless we bundled). This should no longer be the case. Also, for a time we recommended using `"preinstall": "npm install node-pre-gyp"` as an alternative method to avoid needing to bundle. But this did not behave predictably across all npm versions - see https://github.com/mapbox/node-pre-gyp/issues/260 for the details. So we do not recommend using `preinstall` to install `node-pre-gyp`. More history on this at https://github.com/strongloop/fsevents/issues/157#issuecomment-265545908.
##### The `binary` object has three required properties
###### module_name
The name of your native node module. This value must:
- Match the name passed to [the NODE_MODULE macro](http://nodejs.org/api/addons.html#addons_hello_world)
- Must be a valid C variable name (e.g. it cannot contain `-`)
- Should not include the `.node` extension.
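As a quick illustration of these rules, here is a hypothetical check (not part of node-pre-gyp; the helper name is invented):
```js
// Hypothetical helper: module_name must be a valid C identifier,
// i.e. letters, digits, and underscores only, not starting with a digit.
function isValidModuleName(name) {
    return /^[A-Za-z_][A-Za-z0-9_]*$/.test(name);
}

console.log(isValidModuleName('your_module'));      // true
console.log(isValidModuleName('your-module'));      // false: contains '-'
console.log(isValidModuleName('your_module.node')); // false: omit the .node extension
```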
###### module_path
The location your native module is placed after a build. This should be an empty directory without other JavaScript files. This entire directory will be packaged in the binary tarball. When installing from a remote package this directory will be overwritten with the contents of the tarball.
Note: This property supports variables based on [Versioning](#versioning).
###### host
A url to the remote location where you've published tarball binaries (must be `https` not `http`).
It is highly recommended that you use Amazon S3. The reasons are:
- Various node-pre-gyp commands like `publish` and `info` only work with an S3 host.
- S3 is a very solid hosting platform for distributing large files.
- We provide detailed documentation for using [S3 hosting](#s3-hosting) with node-pre-gyp.
Why then not require S3? Because while some applications using node-pre-gyp need to distribute binaries as large as 20-30 MB, others might have very small binaries and might wish to store them in a GitHub repo. This is not recommended, but if an author really wants to host in a non-S3 location then it should be possible.
It should also be mentioned that there is an optional and entirely separate npm module called [node-pre-gyp-github](https://github.com/bchr02/node-pre-gyp-github) which is intended to complement node-pre-gyp and be installed along with it. It provides the ability to store and publish your binaries within your repository's GitHub Releases if you would rather not use S3 directly. Installation and usage instructions can be found [here](https://github.com/bchr02/node-pre-gyp-github), but the basic premise is that instead of using the ```node-pre-gyp publish``` command you would use ```node-pre-gyp-github publish```.
##### The `binary` object has two optional properties
###### remote_path
It **is recommended** that you customize this property. This is an extra path to use for publishing and finding remote tarballs. The default value for `remote_path` is `""` meaning that if you do not provide it then all packages will be published at the base of the `host`. It is recommended to provide a value like `./{name}/v{version}` to help organize remote packages in the case that you choose to publish multiple node addons to the same `host`.
Note: This property supports variables based on [Versioning](#versioning).
###### package_name
It is **not recommended** to override this property unless you are also overriding the `remote_path`. This is the versioned name of the remote tarball containing the binary `.node` module and any supporting files you've placed inside the `module_path` directory. Unless you specify `package_name` in your `package.json`, it defaults to `{module_name}-v{version}-{node_abi}-{platform}-{arch}.tar.gz`, which allows your binary to work across node versions, platforms, and architectures. If you are using a `remote_path` that is also versioned by `./{module_name}/v{version}` then you could remove these variables from the `package_name` and just use: `{node_abi}-{platform}-{arch}.tar.gz`. Then your remote tarball will be looked up at, for example, `https://example.com/your-module/v0.1.0/node-v11-linux-x64.tar.gz`.
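Putting that together, a sketch of a `binary` object pairing a versioned `remote_path` with a trimmed `package_name` (the host and module name are placeholders):
```js
"binary": {
    "module_name": "your_module",
    "module_path": "./lib/binding/",
    "remote_path": "./{module_name}/v{version}/",
    "package_name": "{node_abi}-{platform}-{arch}.tar.gz",
    "host": "https://example.com"
}
```
Under this configuration a tarball would be looked up at, for example, `https://example.com/your_module/v0.1.0/node-v11-linux-x64.tar.gz`.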
Omitting the version of your module from the `package_name` and instead embedding it only in a directory name can be useful when you want to make a quick tag of your module that does not change any C++ code. In this case you can just copy binaries to the new version behind the scenes like:
```sh
aws s3 sync --acl public-read s3://mapbox-node-binary/sqlite3/v3.0.3/ s3://mapbox-node-binary/sqlite3/v3.0.4/
```
Note: This property supports variables based on [Versioning](#versioning).
#### 2) Add a new target to binding.gyp
`node-pre-gyp` calls out to `node-gyp` to compile the module and passes variables along like [module_name](#module_name) and [module_path](#module_path).
A new target must be added to `binding.gyp` that moves the compiled `.node` module from `./build/Release/module_name.node` into the directory specified by `module_path`.
Add a target like this at the end of your `targets` list:
```js
{
    "target_name": "action_after_build",
    "type": "none",
    "dependencies": [ "<(module_name)" ],
    "copies": [
        {
            "files": [ "<(PRODUCT_DIR)/<(module_name).node" ],
            "destination": "<(module_path)"
        }
    ]
}
```
For a full example see [node-addon-example's binding.gyp](https://github.com/springmeyer/node-addon-example/blob/2ff60a8ded7f042864ad21db00c3a5a06cf47075/binding.gyp).
#### 3) Dynamically require your `.node`
Inside the main js file that requires your addon module you are likely currently doing:
```js
var binding = require('../build/Release/binding.node');
```
or:
```js
var bindings = require('./bindings')
```
Change those lines to:
```js
var binary = require('node-pre-gyp');
var path = require('path');
var binding_path = binary.find(path.resolve(path.join(__dirname,'./package.json')));
var binding = require(binding_path);
```
For a full example see [node-addon-example's index.js](https://github.com/springmeyer/node-addon-example/blob/2ff60a8ded7f042864ad21db00c3a5a06cf47075/index.js#L1-L4)
#### 4) Build and package your app
Now build your module from source:
npm install --build-from-source
The `--build-from-source` flag tells `node-pre-gyp` not to look for a remote package and instead to dispatch to node-gyp to build.
`node-pre-gyp` should now also be installed as a local dependency, so the command line tool it offers can be found at `./node_modules/.bin/node-pre-gyp`.
#### 5) Test
Now `npm test` should work just as it did before.
#### 6) Publish the tarball
Then package your app:
./node_modules/.bin/node-pre-gyp package
Once packaged, now you can publish:
./node_modules/.bin/node-pre-gyp publish
Currently the `publish` command pushes your binary to S3. This requires:
- You have installed `aws-sdk` with `npm install aws-sdk`
- You have created a bucket already.
- The `host` points to an S3 http or https endpoint.
- You have configured node-pre-gyp to read your S3 credentials (see [S3 hosting](#s3-hosting) for details).
You can also host your binaries elsewhere. To do this requires:
- You manually publish the binary created by the `package` command to an `https` endpoint
- Ensure that the `host` value points to your custom `https` endpoint.
#### 7) Automate builds
Now you need to publish builds for all the platforms and node versions you wish to support. This is best automated.
- See [Appveyor Automation](#appveyor-automation) for how to auto-publish builds on Windows.
- See [Travis Automation](#travis-automation) for how to auto-publish builds on OS X and Linux.
#### 8) You're done!
Now publish your module to the npm registry. Users will now be able to install your module from a binary.
What will happen is this:
1. `npm install <your package>` will pull from the npm registry
2. npm will run the `install` script which will call out to `node-pre-gyp`
3. `node-pre-gyp` will fetch the binary `.node` module and unpack in the right place
4. Assuming that all worked, you are done
If a binary was not available for a given platform and `--fallback-to-build` was used, then `node-gyp rebuild` will be called to try to source compile the module.
## N-API Considerations
[N-API](https://nodejs.org/api/n-api.html#n_api_n_api) is an ABI-stable alternative to previous technologies such as [nan](https://github.com/nodejs/nan) which are tied to a specific Node runtime engine. N-API is Node runtime engine agnostic and guarantees modules created today will continue to run, without changes, into the future.
Using `node-pre-gyp` with N-API projects requires a handful of additional configuration values and imposes some additional requirements.
The most significant difference is that an N-API module can be coded to target multiple N-API versions. Therefore, an N-API module must declare in its `package.json` file which N-API versions the module is designed to run against. In addition, since multiple builds may be required for a single module, path and file names must be specified in a way that avoids naming conflicts.
### The `napi_versions` array property
An N-API module must declare, in its `package.json` file, the N-API versions the module is intended to support. This is accomplished by including a `napi_versions` array property in the `binary` object. For example:
```js
"binary": {
"module_name": "your_module",
"module_path": "your_module_path",
"host": "https://your_bucket.s3-us-west-1.amazonaws.com",
"napi_versions": [1,3]
}
```
If the `napi_versions` array property is *not* present, `node-pre-gyp` operates as it always has. Including the `napi_versions` array property instructs `node-pre-gyp` that this is an N-API module build.
When the `napi_versions` array property is present, `node-pre-gyp` fires off multiple operations, one for each of the N-API versions in the array. In the example above, two operations are initiated, one for N-API version 1 and a second for N-API version 3. How this version number is communicated is described next.
### The `napi_build_version` value
For each of the N-API module operations `node-pre-gyp` initiates, it ensures that the `napi_build_version` is set appropriately.
This value is of importance in two areas:
1. The C/C++ code which needs to know against which N-API version it should compile.
2. `node-pre-gyp` itself which must assign appropriate path and file names to avoid collisions.
### Defining `NAPI_VERSION` for the C/C++ code
The `napi_build_version` value is communicated to the C/C++ code by adding this code to the `binding.gyp` file:
```
"defines": [
"NAPI_VERSION=<(napi_build_version)",
]
```
This ensures that `NAPI_VERSION`, an integer value, is declared appropriately to the C/C++ code for each build.
> Note that earlier versions of this document recommended defining the symbol `NAPI_BUILD_VERSION`. `NAPI_VERSION` is preferred because it is used by the N-API C/C++ headers to configure the specific N-API version being requested.
### Path and file naming requirements in `package.json`
Since `node-pre-gyp` fires off multiple operations for each request, it is essential that path and file names be created in such a way as to avoid collisions. This is accomplished by imposing additional path and file naming requirements.
Specifically, when performing N-API builds, the `{napi_build_version}` text configuration value *must* be present in the `module_path` property. In addition, the `{napi_build_version}` text configuration value *must* be present in either the `remote_path` or `package_name` property. (No problem if it's in both.)
Here's an example:
```js
"binary": {
"module_name": "your_module",
"module_path": "./lib/binding/napi-v{napi_build_version}",
"remote_path": "./{module_name}/v{version}/{configuration}/",
"package_name": "{platform}-{arch}-napi-v{napi_build_version}.tar.gz",
"host": "https://your_bucket.s3-us-west-1.amazonaws.com",
"napi_versions": [1,3]
}
```
## Supporting both N-API and NAN builds
You may have a legacy native add-on that you wish to continue supporting for those versions of Node that do not support N-API, as you add N-API support for later Node versions. This can be accomplished by specifying the `node_napi_label` configuration value in the package.json `binary.package_name` property.
Placing the configuration value `node_napi_label` in the package.json `binary.package_name` property instructs `node-pre-gyp` to build all viable N-API binaries supported by the current Node instance. If the current Node instance does not support N-API, `node-pre-gyp` will request a traditional, non-N-API build.
The configuration value `node_napi_label` is set by `node-pre-gyp` to the type of build created, `napi` or `node`, and the version number. For N-API builds, the string contains the N-API version and has values like `napi-v3`. For traditional, non-N-API builds, the string contains the ABI version with values like `node-v46`.
Here's how the `binary` configuration above might be changed to support both N-API and NAN builds:
```js
"binary": {
"module_name": "your_module",
"module_path": "./lib/binding/{node_napi_label}",
"remote_path": "./{module_name}/v{version}/{configuration}/",
"package_name": "{platform}-{arch}-{node_napi_label}.tar.gz",
"host": "https://your_bucket.s3-us-west-1.amazonaws.com",
"napi_versions": [1,3]
}
```
The C/C++ symbol `NAPI_VERSION` can be used to distinguish N-API and non-N-API builds. The value of `NAPI_VERSION` is set to the integer N-API version for N-API builds and is set to `0` for non-N-API builds.
For example:
```C
#if NAPI_VERSION
// N-API code goes here
#else
// NAN code goes here
#endif
```
### Two additional configuration values
The following two configuration values, which were implemented in previous versions of `node-pre-gyp`, continue to exist, but have been replaced by the `node_napi_label` configuration value described above.
1. `napi_version` If N-API is supported by the currently executing Node instance, this value is the N-API version number supported by Node. If N-API is not supported, this value is an empty string.
2. `node_abi_napi` If the value returned for `napi_version` is non-empty, this value is `'napi'`. If the value returned for `napi_version` is empty, this value is the value returned for `node_abi`.
These values are present for use in the `binding.gyp` file and may be used as `{napi_version}` and `{node_abi_napi}` for text substitution in the `binary` properties of the `package.json` file.
## S3 Hosting
You can host wherever you choose but S3 is cheap, `node-pre-gyp publish` expects it, and S3 can be integrated well with [Travis.ci](http://travis-ci.org) to automate builds for OS X and Ubuntu, and with [Appveyor](http://appveyor.com) to automate builds for Windows. Here is an approach to do this:
First, get setup locally and test the workflow:
#### 1) Create an S3 bucket
And have your **key** and **secret key** ready for writing to the bucket.
It is recommended to create an IAM user with a policy that only gives permissions to the specific bucket you plan to publish to. This can be done in the [IAM console](https://console.aws.amazon.com/iam/) by: 1) adding a new user, 2) choosing `Attach User Policy`, 3) Using the `Policy Generator`, 4) selecting `Amazon S3` for the service, 5) adding the actions: `DeleteObject`, `GetObject`, `GetObjectAcl`, `ListBucket`, `PutObject`, `PutObjectAcl`, 6) adding an ARN of `arn:aws:s3:::bucket/*` (replacing `bucket` with your bucket name), and finally 7) clicking `Add Statement` and saving the policy. It should generate a policy like:
```js
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1394587197000",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::node-pre-gyp-tests/*"
            ]
        }
    ]
}
```
#### 2) Install node-pre-gyp
Either install it globally:
npm install node-pre-gyp -g
Or put the local version on your PATH
export PATH=`pwd`/node_modules/.bin/:$PATH
#### 3) Configure AWS credentials
There are several ways to do this.
You can use any of the methods described at http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-configuring.html.
Or you can create a `~/.node_pre_gyprc`
Or pass options in any way supported by [RC](https://github.com/dominictarr/rc#standards)
A `~/.node_pre_gyprc` looks like:
```js
{
    "accessKeyId": "xxx",
    "secretAccessKey": "xxx"
}
```
Another way is to use your environment:
export node_pre_gyp_accessKeyId=xxx
export node_pre_gyp_secretAccessKey=xxx
You may also need to specify the `region` if it is not explicit in the `host` value you use. The `bucket` can also be specified but it is optional because `node-pre-gyp` will detect it from the `host` value.
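For example, a `~/.node_pre_gyprc` that also pins the region explicitly might look like this (a sketch; the key names follow the aws-sdk configuration parameters linked above, and the values are placeholders):
```js
{
    "accessKeyId": "xxx",
    "secretAccessKey": "xxx",
    "region": "us-west-1"
}
```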
#### 4) Package and publish your build
Install the `aws-sdk`:
npm install aws-sdk
Then publish:
node-pre-gyp package publish
Note: if you hit an error like `Hostname/IP doesn't match certificate's altnames` it may mean that you need to provide the `region` option in your config.
## Appveyor Automation
[Appveyor](http://www.appveyor.com/) can build binaries and publish the results per commit and supports:
- Windows Visual Studio 2013 and related compilers
- Both 64 bit (x64) and 32 bit (x86) build configurations
- Multiple Node.js versions
For an example of doing this see [node-sqlite3's appveyor.yml](https://github.com/mapbox/node-sqlite3/blob/master/appveyor.yml).
Below is a guide to getting set up:
#### 1) Create a free Appveyor account
Go to https://ci.appveyor.com/signup/free and sign in with your GitHub account.
#### 2) Create a new project
Go to https://ci.appveyor.com/projects/new and select the GitHub repo for your module
#### 3) Add appveyor.yml and push it
Once you have committed an `appveyor.yml` ([appveyor.yml reference](http://www.appveyor.com/docs/appveyor-yml)) to your GitHub repo and pushed it, AppVeyor should automatically start building your project.
#### 4) Create secure variables
Encrypt your S3 AWS keys by going to <https://ci.appveyor.com/tools/encrypt> and hitting the `encrypt` button.
Then paste the result into your `appveyor.yml`
```yml
environment:
  node_pre_gyp_accessKeyId:
    secure: Dn9HKdLNYvDgPdQOzRq/DqZ/MPhjknRHB1o+/lVU8MA=
  node_pre_gyp_secretAccessKey:
    secure: W1rwNoSnOku1r+28gnoufO8UA8iWADmL1LiiwH9IOkIVhDTNGdGPJqAlLjNqwLnL
```
NOTE: keys are per account but not per repo (this differs from Travis, where keys are per repo but not related to the account used to encrypt them).
#### 5) Hook up publishing
Just put `node-pre-gyp package publish` in your `appveyor.yml` after `npm install`.
#### 6) Publish when you want
You might wish to publish binaries only on a specific commit. To do this you could borrow from the [Travis CI idea of commit keywords](http://about.travis-ci.org/docs/user/how-to-skip-a-build/) and add special handling for commit messages with `[publish binary]`:
SET CM=%APPVEYOR_REPO_COMMIT_MESSAGE%
if not "%CM%" == "%CM:[publish binary]=%" node-pre-gyp --msvs_version=2013 publish
If your commit message contains special characters (e.g. `&`) this method might fail. An alternative is to use PowerShell, which gives you additional possibilities, like ignoring case by using `ToLower()`:
ps: if($env:APPVEYOR_REPO_COMMIT_MESSAGE.ToLower().Contains('[publish binary]')) { node-pre-gyp --msvs_version=2013 publish }
Remember this publishing is not the same as `npm publish`. We're just talking about the binary module here and not your entire npm package.
## Travis Automation
[Travis](https://travis-ci.org/) can push to S3 after a successful build and supports both:
- Ubuntu Precise and OS X (64 bit)
- Multiple Node.js versions
For an example of doing this see [node-addon-example's .travis.yml](https://github.com/springmeyer/node-addon-example/blob/2ff60a8ded7f042864ad21db00c3a5a06cf47075/.travis.yml).
Note: if you need 32 bit binaries, this can be done from a 64 bit Travis machine. See [the node-sqlite3 scripts for an example of doing this](https://github.com/mapbox/node-sqlite3/blob/bae122aa6a2b8a45f6b717fab24e207740e32b5d/scripts/build_against_node.sh#L54-L74).
Below is a guide to getting set up:
#### 1) Install the Travis gem
gem install travis
#### 2) Create secure variables
Make sure you run this command from within the directory of your module.
Use `travis-encrypt` like:
travis encrypt node_pre_gyp_accessKeyId=${node_pre_gyp_accessKeyId}
travis encrypt node_pre_gyp_secretAccessKey=${node_pre_gyp_secretAccessKey}
Then put those values in your `.travis.yml` like:
```yaml
env:
  global:
    - secure: F+sEL/v56CzHqmCSSES4pEyC9NeQlkoR0Gs/ZuZxX1ytrj8SKtp3MKqBj7zhIclSdXBz4Ev966Da5ctmcTd410p0b240MV6BVOkLUtkjZJyErMBOkeb8n8yVfSoeMx8RiIhBmIvEn+rlQq+bSFis61/JkE9rxsjkGRZi14hHr4M=
    - secure: o2nkUQIiABD139XS6L8pxq3XO5gch27hvm/gOdV+dzNKc/s2KomVPWcOyXNxtJGhtecAkABzaW8KHDDi5QL1kNEFx6BxFVMLO8rjFPsMVaBG9Ks6JiDQkkmrGNcnVdxI/6EKTLHTH5WLsz8+J7caDBzvKbEfTux5EamEhxIWgrI=
```
More details on Travis encryption at http://about.travis-ci.org/docs/user/encryption-keys/.
#### 3) Hook up publishing
Just put `node-pre-gyp package publish` in your `.travis.yml` after `npm install`.
##### OS X publishing
If you want binaries for OS X in addition to linux you can enable [multi-os for Travis](http://docs.travis-ci.com/user/multi-os/#Setting-.travis.yml)
Use a configuration like:
```yml
language: cpp
os:
  - linux
  - osx
env:
  matrix:
    - NODE_VERSION="4"
    - NODE_VERSION="6"
before_install:
  - rm -rf ~/.nvm/ && git clone --depth 1 https://github.com/creationix/nvm.git ~/.nvm
  - source ~/.nvm/nvm.sh
  - nvm install $NODE_VERSION
  - nvm use $NODE_VERSION
```
See [Travis OS X Gotchas](#travis-os-x-gotchas) for why we replace `language: node_js` and `node_js:` sections with `language: cpp` and a custom matrix.
Also create platform-specific sections for any deps that need to be installed. For example, if you need libpng:
```yml
- if [ $(uname -s) == 'Linux' ]; then apt-get install libpng-dev; fi;
- if [ $(uname -s) == 'Darwin' ]; then brew install libpng; fi;
```
For detailed multi-OS examples see [node-mapnik](https://github.com/mapnik/node-mapnik/blob/master/.travis.yml) and [node-sqlite3](https://github.com/mapbox/node-sqlite3/blob/master/.travis.yml).
##### Travis OS X Gotchas
First, unlike the Travis Linux machines, the OS X machines do not put `node-pre-gyp` on PATH by default. To do so you will need to:
```sh
export PATH=$(pwd)/node_modules/.bin:${PATH}
```
Second, the OS X machines do not support using a matrix for installing different Node.js versions. So you need to bootstrap the installation of Node.js in a cross-platform way.
By doing:
```yml
env:
  matrix:
    - NODE_VERSION="4"
    - NODE_VERSION="6"
before_install:
  - rm -rf ~/.nvm/ && git clone --depth 1 https://github.com/creationix/nvm.git ~/.nvm
  - source ~/.nvm/nvm.sh
  - nvm install $NODE_VERSION
  - nvm use $NODE_VERSION
```
You can easily recreate the previous behavior of this matrix:
```yml
node_js:
  - "4"
  - "6"
```
#### 4) Publish when you want
You might wish to publish binaries only on a specific commit. To do this you could borrow from the [Travis CI idea of commit keywords](http://about.travis-ci.org/docs/user/how-to-skip-a-build/) and add special handling for commit messages with `[publish binary]`:
COMMIT_MESSAGE=$(git log --format=%B --no-merges -n 1 | tr -d '\n')
if [[ ${COMMIT_MESSAGE} =~ "[publish binary]" ]]; then node-pre-gyp publish; fi;
Then you can trigger new binaries to be built like:
git commit -a -m "[publish binary]"
Or, if you don't have any changes to make simply run:
git commit --allow-empty -m "[publish binary]"
WARNING: if you are working in a pull request and publishing binaries from there then you will want to avoid double publishing when Travis CI builds both the `push` and `pr`. You only want to run the publish on the `push` commit. See https://github.com/Project-OSRM/node-osrm/blob/8eb837abe2e2e30e595093d16e5354bc5c573575/scripts/is_pr_merge.sh which is called from https://github.com/Project-OSRM/node-osrm/blob/8eb837abe2e2e30e595093d16e5354bc5c573575/scripts/publish.sh for an example of how to do this.
Remember this publishing is not the same as `npm publish`. We're just talking about the binary module here and not your entire npm package. To automate the publishing of your entire package to npm on Travis see http://about.travis-ci.org/docs/user/deployment/npm/
# Versioning
The `binary` properties of `module_path`, `remote_path`, and `package_name` support variable substitution. The strings are evaluated by `node-pre-gyp` depending on your system and any custom build flags you passed.
- `node_abi`: The node C++ `ABI` number. This value is available in JavaScript as `process.versions.modules` as of [`>= v0.10.4 >= v0.11.7`](https://github.com/joyent/node/commit/ccabd4a6fa8a6eb79d29bc3bbe9fe2b6531c2d8e) and in C++ as the `NODE_MODULE_VERSION` define much earlier. For versions of Node before this was available we fall back to the V8 major and minor version.
- `platform` matches node's `process.platform` like `linux`, `darwin`, and `win32` unless the user passed the `--target_platform` option to override.
- `arch` matches node's `process.arch` like `x64` or `ia32` unless the user passes the `--target_arch` option to override.
- `libc` matches `require('detect-libc').family` like `glibc` or `musl` unless the user passes the `--target_libc` option to override.
- `configuration` - Either 'Release' or 'Debug' depending on whether `--debug` is passed during the build.
- `module_name` - the `binary.module_name` attribute from `package.json`.
- `version` - the semver `version` value for your module from `package.json` (NOTE: ignores the `semver.build` property).
- `major`, `minor`, `patch`, and `prerelease` match the individual semver values for your module's `version`
- `build` - the semver `build` value. For example it would be `this.that` if your package.json `version` was `v1.0.0+this.that`
- `prerelease` - the semver `prerelease` value. For example it would be `alpha.beta` if your package.json `version` was `v1.0.0-alpha.beta`
The options are visible in the code at <https://github.com/mapbox/node-pre-gyp/blob/612b7bca2604508d881e1187614870ba19a7f0c5/lib/util/versioning.js#L114-L127>
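As a rough illustration of how the substitution plays out, here is a standalone sketch (not node-pre-gyp's actual implementation; the `node_abi` value is an example):
```js
// Hypothetical illustration of {variable} substitution in package_name.
const opts = {
    module_name: 'your_module',  // binary.module_name from package.json
    version: '1.0.0',            // the package.json version
    node_abi: 'node-v72',        // derived from process.versions.modules
    platform: process.platform,  // e.g. 'linux'
    arch: process.arch           // e.g. 'x64'
};

const template = '{module_name}-v{version}-{node_abi}-{platform}-{arch}.tar.gz';
const package_name = template.replace(/\{([^}]+)\}/g, (match, key) => opts[key]);

console.log(package_name);
// e.g. your_module-v1.0.0-node-v72-linux-x64.tar.gz
```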
# Download binary files from a mirror
S3 is broken in China for well-known reasons.
Using the npm config argument `--{module_name}_binary_host_mirror`, you can download binary files through a mirror.
e.g.: Install [v8-profiler](https://www.npmjs.com/package/v8-profiler) from `npm`.
```bash
$ npm install v8-profiler --profiler_binary_host_mirror=https://npm.taobao.org/mirrors/node-inspector/
```

node_modules/node-pre-gyp/bin/node-pre-gyp generated vendored Normal file

@@ -0,0 +1,134 @@
#!/usr/bin/env node
"use strict";

/**
 * Set the title.
 */
process.title = 'node-pre-gyp';

/**
 * Module dependencies.
 */
var node_pre_gyp = require('../');
var log = require('npmlog');

/**
 * Process and execute the selected commands.
 */
var prog = new node_pre_gyp.Run();
var completed = false;
prog.parseArgv(process.argv);

if (prog.todo.length === 0) {
    if (~process.argv.indexOf('-v') || ~process.argv.indexOf('--version')) {
        console.log('v%s', prog.version);
        return process.exit(0);
    } else if (~process.argv.indexOf('-h') || ~process.argv.indexOf('--help')) {
        console.log('%s', prog.usage());
        return process.exit(0);
    }
    console.log('%s', prog.usage());
    return process.exit(1);
}

// if --no-color is passed
if (prog.opts && prog.opts.hasOwnProperty('color') && !prog.opts.color) {
    log.disableColor();
}

log.info('it worked if it ends with', 'ok');
log.verbose('cli', process.argv);
log.info('using', process.title + '@%s', prog.version);
log.info('using', 'node@%s | %s | %s', process.versions.node, process.platform, process.arch);

/**
 * Change dir if -C/--directory was passed.
 */
var dir = prog.opts.directory;
if (dir) {
    var fs = require('fs');
    try {
        var stat = fs.statSync(dir);
        if (stat.isDirectory()) {
            log.info('chdir', dir);
            process.chdir(dir);
        } else {
            log.warn('chdir', dir + ' is not a directory');
        }
    } catch (e) {
        if (e.code === 'ENOENT') {
            log.warn('chdir', dir + ' is not a directory');
        } else {
            log.warn('chdir', 'error during chdir() "%s"', e.message);
        }
    }
}

function run () {
    var command = prog.todo.shift();
    if (!command) {
        // done!
        completed = true;
        log.info('ok');
        return;
    }

    prog.commands[command.name](command.args, function (err) {
        if (err) {
            log.error(command.name + ' error');
            log.error('stack', err.stack);
            errorMessage();
            log.error('not ok');
            console.log(err.message);
            return process.exit(1);
        }
        var args_array = [].slice.call(arguments, 1);
        if (args_array.length) {
            console.log.apply(console, args_array);
        }
        // now run the next command in the queue
        process.nextTick(run);
    });
}

process.on('exit', function (code) {
    if (!completed && !code) {
        log.error('Completion callback never invoked!');
        issueMessage();
        process.exit(6);
    }
});

process.on('uncaughtException', function (err) {
    log.error('UNCAUGHT EXCEPTION');
    log.error('stack', err.stack);
    issueMessage();
    process.exit(7);
});

function errorMessage () {
    // copied from npm's lib/util/error-handler.js
    var os = require('os');
    log.error('System', os.type() + ' ' + os.release());
    log.error('command', process.argv.map(JSON.stringify).join(' '));
    log.error('cwd', process.cwd());
    log.error('node -v', process.version);
    log.error(process.title + ' -v', 'v' + prog.package.version);
}

function issueMessage () {
    errorMessage();
    log.error('', ['This is a bug in `' + process.title + '`.',
        'Try to update ' + process.title + ' and file an issue if it does not help:',
        '    <https://github.com/mapbox/' + process.title + '/issues>',
    ].join('\n'));
}

// start running the given commands!
run();

node_modules/node-pre-gyp/bin/node-pre-gyp.cmd generated vendored Normal file

@@ -0,0 +1,2 @@
@echo off
node "%~dp0\node-pre-gyp" %*

node_modules/node-pre-gyp/contributing.md generated vendored Normal file

@@ -0,0 +1,10 @@
# Contributing
### Releasing a new version:
- Ensure tests are passing on travis and appveyor
- Run `node scripts/abi_crosswalk.js` and commit any changes
- Update the changelog
- Tag a new release like: `git tag -a v0.6.34 -m "tagging v0.6.34" && git push --tags`
- Run `npm publish`

node_modules/node-pre-gyp/node_modules/.bin/nopt generated vendored Normal file

@@ -0,0 +1,15 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")

case `uname` in
    *CYGWIN*|*MINGW*|*MSYS*) basedir=`cygpath -w "$basedir"`;;
esac

if [ -x "$basedir/node" ]; then
    "$basedir/node" "$basedir/../nopt/bin/nopt.js" "$@"
    ret=$?
else
    node "$basedir/../nopt/bin/nopt.js" "$@"
    ret=$?
fi
exit $ret

node_modules/node-pre-gyp/node_modules/.bin/nopt.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
SETLOCAL
CALL :find_dp0

IF EXIST "%dp0%\node.exe" (
  SET "_prog=%dp0%\node.exe"
) ELSE (
  SET "_prog=node"
  SET PATHEXT=%PATHEXT:;.JS;=;%
)

"%_prog%" "%dp0%\..\nopt\bin\nopt.js" %*
ENDLOCAL
EXIT /b %errorlevel%
:find_dp0
SET dp0=%~dp0
EXIT /b

node_modules/node-pre-gyp/node_modules/.bin/nopt.ps1 generated vendored Normal file

@@ -0,0 +1,18 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent

$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
  # Fix case when both the Windows and Linux builds of Node
  # are installed in the same directory
  $exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
  & "$basedir/node$exe" "$basedir/../nopt/bin/nopt.js" $args
  $ret=$LASTEXITCODE
} else {
  & "node$exe" "$basedir/../nopt/bin/nopt.js" $args
  $ret=$LASTEXITCODE
}
exit $ret

node_modules/node-pre-gyp/node_modules/.bin/rimraf generated vendored Normal file

@@ -0,0 +1,15 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")

case `uname` in
    *CYGWIN*|*MINGW*|*MSYS*) basedir=`cygpath -w "$basedir"`;;
esac

if [ -x "$basedir/node" ]; then
    "$basedir/node" "$basedir/../rimraf/bin.js" "$@"
    ret=$?
else
    node "$basedir/../rimraf/bin.js" "$@"
    ret=$?
fi
exit $ret

node_modules/node-pre-gyp/node_modules/.bin/rimraf.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
SETLOCAL
CALL :find_dp0

IF EXIST "%dp0%\node.exe" (
  SET "_prog=%dp0%\node.exe"
) ELSE (
  SET "_prog=node"
  SET PATHEXT=%PATHEXT:;.JS;=;%
)

"%_prog%" "%dp0%\..\rimraf\bin.js" %*
ENDLOCAL
EXIT /b %errorlevel%
:find_dp0
SET dp0=%~dp0
EXIT /b

node_modules/node-pre-gyp/node_modules/.bin/rimraf.ps1 generated vendored Normal file

@@ -0,0 +1,18 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent

$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
  # Fix case when both the Windows and Linux builds of Node
  # are installed in the same directory
  $exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
  & "$basedir/node$exe" "$basedir/../rimraf/bin.js" $args
  $ret=$LASTEXITCODE
} else {
  & "node$exe" "$basedir/../rimraf/bin.js" $args
  $ret=$LASTEXITCODE
}
exit $ret

node_modules/node-pre-gyp/node_modules/.bin/semver generated vendored Normal file

@@ -0,0 +1,15 @@
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")

case `uname` in
    *CYGWIN*|*MINGW*|*MSYS*) basedir=`cygpath -w "$basedir"`;;
esac

if [ -x "$basedir/node" ]; then
    "$basedir/node" "$basedir/../semver/bin/semver" "$@"
    ret=$?
else
    node "$basedir/../semver/bin/semver" "$@"
    ret=$?
fi
exit $ret

node_modules/node-pre-gyp/node_modules/.bin/semver.cmd generated vendored Normal file

@@ -0,0 +1,17 @@
@ECHO off
SETLOCAL
CALL :find_dp0

IF EXIST "%dp0%\node.exe" (
  SET "_prog=%dp0%\node.exe"
) ELSE (
  SET "_prog=node"
  SET PATHEXT=%PATHEXT:;.JS;=;%
)

"%_prog%" "%dp0%\..\semver\bin\semver" %*
ENDLOCAL
EXIT /b %errorlevel%
:find_dp0
SET dp0=%~dp0
EXIT /b

node_modules/node-pre-gyp/node_modules/.bin/semver.ps1 generated vendored Normal file

@@ -0,0 +1,18 @@
#!/usr/bin/env pwsh
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent

$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
  # Fix case when both the Windows and Linux builds of Node
  # are installed in the same directory
  $exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
  & "$basedir/node$exe" "$basedir/../semver/bin/semver" $args
  $ret=$LASTEXITCODE
} else {
  & "node$exe" "$basedir/../semver/bin/semver" $args
  $ret=$LASTEXITCODE
}
exit $ret

node_modules/node-pre-gyp/node_modules/chownr/LICENSE generated vendored Normal file

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

node_modules/node-pre-gyp/node_modules/chownr/README.md generated vendored Normal file

@@ -0,0 +1,3 @@
Like `chown -R`.
Takes the same arguments as `fs.chown()`
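A minimal usage sketch based on the exported API in `chownr.js` below (the path and the uid/gid values are placeholders):
```js
const chownr = require('chownr')

// Asynchronous, like `chown -R 501:20 ./some/dir`:
chownr('./some/dir', 501, 20, (er) => {
    if (er) throw er
    console.log('ownership changed recursively')
})

// Synchronous variant:
chownr.sync('./some/dir', 501, 20)
```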

node_modules/node-pre-gyp/node_modules/chownr/chownr.js generated vendored Normal file

@@ -0,0 +1,167 @@
'use strict'
const fs = require('fs')
const path = require('path')

/* istanbul ignore next */
const LCHOWN = fs.lchown ? 'lchown' : 'chown'
/* istanbul ignore next */
const LCHOWNSYNC = fs.lchownSync ? 'lchownSync' : 'chownSync'

/* istanbul ignore next */
const needEISDIRHandled = fs.lchown &&
  !process.version.match(/v1[1-9]+\./) &&
  !process.version.match(/v10\.[6-9]/)

const lchownSync = (path, uid, gid) => {
  try {
    return fs[LCHOWNSYNC](path, uid, gid)
  } catch (er) {
    if (er.code !== 'ENOENT')
      throw er
  }
}

/* istanbul ignore next */
const chownSync = (path, uid, gid) => {
  try {
    return fs.chownSync(path, uid, gid)
  } catch (er) {
    if (er.code !== 'ENOENT')
      throw er
  }
}

/* istanbul ignore next */
const handleEISDIR =
  needEISDIRHandled ? (path, uid, gid, cb) => er => {
    // Node prior to v10 had a very questionable implementation of
    // fs.lchown, which would always try to call fs.open on a directory
    // Fall back to fs.chown in those cases.
    if (!er || er.code !== 'EISDIR')
      cb(er)
    else
      fs.chown(path, uid, gid, cb)
  }
  : (_, __, ___, cb) => cb

/* istanbul ignore next */
const handleEISDirSync =
  needEISDIRHandled ? (path, uid, gid) => {
    try {
      return lchownSync(path, uid, gid)
    } catch (er) {
      if (er.code !== 'EISDIR')
        throw er
      chownSync(path, uid, gid)
    }
  }
  : (path, uid, gid) => lchownSync(path, uid, gid)

// fs.readdir could only accept an options object as of node v6
const nodeVersion = process.version
let readdir = (path, options, cb) => fs.readdir(path, options, cb)
let readdirSync = (path, options) => fs.readdirSync(path, options)
/* istanbul ignore next */
if (/^v4\./.test(nodeVersion))
  readdir = (path, options, cb) => fs.readdir(path, cb)

const chown = (cpath, uid, gid, cb) => {
  fs[LCHOWN](cpath, uid, gid, handleEISDIR(cpath, uid, gid, er => {
    // Skip ENOENT error
    cb(er && er.code !== 'ENOENT' ? er : null)
  }))
}

const chownrKid = (p, child, uid, gid, cb) => {
  if (typeof child === 'string')
    return fs.lstat(path.resolve(p, child), (er, stats) => {
      // Skip ENOENT error
      if (er)
        return cb(er.code !== 'ENOENT' ? er : null)
      stats.name = child
      chownrKid(p, stats, uid, gid, cb)
    })

  if (child.isDirectory()) {
    chownr(path.resolve(p, child.name), uid, gid, er => {
      if (er)
        return cb(er)
      const cpath = path.resolve(p, child.name)
      chown(cpath, uid, gid, cb)
    })
  } else {
    const cpath = path.resolve(p, child.name)
    chown(cpath, uid, gid, cb)
  }
}

const chownr = (p, uid, gid, cb) => {
  readdir(p, { withFileTypes: true }, (er, children) => {
    // any error other than ENOTDIR or ENOTSUP means it's not readable,
    // or doesn't exist. give up.
    if (er) {
      if (er.code === 'ENOENT')
        return cb()
      else if (er.code !== 'ENOTDIR' && er.code !== 'ENOTSUP')
        return cb(er)
    }
    if (er || !children.length)
      return chown(p, uid, gid, cb)

    let len = children.length
    let errState = null
    const then = er => {
      if (errState)
        return
      if (er)
        return cb(errState = er)
      if (-- len === 0)
        return chown(p, uid, gid, cb)
    }

    children.forEach(child => chownrKid(p, child, uid, gid, then))
  })
}

const chownrKidSync = (p, child, uid, gid) => {
  if (typeof child === 'string') {
    try {
      const stats = fs.lstatSync(path.resolve(p, child))
      stats.name = child
      child = stats
    } catch (er) {
      if (er.code === 'ENOENT')
        return
      else
        throw er
    }
  }

  if (child.isDirectory())
    chownrSync(path.resolve(p, child.name), uid, gid)

  handleEISDirSync(path.resolve(p, child.name), uid, gid)
}

const chownrSync = (p, uid, gid) => {
  let children
  try {
    children = readdirSync(p, { withFileTypes: true })
  } catch (er) {
    if (er.code === 'ENOENT')
      return
    else if (er.code === 'ENOTDIR' || er.code === 'ENOTSUP')
      return handleEISDirSync(p, uid, gid)
    else
      throw er
  }

  if (children && children.length)
    children.forEach(child => chownrKidSync(p, child, uid, gid))

  return handleEISDirSync(p, uid, gid)
}

module.exports = chownr
chownr.sync = chownrSync

62
node_modules/node-pre-gyp/node_modules/chownr/package.json generated vendored Normal file
View File

@@ -0,0 +1,62 @@
{
"_from": "chownr@^1.1.1",
"_id": "chownr@1.1.4",
"_inBundle": false,
"_integrity": "sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==",
"_location": "/node-pre-gyp/chownr",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "chownr@^1.1.1",
"name": "chownr",
"escapedName": "chownr",
"rawSpec": "^1.1.1",
"saveSpec": null,
"fetchSpec": "^1.1.1"
},
"_requiredBy": [
"/node-pre-gyp/tar"
],
"_resolved": "https://registry.npmjs.org/chownr/-/chownr-1.1.4.tgz",
"_shasum": "6fc9d7b42d32a583596337666e7d08084da2cc6b",
"_spec": "chownr@^1.1.1",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp\\node_modules\\tar",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bugs": {
"url": "https://github.com/isaacs/chownr/issues"
},
"bundleDependencies": false,
"deprecated": false,
"description": "like `chown -R`",
"devDependencies": {
"mkdirp": "0.3",
"rimraf": "^2.7.1",
"tap": "^14.10.6"
},
"files": [
"chownr.js"
],
"homepage": "https://github.com/isaacs/chownr#readme",
"license": "ISC",
"main": "chownr.js",
"name": "chownr",
"repository": {
"type": "git",
"url": "git://github.com/isaacs/chownr.git"
},
"scripts": {
"postversion": "npm publish",
"prepublishOnly": "git push origin --follow-tags",
"preversion": "npm test",
"test": "tap"
},
"tap": {
"check-coverage": true
},
"version": "1.1.4"
}

15
node_modules/node-pre-gyp/node_modules/fs-minipass/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

70
node_modules/node-pre-gyp/node_modules/fs-minipass/README.md generated vendored Normal file
View File

@@ -0,0 +1,70 @@
# fs-minipass
Filesystem streams based on [minipass](http://npm.im/minipass).
4 classes are exported:
- ReadStream
- ReadStreamSync
- WriteStream
- WriteStreamSync
When using `ReadStreamSync`, all of the data is made available
immediately upon consuming the stream. Nothing is buffered in memory
when the stream is constructed. If the stream is piped to a writer,
then it will synchronously `read()` and emit data into the writer as
fast as the writer can consume it. (That is, it will respect
backpressure.) If you call `stream.read()` then it will read the
entire file and return the contents.
When using `WriteStreamSync`, every write is flushed to the file
synchronously. If your writes all come in a single tick, then it'll
write it all out in a single tick. It's as synchronous as you are.
The async versions work much like their node builtin counterparts,
with the exception of introducing significantly less Stream machinery
overhead.
## USAGE
It's just streams, you pipe them or read() them or write() to them.
```js
const fsm = require('fs-minipass')
const readStream = new fsm.ReadStream('file.txt')
const writeStream = new fsm.WriteStream('output.txt')
writeStream.write('some file header or whatever\n')
readStream.pipe(writeStream)
```
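Given the synchronous variants described above, a short sketch (assuming `file.txt` exists and is smaller than the default 16MB `readSize`, so a single `read()` yields the whole file):

```js
const fsm = require('fs-minipass')

// every write is flushed to disk before the call returns
const ws = new fsm.WriteStreamSync('out.txt')
ws.write('header\n')
ws.end('body\n')

// the data is already available, so read() returns it immediately
const contents = new fsm.ReadStreamSync('file.txt', { encoding: 'utf8' }).read()
```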
## ReadStream(path, options)
Path string is required, but somewhat irrelevant if an open file
descriptor is passed in as an option.
Options:
- `fd` Pass in a numeric file descriptor, if the file is already open.
- `readSize` The size of reads to do, defaults to 16MB
- `size` The size of the file, if known. Prevents zero-byte read()
call at the end.
- `autoClose` Set to `false` to prevent the file descriptor from being
closed when the file is done being read.
## WriteStream(path, options)
Path string is required, but somewhat irrelevant if an open file
descriptor is passed in as an option.
Options:
- `fd` Pass in a numeric file descriptor, if the file is already open.
- `mode` The mode to create the file with. Defaults to `0o666`.
- `start` The position in the file to start writing. If not
specified, then the file will start writing at position zero, and be
truncated by default.
- `autoClose` Set to `false` to prevent the file descriptor from being
closed when the stream is ended.
- `flags` Flags to use when opening the file. Irrelevant if `fd` is
passed in, since the file won't be opened in that case. Defaults to
`'r+'` if a `start` position is specified, or `'w'` otherwise, as in the sketch below.
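A sketch combining several of these options (the file names are hypothetical; note that the `'r+'` default falls back to `'w'` if the file does not exist yet):

```js
const fs = require('fs')
const fsm = require('fs-minipass')

// reuse an already-open descriptor and tune the chunk size
const fd = fs.openSync('big.log', 'r')
const rs = new fsm.ReadStream('big.log', { fd, readSize: 1024 * 1024, autoClose: false })

// start writing at byte 100 without truncating the rest of the file
const ws = new fsm.WriteStream('patch.bin', { start: 100, mode: 0o600 })
```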

387
node_modules/node-pre-gyp/node_modules/fs-minipass/index.js generated vendored Normal file
View File

@@ -0,0 +1,387 @@
'use strict'
const MiniPass = require('minipass')
const EE = require('events').EventEmitter
const fs = require('fs')
// for writev
const binding = process.binding('fs')
const writeBuffers = binding.writeBuffers
/* istanbul ignore next */
const FSReqWrap = binding.FSReqWrap || binding.FSReqCallback
const _autoClose = Symbol('_autoClose')
const _close = Symbol('_close')
const _ended = Symbol('_ended')
const _fd = Symbol('_fd')
const _finished = Symbol('_finished')
const _flags = Symbol('_flags')
const _flush = Symbol('_flush')
const _handleChunk = Symbol('_handleChunk')
const _makeBuf = Symbol('_makeBuf')
const _mode = Symbol('_mode')
const _needDrain = Symbol('_needDrain')
const _onerror = Symbol('_onerror')
const _onopen = Symbol('_onopen')
const _onread = Symbol('_onread')
const _onwrite = Symbol('_onwrite')
const _open = Symbol('_open')
const _path = Symbol('_path')
const _pos = Symbol('_pos')
const _queue = Symbol('_queue')
const _read = Symbol('_read')
const _readSize = Symbol('_readSize')
const _reading = Symbol('_reading')
const _remain = Symbol('_remain')
const _size = Symbol('_size')
const _write = Symbol('_write')
const _writing = Symbol('_writing')
const _defaultFlag = Symbol('_defaultFlag')
class ReadStream extends MiniPass {
constructor (path, opt) {
opt = opt || {}
super(opt)
this.writable = false
if (typeof path !== 'string')
throw new TypeError('path must be a string')
this[_fd] = typeof opt.fd === 'number' ? opt.fd : null
this[_path] = path
this[_readSize] = opt.readSize || 16*1024*1024
this[_reading] = false
this[_size] = typeof opt.size === 'number' ? opt.size : Infinity
this[_remain] = this[_size]
this[_autoClose] = typeof opt.autoClose === 'boolean' ?
opt.autoClose : true
if (typeof this[_fd] === 'number')
this[_read]()
else
this[_open]()
}
get fd () { return this[_fd] }
get path () { return this[_path] }
write () {
throw new TypeError('this is a readable stream')
}
end () {
throw new TypeError('this is a readable stream')
}
[_open] () {
fs.open(this[_path], 'r', (er, fd) => this[_onopen](er, fd))
}
[_onopen] (er, fd) {
if (er)
this[_onerror](er)
else {
this[_fd] = fd
this.emit('open', fd)
this[_read]()
}
}
[_makeBuf] () {
return Buffer.allocUnsafe(Math.min(this[_readSize], this[_remain]))
}
[_read] () {
if (!this[_reading]) {
this[_reading] = true
const buf = this[_makeBuf]()
/* istanbul ignore if */
if (buf.length === 0) return process.nextTick(() => this[_onread](null, 0, buf))
fs.read(this[_fd], buf, 0, buf.length, null, (er, br, buf) =>
this[_onread](er, br, buf))
}
}
[_onread] (er, br, buf) {
this[_reading] = false
if (er)
this[_onerror](er)
else if (this[_handleChunk](br, buf))
this[_read]()
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
fs.close(this[_fd], _ => this.emit('close'))
this[_fd] = null
}
}
[_onerror] (er) {
this[_reading] = true
this[_close]()
this.emit('error', er)
}
[_handleChunk] (br, buf) {
let ret = false
// no effect if infinite
this[_remain] -= br
if (br > 0)
ret = super.write(br < buf.length ? buf.slice(0, br) : buf)
if (br === 0 || this[_remain] <= 0) {
ret = false
this[_close]()
super.end()
}
return ret
}
emit (ev, data) {
switch (ev) {
case 'prefinish':
case 'finish':
break
case 'drain':
if (typeof this[_fd] === 'number')
this[_read]()
break
default:
return super.emit(ev, data)
}
}
}
class ReadStreamSync extends ReadStream {
[_open] () {
let threw = true
try {
this[_onopen](null, fs.openSync(this[_path], 'r'))
threw = false
} finally {
if (threw)
this[_close]()
}
}
[_read] () {
let threw = true
try {
if (!this[_reading]) {
this[_reading] = true
do {
const buf = this[_makeBuf]()
/* istanbul ignore next */
const br = buf.length === 0 ? 0 : fs.readSync(this[_fd], buf, 0, buf.length, null)
if (!this[_handleChunk](br, buf))
break
} while (true)
this[_reading] = false
}
threw = false
} finally {
if (threw)
this[_close]()
}
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
try {
fs.closeSync(this[_fd])
} catch (er) {}
this[_fd] = null
this.emit('close')
}
}
}
class WriteStream extends EE {
constructor (path, opt) {
opt = opt || {}
super(opt)
this.readable = false
this[_writing] = false
this[_ended] = false
this[_needDrain] = false
this[_queue] = []
this[_path] = path
this[_fd] = typeof opt.fd === 'number' ? opt.fd : null
this[_mode] = opt.mode === undefined ? 0o666 : opt.mode
this[_pos] = typeof opt.start === 'number' ? opt.start : null
this[_autoClose] = typeof opt.autoClose === 'boolean' ?
opt.autoClose : true
// truncating makes no sense when writing into the middle
const defaultFlag = this[_pos] !== null ? 'r+' : 'w'
this[_defaultFlag] = opt.flags === undefined
this[_flags] = this[_defaultFlag] ? defaultFlag : opt.flags
if (this[_fd] === null)
this[_open]()
}
get fd () { return this[_fd] }
get path () { return this[_path] }
[_onerror] (er) {
this[_close]()
this[_writing] = true
this.emit('error', er)
}
[_open] () {
fs.open(this[_path], this[_flags], this[_mode],
(er, fd) => this[_onopen](er, fd))
}
[_onopen] (er, fd) {
if (this[_defaultFlag] &&
this[_flags] === 'r+' &&
er && er.code === 'ENOENT') {
this[_flags] = 'w'
this[_open]()
} else if (er)
this[_onerror](er)
else {
this[_fd] = fd
this.emit('open', fd)
this[_flush]()
}
}
end (buf, enc) {
if (buf)
this.write(buf, enc)
this[_ended] = true
// synthetic after-write logic, where drain/finish live
if (!this[_writing] && !this[_queue].length &&
typeof this[_fd] === 'number')
this[_onwrite](null, 0)
}
write (buf, enc) {
if (typeof buf === 'string')
buf = Buffer.from(buf, enc)
if (this[_ended]) {
this.emit('error', new Error('write() after end()'))
return false
}
if (this[_fd] === null || this[_writing] || this[_queue].length) {
this[_queue].push(buf)
this[_needDrain] = true
return false
}
this[_writing] = true
this[_write](buf)
return true
}
[_write] (buf) {
fs.write(this[_fd], buf, 0, buf.length, this[_pos], (er, bw) =>
this[_onwrite](er, bw))
}
[_onwrite] (er, bw) {
if (er)
this[_onerror](er)
else {
if (this[_pos] !== null)
this[_pos] += bw
if (this[_queue].length)
this[_flush]()
else {
this[_writing] = false
if (this[_ended] && !this[_finished]) {
this[_finished] = true
this[_close]()
this.emit('finish')
} else if (this[_needDrain]) {
this[_needDrain] = false
this.emit('drain')
}
}
}
}
[_flush] () {
if (this[_queue].length === 0) {
if (this[_ended])
this[_onwrite](null, 0)
} else if (this[_queue].length === 1)
this[_write](this[_queue].pop())
else {
const iovec = this[_queue]
this[_queue] = []
writev(this[_fd], iovec, this[_pos],
(er, bw) => this[_onwrite](er, bw))
}
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
fs.close(this[_fd], _ => this.emit('close'))
this[_fd] = null
}
}
}
class WriteStreamSync extends WriteStream {
[_open] () {
let fd
try {
fd = fs.openSync(this[_path], this[_flags], this[_mode])
} catch (er) {
if (this[_defaultFlag] &&
this[_flags] === 'r+' &&
er && er.code === 'ENOENT') {
this[_flags] = 'w'
return this[_open]()
} else
throw er
}
this[_onopen](null, fd)
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
try {
fs.closeSync(this[_fd])
} catch (er) {}
this[_fd] = null
this.emit('close')
}
}
[_write] (buf) {
try {
this[_onwrite](null,
fs.writeSync(this[_fd], buf, 0, buf.length, this[_pos]))
} catch (er) {
this[_onwrite](er, 0)
}
}
}
const writev = (fd, iovec, pos, cb) => {
const done = (er, bw) => cb(er, bw, iovec)
const req = new FSReqWrap()
req.oncomplete = done
binding.writeBuffers(fd, iovec, pos, req)
}
exports.ReadStream = ReadStream
exports.ReadStreamSync = ReadStreamSync
exports.WriteStream = WriteStream
exports.WriteStreamSync = WriteStreamSync

65
node_modules/node-pre-gyp/node_modules/fs-minipass/package.json generated vendored Normal file
View File

@@ -0,0 +1,65 @@
{
"_from": "fs-minipass@^1.2.5",
"_id": "fs-minipass@1.2.7",
"_inBundle": false,
"_integrity": "sha512-GWSSJGFy4e9GUeCcbIkED+bgAoFyj7XF1mV8rma3QW4NIqX9Kyx79N/PF61H5udOV3aY1IaMLs6pGbH71nlCTA==",
"_location": "/node-pre-gyp/fs-minipass",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "fs-minipass@^1.2.5",
"name": "fs-minipass",
"escapedName": "fs-minipass",
"rawSpec": "^1.2.5",
"saveSpec": null,
"fetchSpec": "^1.2.5"
},
"_requiredBy": [
"/node-pre-gyp/tar"
],
"_resolved": "https://registry.npmjs.org/fs-minipass/-/fs-minipass-1.2.7.tgz",
"_shasum": "ccff8570841e7fe4265693da88936c55aed7f7c7",
"_spec": "fs-minipass@^1.2.5",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp\\node_modules\\tar",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bugs": {
"url": "https://github.com/npm/fs-minipass/issues"
},
"bundleDependencies": false,
"dependencies": {
"minipass": "^2.6.0"
},
"deprecated": false,
"description": "fs read and write streams based on minipass",
"devDependencies": {
"mutate-fs": "^2.0.1",
"tap": "^14.6.4"
},
"files": [
"index.js"
],
"homepage": "https://github.com/npm/fs-minipass#readme",
"keywords": [],
"license": "ISC",
"main": "index.js",
"name": "fs-minipass",
"repository": {
"type": "git",
"url": "git+https://github.com/npm/fs-minipass.git"
},
"scripts": {
"postpublish": "git push origin --follow-tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap"
},
"tap": {
"check-coverage": true
},
"version": "1.2.7"
}

15
node_modules/node-pre-gyp/node_modules/minipass/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) npm, Inc. and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

606
node_modules/node-pre-gyp/node_modules/minipass/README.md generated vendored Normal file
View File

@@ -0,0 +1,606 @@
# minipass
A _very_ minimal implementation of a [PassThrough
stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough)
[It's very
fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0)
for objects, strings, and buffers.
Supports pipe()ing (including multi-pipe() and backpressure
transmission), buffering data until either a `data` event handler or
`pipe()` is added (so you don't lose the first chunk), and most other
cases where PassThrough is a good idea.
There is a `read()` method, but it's much more efficient to consume
data from this stream via `'data'` events or by calling `pipe()` into
some other stream. Calling `read()` requires the buffer to be
flattened in some cases, which requires copying memory.
There is also no `unpipe()` method. Once you start piping, there is
no stopping it!
If you set `objectMode: true` in the options, then whatever is written
will be emitted. Otherwise, it'll do a minimal amount of Buffer
copying to ensure proper Streams semantics when `read(n)` is called.
`objectMode` can also be set by doing `stream.objectMode = true`, or by
writing any non-string/non-buffer data. `objectMode` cannot be set to
false once it is set.
This is not a `through` or `through2` stream. It doesn't transform
the data, it just passes it right through. If you want to transform
the data, extend the class, and override the `write()` method. Once
you're done transforming the data however you want, call
`super.write()` with the transform output.
For some examples of streams that extend Minipass in various ways, check
out:
- [minizlib](http://npm.im/minizlib)
- [fs-minipass](http://npm.im/fs-minipass)
- [tar](http://npm.im/tar)
- [minipass-collect](http://npm.im/minipass-collect)
- [minipass-flush](http://npm.im/minipass-flush)
- [minipass-pipeline](http://npm.im/minipass-pipeline)
- [tap](http://npm.im/tap)
- [tap-parser](http://npm.im/tap)
- [treport](http://npm.im/tap)
## Differences from Node.js Streams
There are several things that make Minipass streams different from (and in
some ways superior to) Node.js core streams.
Please read these caveats if you are familiar with node-core streams and
intend to use Minipass streams in your programs.
### Timing
Minipass streams are designed to support synchronous use-cases. Thus, data
is emitted as soon as it is available, always. It is buffered until read,
but no longer. Another way to look at it is that Minipass streams are
exactly as synchronous as the logic that writes into them.
This can be surprising if your code relies on `PassThrough.write()` always
providing data on the next tick rather than the current one, or being able
to call `resume()` and not have the entire buffer disappear immediately.
However, without this synchronicity guarantee, there would be no way for
Minipass to achieve the speeds it does, or support the synchronous use
cases that it does. Simply put, waiting takes time.
This non-deferring approach makes Minipass streams much easier to reason
about, especially in the context of Promises and other flow-control
mechanisms.
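A sketch of that synchronicity; nothing here waits for the next tick:

```js
const Minipass = require('minipass')
const mp = new Minipass()
let seen = null
mp.on('data', chunk => seen = chunk)
mp.write('hello')            // 'data' fires during this write() call
console.log(seen.toString()) // 'hello' is already available
```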
### No High/Low Water Marks
Node.js core streams will optimistically fill up a buffer, returning `true`
on all writes until the limit is hit, even if the data has nowhere to go.
Then, they will not attempt to draw more data in until the buffer size dips
below a minimum value.
Minipass streams are much simpler. The `write()` method will return `true`
if the data has somewhere to go (which is to say, given the timing
guarantees, that the data is already there by the time `write()` returns).
If the data has nowhere to go, then `write()` returns false, and the data
sits in a buffer, to be drained out immediately as soon as anyone consumes
it.
### Hazards of Buffering (or: Why Minipass Is So Fast)
Since data written to a Minipass stream is immediately written all the way
through the pipeline, and `write()` always returns true/false based on
whether the data was fully flushed, backpressure is communicated
immediately to the upstream caller. This minimizes buffering.
Consider this case:
```js
const {PassThrough} = require('stream')
const p1 = new PassThrough({ highWaterMark: 1024 })
const p2 = new PassThrough({ highWaterMark: 1024 })
const p3 = new PassThrough({ highWaterMark: 1024 })
const p4 = new PassThrough({ highWaterMark: 1024 })
p1.pipe(p2).pipe(p3).pipe(p4)
p4.on('data', () => console.log('made it through'))
// this returns false and buffers, then writes to p2 on next tick (1)
// p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2)
// p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3)
// p4 returns false and buffers, pausing p3, then emits 'data' and 'drain'
// on next tick (4)
// p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and
// 'drain' on next tick (5)
// p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6)
// p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next
// tick (7)
p1.write(Buffer.alloc(2048)) // returns false
```
Along the way, the data was buffered and deferred at each stage, and
multiple event deferrals happened, for an unblocked pipeline where it was
perfectly safe to write all the way through!
Furthermore, setting a `highWaterMark` of `1024` might lead someone reading
the code to think an advisory maximum of 1KiB is being set for the
pipeline. However, the actual advisory buffering level is the _sum_ of
`highWaterMark` values, since each one has its own bucket.
Consider the Minipass case:
```js
const m1 = new Minipass()
const m2 = new Minipass()
const m3 = new Minipass()
const m4 = new Minipass()
m1.pipe(m2).pipe(m3).pipe(m4)
m4.on('data', () => console.log('made it through'))
// m1 is flowing, so it writes the data to m2 immediately
// m2 is flowing, so it writes the data to m3 immediately
// m3 is flowing, so it writes the data to m4 immediately
// m4 is flowing, so it fires the 'data' event immediately, returns true
// m4's write returned true, so m3 is still flowing, returns true
// m3's write returned true, so m2 is still flowing, returns true
// m2's write returned true, so m1 is still flowing, returns true
// No event deferrals or buffering along the way!
m1.write(Buffer.alloc(2048)) // returns true
```
It is extremely unlikely that you _don't_ want to buffer any data written,
or _ever_ buffer data that can be flushed all the way through. Neither
node-core streams nor Minipass ever fail to buffer written data, but
node-core streams do a lot of unnecessary buffering and pausing.
As always, the faster implementation is the one that does less stuff and
waits less time to do it.
### Immediately emit `end` for empty streams (when not paused)
If a stream is not paused, and `end()` is called before writing any data
into it, then it will emit `end` immediately.
If you have logic that occurs on the `end` event which you don't want to
potentially happen immediately (for example, closing file descriptors,
moving on to the next entry in an archive parse stream, etc.) then be sure
to call `stream.pause()` on creation, and then `stream.resume()` once you
are ready to respond to the `end` event.
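A sketch of that pause/resume dance:

```js
const Minipass = require('minipass')
const mp = new Minipass()
mp.pause()                              // hold the end-ish events
mp.end()                                // no 'end' yet, since we're paused
mp.on('end', () => console.log('done'))
mp.resume()                             // 'end' fires here
```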
### Emit `end` When Asked
One hazard of immediately emitting `'end'` is that you may not yet have had
a chance to add a listener. In order to avoid this hazard, Minipass
streams safely re-emit the `'end'` event if a new listener is added after
`'end'` has been emitted.
Ie, if you do `stream.on('end', someFunction)`, and the stream has already
emitted `end`, then it will call the handler right away. (You can think of
this somewhat like attaching a new `.then(fn)` to a previously-resolved
Promise.)
To prevent calling handlers multiple times who would not expect multiple
ends to occur, all listeners are removed from the `'end'` event whenever it
is emitted.
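For example:

```js
const Minipass = require('minipass')
const mp = new Minipass()
mp.end()                                // empty and unpaused: 'end' emits now
mp.on('end', () => console.log('late')) // fires anyway; 'end' is re-emitted
```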
### Impact of "immediate flow" on Tee-streams
A "tee stream" is a stream piping to multiple destinations:
```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```
Since Minipass streams _immediately_ process any pending data through the
pipeline when a new pipe destination is added, this can have surprising
effects, especially when a stream comes in from some other function and may
or may not have data in its buffer.
```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```
The solution is to create a dedicated tee-stream junction that pipes to
both locations, and then pipe to _that_ instead.
```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```
The same caveat applies to `on('data')` event listeners. The first one
added will _immediately_ receive all of the data, leaving nothing for the
second:
```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.on('data', handler1) // receives 'foo' right away
src.on('data', handler2) // nothing to see here!
```
Using a dedicated tee-stream can be used in this case as well:
```js
// Safe example: tee to both data handlers
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.on('data', handler1)
tee.on('data', handler2)
src.pipe(tee)
```
## USAGE
It's a stream! Use it like a stream and it'll most likely do what you want.
```js
const Minipass = require('minipass')
const mp = new Minipass(options) // optional: { encoding, objectMode }
mp.write('foo')
mp.pipe(someOtherStream)
mp.end('bar')
```
### OPTIONS
* `encoding` How would you like the data coming _out_ of the stream to be
encoded? Accepts any values that can be passed to `Buffer.toString()`.
* `objectMode` Emit data exactly as it comes in. This will be flipped on
by default if you write() something other than a string or Buffer at any
point. Setting `objectMode: true` will prevent setting any encoding
value.
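Putting those options together, a short sketch:

```js
const Minipass = require('minipass')
const mp = new Minipass({ encoding: 'utf8' })
mp.write(Buffer.from('abc'))
mp.on('data', s => console.log(typeof s)) // 'string': buffers are decoded on the way out
```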
### API
Implements the user-facing portions of Node.js's `Readable` and `Writable`
streams.
### Methods
* `write(chunk, [encoding], [callback])` - Put data in. (Note that, in the
base Minipass class, the same data will come out.) Returns `false` if
the stream will buffer the next write, or true if it's still in
"flowing" mode.
* `end([chunk, [encoding]], [callback])` - Signal that you have no more
data to write. This will queue an `end` event to be fired when all the
data has been consumed.
* `setEncoding(encoding)` - Set the encoding for data coming out of the
stream. This can only be done once.
* `pause()` - No more data for a while, please. This also prevents `end`
from being emitted for empty streams until the stream is resumed.
* `resume()` - Resume the stream. If there's data in the buffer, it is
all discarded. Any buffered events are immediately emitted.
* `pipe(dest)` - Send all output to the stream provided. There is no way
to unpipe. When data is emitted, it is immediately written to any and
all pipe destinations.
* `on(ev, fn)`, `emit(ev, fn)` - Minipass streams are EventEmitters.
Some events are given special treatment, however. (See below under
"events".)
* `promise()` - Returns a Promise that resolves when the stream emits
`end`, or rejects if the stream emits `error`.
* `collect()` - Return a Promise that resolves on `end` with an array
containing each chunk of data that was emitted, or rejects if the
stream emits `error`. Note that this consumes the stream data.
* `concat()` - Same as `collect()`, but concatenates the data into a
single Buffer object. Will reject the returned promise if the stream is
in objectMode, or if it goes into objectMode by the end of the data.
* `read(n)` - Consume `n` bytes of data out of the buffer. If `n` is not
provided, then consume all of it. If `n` bytes are not available, then
it returns null. **Note** consuming streams in this way is less
efficient, and can lead to unnecessary Buffer copying.
* `destroy([er])` - Destroy the stream. If an error is provided, then an
`'error'` event is emitted. If the stream has a `close()` method, and
has not emitted a `'close'` event yet, then `stream.close()` will be
called. Any Promises returned by `.promise()`, `.collect()` or
`.concat()` will be rejected. After being destroyed, writing to the
stream will emit an error. No more data will be emitted if the stream is
destroyed, even if it was previously buffered.
### Properties
* `bufferLength` Read-only. Total number of bytes buffered, or in the case
of objectMode, the total number of objects.
* `encoding` The encoding that has been set. (Setting this is equivalent
to calling `setEncoding(enc)` and has the same prohibition against
setting multiple times.)
* `flowing` Read-only. Boolean indicating whether a chunk written to the
stream will be immediately emitted.
* `emittedEnd` Read-only. Boolean indicating whether the end-ish events
(ie, `end`, `prefinish`, `finish`) have been emitted. Note that
listening on any end-ish event will immediately re-emit it if it has
already been emitted.
* `writable` Whether the stream is writable. Default `true`. Set to
`false` when `end()` is called.
* `readable` Whether the stream is readable. Default `true`.
* `buffer` A [yallist](http://npm.im/yallist) linked list of chunks written
to the stream that have not yet been emitted. (It's probably a bad idea
to mess with this.)
* `pipes` A [yallist](http://npm.im/yallist) linked list of streams that
this stream is piping into. (It's probably a bad idea to mess with
this.)
* `destroyed` A getter that indicates whether the stream was destroyed.
* `paused` True if the stream has been explicitly paused, otherwise false.
* `objectMode` Indicates whether the stream is in `objectMode`. Once set
to `true`, it cannot be set to `false`.
### Events
* `data` Emitted when there's data to read. Argument is the data to read.
This is never emitted while not flowing. If a listener is attached, that
will resume the stream.
* `end` Emitted when there's no more data to read. This will be emitted
immediately for empty streams when `end()` is called. If a listener is
attached, and `end` was already emitted, then it will be emitted again.
All listeners are removed when `end` is emitted.
* `prefinish` An end-ish event that follows the same logic as `end` and is
emitted in the same conditions where `end` is emitted. Emitted after
`'end'`.
* `finish` An end-ish event that follows the same logic as `end` and is
emitted in the same conditions where `end` is emitted. Emitted after
`'prefinish'`.
* `close` An indication that an underlying resource has been released.
Minipass does not emit this event, but will defer it until after `end`
has been emitted, since it throws off some stream libraries otherwise.
* `drain` Emitted when the internal buffer empties, and it is again
suitable to `write()` into the stream.
* `readable` Emitted when data is buffered and ready to be read by a
consumer.
* `resume` Emitted when stream changes state from buffering to flowing
mode. (Ie, when `resume` is called, `pipe` is called, or a `data` event
listener is added.)
### Static Methods
* `Minipass.isStream(stream)` Returns `true` if the argument is a stream,
and false otherwise. To be considered a stream, the object must be
either an instance of Minipass, or an EventEmitter that has either a
`pipe()` method, or both `write()` and `end()` methods. (Pretty much any
stream in node-land will return `true` for this.)
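For instance:

```js
const Minipass = require('minipass')
Minipass.isStream(new Minipass())  // true
Minipass.isStream(process.stdout)  // true: an EventEmitter with write() and end()
Minipass.isStream('nope')          // false
```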
## EXAMPLES
Here are some examples of things you can do with Minipass streams.
### simple "are you done yet" promise
```js
mp.promise().then(() => {
// stream is finished
}, er => {
// stream emitted an error
})
```
### collecting
```js
mp.collect().then(all => {
// all is an array of all the data emitted
// encoding is supported in this case, so
// the result will be a collection of strings if
// an encoding is specified, or buffers/objects if not.
//
// In an async function, you may do
// const data = await stream.collect()
})
```
### collecting into a single blob
This is a bit slower because it concatenates the data into one chunk for
you, but if you're going to do it yourself anyway, it's convenient this
way:
```js
mp.concat().then(onebigchunk => {
// onebigchunk is a string if the stream
// had an encoding set, or a buffer otherwise.
})
```
### iteration
You can iterate over streams synchronously or asynchronously in
platforms that support it.
Synchronous iteration will end when the currently available data is
consumed, even if the `end` event has not been reached. In string and
buffer mode, the data is concatenated, so unless multiple writes are
occurring in the same tick as the `read()`, sync iteration loops will
generally only have a single iteration.
To consume chunks in this way exactly as they have been written, with
no flattening, create the stream with the `{ objectMode: true }`
option.
```js
const mp = new Minipass({ objectMode: true })
mp.write('a')
mp.write('b')
for (let letter of mp) {
console.log(letter) // a, b
}
mp.write('c')
mp.write('d')
for (let letter of mp) {
console.log(letter) // c, d
}
mp.write('e')
mp.end()
for (let letter of mp) {
console.log(letter) // e
}
for (let letter of mp) {
console.log(letter) // nothing
}
```
Asynchronous iteration will continue until the end event is reached,
consuming all of the data.
```js
const mp = new Minipass({ encoding: 'utf8' })
// some source of some data
let i = 5
const inter = setInterval(() => {
if (i --> 0)
mp.write(Buffer.from('foo\n', 'utf8'))
else {
mp.end()
clearInterval(inter)
}
}, 100)
// consume the data with asynchronous iteration
async function consume () {
for await (let chunk of mp) {
console.log(chunk)
}
return 'ok'
}
consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```
### subclass that `console.log()`s everything written into it
```js
class Logger extends Minipass {
write (chunk, encoding, callback) {
console.log('WRITE', chunk, encoding)
return super.write(chunk, encoding, callback)
}
end (chunk, encoding, callback) {
console.log('END', chunk, encoding)
return super.end(chunk, encoding, callback)
}
}
someSource.pipe(new Logger()).pipe(someDest)
```
### same thing, but using an inline anonymous class
```js
// js classes are fun
someSource
.pipe(new (class extends Minipass {
emit (ev, ...data) {
// let's also log events, because debugging some weird thing
console.log('EMIT', ev)
return super.emit(ev, ...data)
}
write (chunk, encoding, callback) {
console.log('WRITE', chunk, encoding)
return super.write(chunk, encoding, callback)
}
end (chunk, encoding, callback) {
console.log('END', chunk, encoding)
return super.end(chunk, encoding, callback)
}
}))
.pipe(someDest)
```
### subclass that defers 'end' for some reason
```js
class SlowEnd extends Minipass {
emit (ev, ...args) {
if (ev === 'end') {
console.log('going to end, hold on a sec')
setTimeout(() => {
console.log('ok, ready to end now')
super.emit('end', ...args)
}, 100)
} else {
return super.emit(ev, ...args)
}
}
}
```
### transform that creates newline-delimited JSON
```js
class NDJSONEncode extends Minipass {
write (obj, cb) {
try {
// JSON.stringify can throw, emit an error on that
return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
} catch (er) {
this.emit('error', er)
}
}
end (obj, cb) {
if (typeof obj === 'function') {
cb = obj
obj = undefined
}
if (obj !== undefined) {
this.write(obj)
}
return super.end(cb)
}
}
```
### transform that parses newline-delimited JSON
```js
class NDJSONDecode extends Minipass {
constructor (options) {
// always be in object mode, as far as Minipass is concerned
super({ objectMode: true })
this._jsonBuffer = ''
}
write (chunk, encoding, cb) {
if (typeof chunk === 'string' &&
typeof encoding === 'string' &&
encoding !== 'utf8') {
chunk = Buffer.from(chunk, encoding).toString()
} else if (Buffer.isBuffer(chunk))
chunk = chunk.toString()
if (typeof encoding === 'function')
cb = encoding
const jsonData = (this._jsonBuffer + chunk).split('\n')
this._jsonBuffer = jsonData.pop()
for (let i = 0; i < jsonData.length; i++) {
let parsed
try {
// JSON.parse can throw; emit 'error' instead of crashing
parsed = JSON.parse(jsonData[i])
} catch (er) {
this.emit('error', er)
continue
}
super.write(parsed)
}
if (cb)
cb()
}
}
```

537
node_modules/node-pre-gyp/node_modules/minipass/index.js generated vendored Normal file
View File

@@ -0,0 +1,537 @@
'use strict'
const EE = require('events')
const Yallist = require('yallist')
const SD = require('string_decoder').StringDecoder
const EOF = Symbol('EOF')
const MAYBE_EMIT_END = Symbol('maybeEmitEnd')
const EMITTED_END = Symbol('emittedEnd')
const EMITTING_END = Symbol('emittingEnd')
const CLOSED = Symbol('closed')
const READ = Symbol('read')
const FLUSH = Symbol('flush')
const FLUSHCHUNK = Symbol('flushChunk')
const ENCODING = Symbol('encoding')
const DECODER = Symbol('decoder')
const FLOWING = Symbol('flowing')
const PAUSED = Symbol('paused')
const RESUME = Symbol('resume')
const BUFFERLENGTH = Symbol('bufferLength')
const BUFFERPUSH = Symbol('bufferPush')
const BUFFERSHIFT = Symbol('bufferShift')
const OBJECTMODE = Symbol('objectMode')
const DESTROYED = Symbol('destroyed')
// TODO remove when Node v8 support drops
const doIter = global._MP_NO_ITERATOR_SYMBOLS_ !== '1'
const ASYNCITERATOR = doIter && Symbol.asyncIterator
|| Symbol('asyncIterator not implemented')
const ITERATOR = doIter && Symbol.iterator
|| Symbol('iterator not implemented')
// Buffer in node 4.x < 4.5.0 doesn't have working Buffer.from
// or Buffer.alloc, and Buffer in node 10 deprecated the ctor.
// .M, this is fine .\^/M..
const B = Buffer.alloc ? Buffer
: /* istanbul ignore next */ require('safe-buffer').Buffer
// events that mean 'the stream is over'
// these are treated specially, and re-emitted
// if they are listened for after emitting.
const isEndish = ev =>
ev === 'end' ||
ev === 'finish' ||
ev === 'prefinish'
const isArrayBuffer = b => b instanceof ArrayBuffer ||
typeof b === 'object' &&
b.constructor &&
b.constructor.name === 'ArrayBuffer' &&
b.byteLength >= 0
const isArrayBufferView = b => !B.isBuffer(b) && ArrayBuffer.isView(b)
module.exports = class Minipass extends EE {
constructor (options) {
super()
this[FLOWING] = false
// whether we're explicitly paused
this[PAUSED] = false
this.pipes = new Yallist()
this.buffer = new Yallist()
this[OBJECTMODE] = options && options.objectMode || false
if (this[OBJECTMODE])
this[ENCODING] = null
else
this[ENCODING] = options && options.encoding || null
if (this[ENCODING] === 'buffer')
this[ENCODING] = null
this[DECODER] = this[ENCODING] ? new SD(this[ENCODING]) : null
this[EOF] = false
this[EMITTED_END] = false
this[EMITTING_END] = false
this[CLOSED] = false
this.writable = true
this.readable = true
this[BUFFERLENGTH] = 0
this[DESTROYED] = false
}
get bufferLength () { return this[BUFFERLENGTH] }
get encoding () { return this[ENCODING] }
set encoding (enc) {
if (this[OBJECTMODE])
throw new Error('cannot set encoding in objectMode')
if (this[ENCODING] && enc !== this[ENCODING] &&
(this[DECODER] && this[DECODER].lastNeed || this[BUFFERLENGTH]))
throw new Error('cannot change encoding')
if (this[ENCODING] !== enc) {
this[DECODER] = enc ? new SD(enc) : null
if (this.buffer.length)
this.buffer = this.buffer.map(chunk => this[DECODER].write(chunk))
}
this[ENCODING] = enc
}
setEncoding (enc) {
this.encoding = enc
}
get objectMode () { return this[OBJECTMODE] }
set objectMode (om) { this[OBJECTMODE] = this[OBJECTMODE] || !!om }
write (chunk, encoding, cb) {
if (this[EOF])
throw new Error('write after end')
if (this[DESTROYED]) {
this.emit('error', Object.assign(
new Error('Cannot call write after a stream was destroyed'),
{ code: 'ERR_STREAM_DESTROYED' }
))
return true
}
if (typeof encoding === 'function')
cb = encoding, encoding = 'utf8'
if (!encoding)
encoding = 'utf8'
// convert array buffers and typed array views into buffers
// at some point in the future, we may want to do the opposite!
// leave strings and buffers as-is
// anything else switches us into object mode
if (!this[OBJECTMODE] && !B.isBuffer(chunk)) {
if (isArrayBufferView(chunk))
chunk = B.from(chunk.buffer, chunk.byteOffset, chunk.byteLength)
else if (isArrayBuffer(chunk))
chunk = B.from(chunk)
else if (typeof chunk !== 'string')
// use the setter so we throw if we have encoding set
this.objectMode = true
}
// this ensures at this point that the chunk is a buffer or string
// don't buffer it up or send it to the decoder
if (!this.objectMode && !chunk.length) {
const ret = this.flowing
if (this[BUFFERLENGTH] !== 0)
this.emit('readable')
if (cb)
cb()
return ret
}
// fast-path writing strings of same encoding to a stream with
// an empty buffer, skipping the buffer/decoder dance
if (typeof chunk === 'string' && !this[OBJECTMODE] &&
// unless it is a string already ready for us to use
!(encoding === this[ENCODING] && !this[DECODER].lastNeed)) {
chunk = B.from(chunk, encoding)
}
if (B.isBuffer(chunk) && this[ENCODING])
chunk = this[DECODER].write(chunk)
try {
return this.flowing
? (this.emit('data', chunk), this.flowing)
: (this[BUFFERPUSH](chunk), false)
} finally {
if (this[BUFFERLENGTH] !== 0)
this.emit('readable')
if (cb)
cb()
}
}
read (n) {
if (this[DESTROYED])
return null
try {
if (this[BUFFERLENGTH] === 0 || n === 0 || n > this[BUFFERLENGTH])
return null
if (this[OBJECTMODE])
n = null
if (this.buffer.length > 1 && !this[OBJECTMODE]) {
if (this.encoding)
this.buffer = new Yallist([
Array.from(this.buffer).join('')
])
else
this.buffer = new Yallist([
B.concat(Array.from(this.buffer), this[BUFFERLENGTH])
])
}
return this[READ](n || null, this.buffer.head.value)
} finally {
this[MAYBE_EMIT_END]()
}
}
[READ] (n, chunk) {
if (n === chunk.length || n === null)
this[BUFFERSHIFT]()
else {
this.buffer.head.value = chunk.slice(n)
chunk = chunk.slice(0, n)
this[BUFFERLENGTH] -= n
}
this.emit('data', chunk)
if (!this.buffer.length && !this[EOF])
this.emit('drain')
return chunk
}
end (chunk, encoding, cb) {
if (typeof chunk === 'function')
cb = chunk, chunk = null
if (typeof encoding === 'function')
cb = encoding, encoding = 'utf8'
if (chunk)
this.write(chunk, encoding)
if (cb)
this.once('end', cb)
this[EOF] = true
this.writable = false
// if we haven't written anything, then go ahead and emit,
// even if we're not reading.
// we'll re-emit if a new 'end' listener is added anyway.
// This makes MP more suitable to write-only use cases.
if (this.flowing || !this[PAUSED])
this[MAYBE_EMIT_END]()
return this
}
// don't let the internal resume be overwritten
[RESUME] () {
if (this[DESTROYED])
return
this[PAUSED] = false
this[FLOWING] = true
this.emit('resume')
if (this.buffer.length)
this[FLUSH]()
else if (this[EOF])
this[MAYBE_EMIT_END]()
else
this.emit('drain')
}
resume () {
return this[RESUME]()
}
pause () {
this[FLOWING] = false
this[PAUSED] = true
}
get destroyed () {
return this[DESTROYED]
}
get flowing () {
return this[FLOWING]
}
get paused () {
return this[PAUSED]
}
[BUFFERPUSH] (chunk) {
if (this[OBJECTMODE])
this[BUFFERLENGTH] += 1
else
this[BUFFERLENGTH] += chunk.length
return this.buffer.push(chunk)
}
[BUFFERSHIFT] () {
if (this.buffer.length) {
if (this[OBJECTMODE])
this[BUFFERLENGTH] -= 1
else
this[BUFFERLENGTH] -= this.buffer.head.value.length
}
return this.buffer.shift()
}
[FLUSH] () {
do {} while (this[FLUSHCHUNK](this[BUFFERSHIFT]()))
if (!this.buffer.length && !this[EOF])
this.emit('drain')
}
[FLUSHCHUNK] (chunk) {
return chunk ? (this.emit('data', chunk), this.flowing) : false
}
pipe (dest, opts) {
if (this[DESTROYED])
return
const ended = this[EMITTED_END]
opts = opts || {}
if (dest === process.stdout || dest === process.stderr)
opts.end = false
else
opts.end = opts.end !== false
const p = { dest: dest, opts: opts, ondrain: _ => this[RESUME]() }
this.pipes.push(p)
dest.on('drain', p.ondrain)
this[RESUME]()
// piping an ended stream ends immediately
if (ended && p.opts.end)
p.dest.end()
return dest
}
addListener (ev, fn) {
return this.on(ev, fn)
}
on (ev, fn) {
try {
return super.on(ev, fn)
} finally {
if (ev === 'data' && !this.pipes.length && !this.flowing)
this[RESUME]()
else if (isEndish(ev) && this[EMITTED_END]) {
super.emit(ev)
this.removeAllListeners(ev)
}
}
}
get emittedEnd () {
return this[EMITTED_END]
}
[MAYBE_EMIT_END] () {
if (!this[EMITTING_END] &&
!this[EMITTED_END] &&
!this[DESTROYED] &&
this.buffer.length === 0 &&
this[EOF]) {
this[EMITTING_END] = true
this.emit('end')
this.emit('prefinish')
this.emit('finish')
if (this[CLOSED])
this.emit('close')
this[EMITTING_END] = false
}
}
emit (ev, data) {
// error and close are only events allowed after calling destroy()
if (ev !== 'error' && ev !== 'close' && ev !== DESTROYED && this[DESTROYED])
return
else if (ev === 'data') {
if (!data)
return
if (this.pipes.length)
this.pipes.forEach(p =>
p.dest.write(data) === false && this.pause())
} else if (ev === 'end') {
// only actual end gets this treatment
if (this[EMITTED_END] === true)
return
this[EMITTED_END] = true
this.readable = false
if (this[DECODER]) {
data = this[DECODER].end()
if (data) {
this.pipes.forEach(p => p.dest.write(data))
super.emit('data', data)
}
}
this.pipes.forEach(p => {
p.dest.removeListener('drain', p.ondrain)
if (p.opts.end)
p.dest.end()
})
} else if (ev === 'close') {
this[CLOSED] = true
// don't emit close before 'end' and 'finish'
if (!this[EMITTED_END] && !this[DESTROYED])
return
}
// TODO: replace with a spread operator when Node v4 support drops
const args = new Array(arguments.length)
args[0] = ev
args[1] = data
if (arguments.length > 2) {
for (let i = 2; i < arguments.length; i++) {
args[i] = arguments[i]
}
}
try {
return super.emit.apply(this, args)
} finally {
if (!isEndish(ev))
this[MAYBE_EMIT_END]()
else
this.removeAllListeners(ev)
}
}
// const all = await stream.collect()
collect () {
const buf = []
buf.dataLength = 0
this.on('data', c => {
buf.push(c)
buf.dataLength += c.length
})
return this.promise().then(() => buf)
}
// const data = await stream.concat()
concat () {
return this[OBJECTMODE]
? Promise.reject(new Error('cannot concat in objectMode'))
: this.collect().then(buf =>
this[OBJECTMODE]
? Promise.reject(new Error('cannot concat in objectMode'))
: this[ENCODING] ? buf.join('') : B.concat(buf, buf.dataLength))
}
// stream.promise().then(() => done, er => emitted error)
promise () {
return new Promise((resolve, reject) => {
this.on(DESTROYED, () => reject(new Error('stream destroyed')))
this.on('end', () => resolve())
this.on('error', er => reject(er))
})
}
// for await (let chunk of stream)
[ASYNCITERATOR] () {
const next = () => {
const res = this.read()
if (res !== null)
return Promise.resolve({ done: false, value: res })
if (this[EOF])
return Promise.resolve({ done: true })
let resolve = null
let reject = null
const onerr = er => {
this.removeListener('data', ondata)
this.removeListener('end', onend)
reject(er)
}
const ondata = value => {
this.removeListener('error', onerr)
this.removeListener('end', onend)
this.pause()
resolve({ value: value, done: !!this[EOF] })
}
const onend = () => {
this.removeListener('error', onerr)
this.removeListener('data', ondata)
resolve({ done: true })
}
const ondestroy = () => onerr(new Error('stream destroyed'))
return new Promise((res, rej) => {
reject = rej
resolve = res
this.once(DESTROYED, ondestroy)
this.once('error', onerr)
this.once('end', onend)
this.once('data', ondata)
})
}
return { next }
}
// for (let chunk of stream)
[ITERATOR] () {
const next = () => {
const value = this.read()
const done = value === null
return { value, done }
}
return { next }
}
destroy (er) {
if (this[DESTROYED]) {
if (er)
this.emit('error', er)
else
this.emit(DESTROYED)
return this
}
this[DESTROYED] = true
// throw away all buffered data, it's never coming out
this.buffer = new Yallist()
this[BUFFERLENGTH] = 0
if (typeof this.close === 'function' && !this[CLOSED])
this.close()
if (er)
this.emit('error', er)
else // if no error to emit, still reject pending promises
this.emit(DESTROYED)
return this
}
static isStream (s) {
return !!s && (s instanceof Minipass || s instanceof EE && (
typeof s.pipe === 'function' || // readable
(typeof s.write === 'function' && typeof s.end === 'function') // writable
))
}
}

72
node_modules/node-pre-gyp/node_modules/minipass/package.json generated vendored Normal file
View File

@@ -0,0 +1,72 @@
{
"_from": "minipass@^2.8.6",
"_id": "minipass@2.9.0",
"_inBundle": false,
"_integrity": "sha512-wxfUjg9WebH+CUDX/CdbRlh5SmfZiy/hpkxaRI16Y9W56Pa75sWgd/rvFilSgrauD9NyFymP/+JFV3KwzIsJeg==",
"_location": "/node-pre-gyp/minipass",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "minipass@^2.8.6",
"name": "minipass",
"escapedName": "minipass",
"rawSpec": "^2.8.6",
"saveSpec": null,
"fetchSpec": "^2.8.6"
},
"_requiredBy": [
"/node-pre-gyp/fs-minipass",
"/node-pre-gyp/minizlib",
"/node-pre-gyp/tar"
],
"_resolved": "https://registry.npmjs.org/minipass/-/minipass-2.9.0.tgz",
"_shasum": "e713762e7d3e32fed803115cf93e04bca9fcc9a6",
"_spec": "minipass@^2.8.6",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp\\node_modules\\tar",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bugs": {
"url": "https://github.com/isaacs/minipass/issues"
},
"bundleDependencies": false,
"dependencies": {
"safe-buffer": "^5.1.2",
"yallist": "^3.0.0"
},
"deprecated": false,
"description": "minimal implementation of a PassThrough stream",
"devDependencies": {
"end-of-stream": "^1.4.0",
"tap": "^14.6.5",
"through2": "^2.0.3"
},
"files": [
"index.js"
],
"homepage": "https://github.com/isaacs/minipass#readme",
"keywords": [
"passthrough",
"stream"
],
"license": "ISC",
"main": "index.js",
"name": "minipass",
"repository": {
"type": "git",
"url": "git+https://github.com/isaacs/minipass.git"
},
"scripts": {
"postpublish": "git push origin --follow-tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap"
},
"tap": {
"check-coverage": true
},
"version": "2.9.0"
}

26
node_modules/node-pre-gyp/node_modules/minizlib/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,26 @@
Minizlib was created by Isaac Z. Schlueter.
It is a derivative work of the Node.js project.
"""
Copyright Isaac Z. Schlueter and Contributors
Copyright Node.js contributors. All rights reserved.
Copyright Joyent, Inc. and other Node contributors. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""

53
node_modules/node-pre-gyp/node_modules/minizlib/README.md generated vendored Normal file
View File

@@ -0,0 +1,53 @@
# minizlib
A fast zlib stream built on [minipass](http://npm.im/minipass) and
Node.js's zlib binding.
This module was created to serve the needs of
[node-tar](http://npm.im/tar) and
[minipass-fetch](http://npm.im/minipass-fetch).
Brotli is supported in versions of node with a Brotli binding.
## How does this differ from the streams in `require('zlib')`?
First, there are no convenience methods to compress or decompress a
buffer. If you want those, use the built-in `zlib` module. This is
only streams. That being said, Minipass streams make it fairly easy to
use as one-liners: `new zlib.Deflate().end(data).read()` will return the
deflate compressed result.
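For instance, that one-liner as a full round-trip sketch:

```js
const zlib = require('minizlib')
const data = Buffer.from('hello, world')
const compressed = new zlib.Deflate().end(data).read()
const restored = new zlib.Inflate().end(compressed).read()
// restored.equals(data) === true
```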
This module compresses and decompresses the data as fast as you feed
it in. It is synchronous, and runs on the main process thread. Zlib
and Brotli operations can be high CPU, but they're very fast, and doing it
this way means much less bookkeeping and artificial deferral.
Node's built in zlib streams are built on top of `stream.Transform`.
They do the maximally safe thing with respect to consistent
asynchrony, buffering, and backpressure.
See [Minipass](http://npm.im/minipass) for more on the differences between
Node.js core streams and Minipass streams, and the convenience methods
provided by that class.
## Classes
- Deflate
- Inflate
- Gzip
- Gunzip
- DeflateRaw
- InflateRaw
- Unzip
- BrotliCompress (Node v10 and higher)
- BrotliDecompress (Node v10 and higher)
## USAGE
```js
const zlib = require('minizlib')
const input = sourceOfCompressedData()
const decode = new zlib.BrotliDecompress()
const output = whereToWriteTheDecodedData()
input.pipe(decode).pipe(output)
```

115
node_modules/node-pre-gyp/node_modules/minizlib/constants.js generated vendored Normal file
View File

@@ -0,0 +1,115 @@
// Update with any zlib constants that are added or changed in the future.
// Node v6 didn't export this, so we just hard code the version and rely
// on all the other hard-coded values from zlib v4736. When node v6
// support drops, we can just export the realZlibConstants object.
const realZlibConstants = require('zlib').constants ||
/* istanbul ignore next */ { ZLIB_VERNUM: 4736 }
module.exports = Object.freeze(Object.assign(Object.create(null), {
Z_NO_FLUSH: 0,
Z_PARTIAL_FLUSH: 1,
Z_SYNC_FLUSH: 2,
Z_FULL_FLUSH: 3,
Z_FINISH: 4,
Z_BLOCK: 5,
Z_OK: 0,
Z_STREAM_END: 1,
Z_NEED_DICT: 2,
Z_ERRNO: -1,
Z_STREAM_ERROR: -2,
Z_DATA_ERROR: -3,
Z_MEM_ERROR: -4,
Z_BUF_ERROR: -5,
Z_VERSION_ERROR: -6,
Z_NO_COMPRESSION: 0,
Z_BEST_SPEED: 1,
Z_BEST_COMPRESSION: 9,
Z_DEFAULT_COMPRESSION: -1,
Z_FILTERED: 1,
Z_HUFFMAN_ONLY: 2,
Z_RLE: 3,
Z_FIXED: 4,
Z_DEFAULT_STRATEGY: 0,
DEFLATE: 1,
INFLATE: 2,
GZIP: 3,
GUNZIP: 4,
DEFLATERAW: 5,
INFLATERAW: 6,
UNZIP: 7,
BROTLI_DECODE: 8,
BROTLI_ENCODE: 9,
Z_MIN_WINDOWBITS: 8,
Z_MAX_WINDOWBITS: 15,
Z_DEFAULT_WINDOWBITS: 15,
Z_MIN_CHUNK: 64,
Z_MAX_CHUNK: Infinity,
Z_DEFAULT_CHUNK: 16384,
Z_MIN_MEMLEVEL: 1,
Z_MAX_MEMLEVEL: 9,
Z_DEFAULT_MEMLEVEL: 8,
Z_MIN_LEVEL: -1,
Z_MAX_LEVEL: 9,
Z_DEFAULT_LEVEL: -1,
BROTLI_OPERATION_PROCESS: 0,
BROTLI_OPERATION_FLUSH: 1,
BROTLI_OPERATION_FINISH: 2,
BROTLI_OPERATION_EMIT_METADATA: 3,
BROTLI_MODE_GENERIC: 0,
BROTLI_MODE_TEXT: 1,
BROTLI_MODE_FONT: 2,
BROTLI_DEFAULT_MODE: 0,
BROTLI_MIN_QUALITY: 0,
BROTLI_MAX_QUALITY: 11,
BROTLI_DEFAULT_QUALITY: 11,
BROTLI_MIN_WINDOW_BITS: 10,
BROTLI_MAX_WINDOW_BITS: 24,
BROTLI_LARGE_MAX_WINDOW_BITS: 30,
BROTLI_DEFAULT_WINDOW: 22,
BROTLI_MIN_INPUT_BLOCK_BITS: 16,
BROTLI_MAX_INPUT_BLOCK_BITS: 24,
BROTLI_PARAM_MODE: 0,
BROTLI_PARAM_QUALITY: 1,
BROTLI_PARAM_LGWIN: 2,
BROTLI_PARAM_LGBLOCK: 3,
BROTLI_PARAM_DISABLE_LITERAL_CONTEXT_MODELING: 4,
BROTLI_PARAM_SIZE_HINT: 5,
BROTLI_PARAM_LARGE_WINDOW: 6,
BROTLI_PARAM_NPOSTFIX: 7,
BROTLI_PARAM_NDIRECT: 8,
BROTLI_DECODER_RESULT_ERROR: 0,
BROTLI_DECODER_RESULT_SUCCESS: 1,
BROTLI_DECODER_RESULT_NEEDS_MORE_INPUT: 2,
BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT: 3,
BROTLI_DECODER_PARAM_DISABLE_RING_BUFFER_REALLOCATION: 0,
BROTLI_DECODER_PARAM_LARGE_WINDOW: 1,
BROTLI_DECODER_NO_ERROR: 0,
BROTLI_DECODER_SUCCESS: 1,
BROTLI_DECODER_NEEDS_MORE_INPUT: 2,
BROTLI_DECODER_NEEDS_MORE_OUTPUT: 3,
BROTLI_DECODER_ERROR_FORMAT_EXUBERANT_NIBBLE: -1,
BROTLI_DECODER_ERROR_FORMAT_RESERVED: -2,
BROTLI_DECODER_ERROR_FORMAT_EXUBERANT_META_NIBBLE: -3,
BROTLI_DECODER_ERROR_FORMAT_SIMPLE_HUFFMAN_ALPHABET: -4,
BROTLI_DECODER_ERROR_FORMAT_SIMPLE_HUFFMAN_SAME: -5,
BROTLI_DECODER_ERROR_FORMAT_CL_SPACE: -6,
BROTLI_DECODER_ERROR_FORMAT_HUFFMAN_SPACE: -7,
BROTLI_DECODER_ERROR_FORMAT_CONTEXT_MAP_REPEAT: -8,
BROTLI_DECODER_ERROR_FORMAT_BLOCK_LENGTH_1: -9,
BROTLI_DECODER_ERROR_FORMAT_BLOCK_LENGTH_2: -10,
BROTLI_DECODER_ERROR_FORMAT_TRANSFORM: -11,
BROTLI_DECODER_ERROR_FORMAT_DICTIONARY: -12,
BROTLI_DECODER_ERROR_FORMAT_WINDOW_BITS: -13,
BROTLI_DECODER_ERROR_FORMAT_PADDING_1: -14,
BROTLI_DECODER_ERROR_FORMAT_PADDING_2: -15,
BROTLI_DECODER_ERROR_FORMAT_DISTANCE: -16,
BROTLI_DECODER_ERROR_DICTIONARY_NOT_SET: -19,
BROTLI_DECODER_ERROR_INVALID_ARGUMENTS: -20,
BROTLI_DECODER_ERROR_ALLOC_CONTEXT_MODES: -21,
BROTLI_DECODER_ERROR_ALLOC_TREE_GROUPS: -22,
BROTLI_DECODER_ERROR_ALLOC_CONTEXT_MAP: -25,
BROTLI_DECODER_ERROR_ALLOC_RING_BUFFER_1: -26,
BROTLI_DECODER_ERROR_ALLOC_RING_BUFFER_2: -27,
BROTLI_DECODER_ERROR_ALLOC_BLOCK_TYPE_TREES: -30,
BROTLI_DECODER_ERROR_UNREACHABLE: -31,
}, realZlibConstants))


@@ -0,0 +1,320 @@
'use strict'
const assert = require('assert')
const Buffer = require('buffer').Buffer
const realZlib = require('zlib')
const constants = exports.constants = require('./constants.js')
const Minipass = require('minipass')
const OriginalBufferConcat = Buffer.concat
class ZlibError extends Error {
constructor (err) {
super('zlib: ' + err.message)
this.code = err.code
this.errno = err.errno
/* istanbul ignore if */
if (!this.code)
this.code = 'ZLIB_ERROR'
this.message = 'zlib: ' + err.message
Error.captureStackTrace(this, this.constructor)
}
get name () {
return 'ZlibError'
}
}
// the Zlib class they all inherit from
// This thing manages the queue of requests, and returns
// true or false if there is anything in the queue when
// you call the .write() method.
const _opts = Symbol('opts')
const _flushFlag = Symbol('flushFlag')
const _finishFlushFlag = Symbol('finishFlushFlag')
const _fullFlushFlag = Symbol('fullFlushFlag')
const _handle = Symbol('handle')
const _onError = Symbol('onError')
const _sawError = Symbol('sawError')
const _level = Symbol('level')
const _strategy = Symbol('strategy')
const _ended = Symbol('ended')
const _defaultFullFlush = Symbol('_defaultFullFlush')
class ZlibBase extends Minipass {
constructor (opts, mode) {
if (!opts || typeof opts !== 'object')
throw new TypeError('invalid options for ZlibBase constructor')
super(opts)
this[_ended] = false
this[_opts] = opts
this[_flushFlag] = opts.flush
this[_finishFlushFlag] = opts.finishFlush
// this will throw if any options are invalid for the class selected
try {
this[_handle] = new realZlib[mode](opts)
} catch (er) {
// make sure that all errors get decorated properly
throw new ZlibError(er)
}
this[_onError] = (err) => {
this[_sawError] = true
// there is no way to cleanly recover.
// continuing only obscures problems.
this.close()
this.emit('error', err)
}
this[_handle].on('error', er => this[_onError](new ZlibError(er)))
this.once('end', () => this.close())
}
close () {
if (this[_handle]) {
this[_handle].close()
this[_handle] = null
this.emit('close')
}
}
reset () {
if (!this[_sawError]) {
assert(this[_handle], 'zlib binding closed')
return this[_handle].reset()
}
}
flush (flushFlag) {
if (this.ended)
return
if (typeof flushFlag !== 'number')
flushFlag = this[_fullFlushFlag]
this.write(Object.assign(Buffer.alloc(0), { [_flushFlag]: flushFlag }))
}
end (chunk, encoding, cb) {
if (chunk)
this.write(chunk, encoding)
this.flush(this[_finishFlushFlag])
this[_ended] = true
return super.end(null, null, cb)
}
get ended () {
return this[_ended]
}
write (chunk, encoding, cb) {
// process the chunk using the sync process
// then super.write() all the outputted chunks
if (typeof encoding === 'function')
cb = encoding, encoding = 'utf8'
if (typeof chunk === 'string')
chunk = Buffer.from(chunk, encoding)
if (this[_sawError])
return
assert(this[_handle], 'zlib binding closed')
// _processChunk tries to .close() the native handle after it's done, so we
// intercept that by temporarily making it a no-op.
const nativeHandle = this[_handle]._handle
const originalNativeClose = nativeHandle.close
nativeHandle.close = () => {}
const originalClose = this[_handle].close
this[_handle].close = () => {}
// It also calls `Buffer.concat()` at the end, which may be convenient
// for some, but which we are not interested in as it slows us down.
Buffer.concat = (args) => args
let result
try {
const flushFlag = typeof chunk[_flushFlag] === 'number'
? chunk[_flushFlag] : this[_flushFlag]
result = this[_handle]._processChunk(chunk, flushFlag)
// if we don't throw, reset it back how it was
Buffer.concat = OriginalBufferConcat
} catch (err) {
// or if we do, put Buffer.concat() back before we emit error
// Error events call into user code, which may call Buffer.concat()
Buffer.concat = OriginalBufferConcat
this[_onError](new ZlibError(err))
} finally {
if (this[_handle]) {
// Core zlib resets `_handle` to null after attempting to close the
// native handle. Our no-op handler prevented actual closure, but we
// need to restore the `._handle` property.
this[_handle]._handle = nativeHandle
nativeHandle.close = originalNativeClose
this[_handle].close = originalClose
// `_processChunk()` adds an 'error' listener. If we don't remove it
// after each call, these handlers start piling up.
this[_handle].removeAllListeners('error')
}
}
let writeReturn
if (result) {
if (Array.isArray(result) && result.length > 0) {
// The first buffer is always `handle._outBuffer`, which would be
// re-used for later invocations; so, we always have to copy that one.
writeReturn = super.write(Buffer.from(result[0]))
for (let i = 1; i < result.length; i++) {
writeReturn = super.write(result[i])
}
} else {
writeReturn = super.write(Buffer.from(result))
}
}
if (cb)
cb()
return writeReturn
}
}
class Zlib extends ZlibBase {
constructor (opts, mode) {
opts = opts || {}
opts.flush = opts.flush || constants.Z_NO_FLUSH
opts.finishFlush = opts.finishFlush || constants.Z_FINISH
super(opts, mode)
this[_fullFlushFlag] = constants.Z_FULL_FLUSH
this[_level] = opts.level
this[_strategy] = opts.strategy
}
params (level, strategy) {
if (this[_sawError])
return
if (!this[_handle])
throw new Error('cannot switch params when binding is closed')
// no way to test this without also not supporting params at all
/* istanbul ignore if */
if (!this[_handle].params)
throw new Error('not supported in this implementation')
if (this[_level] !== level || this[_strategy] !== strategy) {
this.flush(constants.Z_SYNC_FLUSH)
assert(this[_handle], 'zlib binding closed')
// .params() calls .flush(), but the latter is always async in the
// core zlib. We override .flush() temporarily to intercept that and
// flush synchronously.
const origFlush = this[_handle].flush
this[_handle].flush = (flushFlag, cb) => {
this.flush(flushFlag)
cb()
}
try {
this[_handle].params(level, strategy)
} finally {
this[_handle].flush = origFlush
}
/* istanbul ignore else */
if (this[_handle]) {
this[_level] = level
this[_strategy] = strategy
}
}
}
}
// minimal 2-byte header
class Deflate extends Zlib {
constructor (opts) {
super(opts, 'Deflate')
}
}
class Inflate extends Zlib {
constructor (opts) {
super(opts, 'Inflate')
}
}
// gzip - bigger header, same deflate compression
class Gzip extends Zlib {
constructor (opts) {
super(opts, 'Gzip')
}
}
class Gunzip extends Zlib {
constructor (opts) {
super(opts, 'Gunzip')
}
}
// raw - no header
class DeflateRaw extends Zlib {
constructor (opts) {
super(opts, 'DeflateRaw')
}
}
class InflateRaw extends Zlib {
constructor (opts) {
super(opts, 'InflateRaw')
}
}
// auto-detect header.
class Unzip extends Zlib {
constructor (opts) {
super(opts, 'Unzip')
}
}
class Brotli extends ZlibBase {
constructor (opts, mode) {
opts = opts || {}
opts.flush = opts.flush || constants.BROTLI_OPERATION_PROCESS
opts.finishFlush = opts.finishFlush || constants.BROTLI_OPERATION_FINISH
super(opts, mode)
this[_fullFlushFlag] = constants.BROTLI_OPERATION_FLUSH
}
}
class BrotliCompress extends Brotli {
constructor (opts) {
super(opts, 'BrotliCompress')
}
}
class BrotliDecompress extends Brotli {
constructor (opts) {
super(opts, 'BrotliDecompress')
}
}
exports.Deflate = Deflate
exports.Inflate = Inflate
exports.Gzip = Gzip
exports.Gunzip = Gunzip
exports.DeflateRaw = DeflateRaw
exports.InflateRaw = InflateRaw
exports.Unzip = Unzip
/* istanbul ignore else */
if (typeof realZlib.BrotliCompress === 'function') {
exports.BrotliCompress = BrotliCompress
exports.BrotliDecompress = BrotliDecompress
} else {
exports.BrotliCompress = exports.BrotliDecompress = class {
constructor () {
throw new Error('Brotli is not supported in this version of Node.js')
}
}
}


@@ -0,0 +1,71 @@
{
"_from": "minizlib@^1.2.1",
"_id": "minizlib@1.3.3",
"_inBundle": false,
"_integrity": "sha512-6ZYMOEnmVsdCeTJVE0W9ZD+pVnE8h9Hma/iOwwRDsdQoePpoX56/8B6z3P9VNwppJuBKNRuFDRNRqRWexT9G9Q==",
"_location": "/node-pre-gyp/minizlib",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "minizlib@^1.2.1",
"name": "minizlib",
"escapedName": "minizlib",
"rawSpec": "^1.2.1",
"saveSpec": null,
"fetchSpec": "^1.2.1"
},
"_requiredBy": [
"/node-pre-gyp/tar"
],
"_resolved": "https://registry.npmjs.org/minizlib/-/minizlib-1.3.3.tgz",
"_shasum": "2290de96818a34c29551c8a8d301216bd65a861d",
"_spec": "minizlib@^1.2.1",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp\\node_modules\\tar",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bugs": {
"url": "https://github.com/isaacs/minizlib/issues"
},
"bundleDependencies": false,
"dependencies": {
"minipass": "^2.9.0"
},
"deprecated": false,
"description": "A small fast zlib stream built on [minipass](http://npm.im/minipass) and Node.js's zlib binding.",
"devDependencies": {
"tap": "^12.0.1"
},
"files": [
"index.js",
"constants.js"
],
"homepage": "https://github.com/isaacs/minizlib#readme",
"keywords": [
"zlib",
"gzip",
"gunzip",
"deflate",
"inflate",
"compression",
"zip",
"unzip"
],
"license": "MIT",
"main": "index.js",
"name": "minizlib",
"repository": {
"type": "git",
"url": "git+https://github.com/isaacs/minizlib.git"
},
"scripts": {
"postpublish": "git push origin --all; git push origin --tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap test/*.js --100 -J"
},
"version": "1.3.3"
}


@@ -0,0 +1,58 @@
### v4.0.1 (2016-12-14)
#### WHOOPS
* [`fb9b1ce`](https://github.com/npm/nopt/commit/fb9b1ce57b3c69b4f7819015be87719204f77ef6)
Merged so many patches at once that the code fencing
[@adius](https://github.com/adius) added got broken. Sorry,
[@adius](https://github.com/adius)!
([@othiym23](https://github.com/othiym23))
### v4.0.0 (2016-12-13)
#### BREAKING CHANGES
* [`651d447`](https://github.com/npm/nopt/commit/651d4473946096d341a480bbe56793de3fc706aa)
When parsing String-typed arguments, if the next value is `""`, don't simply
swallow it. ([@samjonester](https://github.com/samjonester))
#### PERFORMANCE TWEAKS
* [`3370ce8`](https://github.com/npm/nopt/commit/3370ce87a7618ba228883861db84ddbcdff252a9)
Simplify initialization. ([@elidoran](https://github.com/elidoran))
* [`356e58e`](https://github.com/npm/nopt/commit/356e58e3b3b431a4b1af7fd7bdee44c2c0526a09)
Store `Array.isArray(types[arg])` for reuse.
([@elidoran](https://github.com/elidoran))
* [`0d95e90`](https://github.com/npm/nopt/commit/0d95e90515844f266015b56d2c80b94e5d14a07e)
Interpret single-item type arrays as a single type.
([@samjonester](https://github.com/samjonester))
* [`07c69d3`](https://github.com/npm/nopt/commit/07c69d38b5186450941fbb505550becb78a0e925)
Simplify key-value extraction. ([@elidoran](https://github.com/elidoran))
* [`39b6e5c`](https://github.com/npm/nopt/commit/39b6e5c65ac47f60cd43a1fbeece5cd4c834c254)
Only call `Date.parse(val)` once. ([@elidoran](https://github.com/elidoran))
* [`934943d`](https://github.com/npm/nopt/commit/934943dffecb55123a2b15959fe2a359319a5dbd)
Use `osenv.home()` to find a user's home directory instead of assuming it's
always `$HOME`. ([@othiym23](https://github.com/othiym23))
#### TEST & CI IMPROVEMENTS
* [`326ffff`](https://github.com/npm/nopt/commit/326ffff7f78a00bcd316adecf69075f8a8093619)
Fix `/tmp` test to work on Windows.
([@elidoran](https://github.com/elidoran))
* [`c89d31a`](https://github.com/npm/nopt/commit/c89d31a49d14f2238bc6672db08da697bbc57f1b)
Only run Windows tests on Windows, only run Unix tests on a Unix.
([@elidoran](https://github.com/elidoran))
* [`affd3d1`](https://github.com/npm/nopt/commit/affd3d1d0addffa93006397b2013b18447339366)
Refresh Travis to run the tests against the currently-supported batch of npm
versions. ([@helio-frota](https://github.com/helio-frota))
* [`55f9449`](https://github.com/npm/nopt/commit/55f94497d163ed4d16dd55fd6c4fb95cc440e66d)
`tap@8.0.1` ([@othiym23](https://github.com/othiym23))
#### DOC TWEAKS
* [`5271229`](https://github.com/npm/nopt/commit/5271229ee7c810217dd51616c086f5d9ab224581)
Use JavaScript code block for syntax highlighting.
([@adius](https://github.com/adius))
* [`c0d156f`](https://github.com/npm/nopt/commit/c0d156f229f9994c5dfcec4a8886eceff7a07682)
The code sample in the README had `many2: [ oneThing ]`, and now it has
`many2: [ two, things ]`. ([@silkentrance](https://github.com/silkentrance))

node_modules/node-pre-gyp/node_modules/nopt/LICENSE generated vendored Normal file

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

node_modules/node-pre-gyp/node_modules/nopt/README.md generated vendored Normal file

@@ -0,0 +1,213 @@
If you want to write an option parser, and have it be good, there are
two ways to do it. The Right Way, and the Wrong Way.
The Wrong Way is to sit down and write an option parser. We've all done
that.
The Right Way is to write some complex configurable program with so many
options that you hit the limit of your frustration just trying to
manage them all, and defer it with duct-tape solutions until you see
exactly to the core of the problem, and finally snap and write an
awesome option parser.
If you want to write an option parser, don't write an option parser.
Write a package manager, or a source control system, or a service
restarter, or an operating system. You probably won't end up with a
good one of those, but if you don't give up, and you are relentless and
diligent enough in your procrastination, you may just end up with a very
nice option parser.
## USAGE
```javascript
// my-program.js
var nopt = require("nopt")
, Stream = require("stream").Stream
, path = require("path")
, knownOpts = { "foo" : [String, null]
, "bar" : [Stream, Number]
, "baz" : path
, "bloo" : [ "big", "medium", "small" ]
, "flag" : Boolean
, "pick" : Boolean
, "many1" : [String, Array]
, "many2" : [path, Array]
}
, shortHands = { "foofoo" : ["--foo", "Mr. Foo"]
, "b7" : ["--bar", "7"]
, "m" : ["--bloo", "medium"]
, "p" : ["--pick"]
, "f" : ["--flag"]
}
// everything is optional.
// knownOpts and shorthands default to {}
// arg list defaults to process.argv
// slice defaults to 2
, parsed = nopt(knownOpts, shortHands, process.argv, 2)
console.log(parsed)
```
This would give you support for any of the following:
```console
$ node my-program.js --foo "blerp" --no-flag
{ "foo" : "blerp", "flag" : false }
$ node my-program.js ---bar 7 --foo "Mr. Hand" --flag
{ bar: 7, foo: "Mr. Hand", flag: true }
$ node my-program.js --foo "blerp" -f -----p
{ foo: "blerp", flag: true, pick: true }
$ node my-program.js -fp --foofoo
{ foo: "Mr. Foo", flag: true, pick: true }
$ node my-program.js --foofoo -- -fp # -- stops the flag parsing.
{ foo: "Mr. Foo", argv: { remain: ["-fp"] } }
$ node my-program.js --blatzk -fp # unknown opts are ok.
{ blatzk: true, flag: true, pick: true }
$ node my-program.js --blatzk=1000 -fp # but you need to use = if they have a value
{ blatzk: 1000, flag: true, pick: true }
$ node my-program.js --no-blatzk -fp # unless they start with "no-"
{ blatzk: false, flag: true, pick: true }
$ node my-program.js --baz b/a/z # known paths are resolved.
{ baz: "/Users/isaacs/b/a/z" }
# if Array is one of the types, then it can take many
# values, and will always be an array. The other types provided
# specify what types are allowed in the list.
$ node my-program.js --many1 5 --many1 null --many1 foo
{ many1: ["5", "null", "foo"] }
$ node my-program.js --many2 foo --many2 bar
{ many2: ["/path/to/foo", "/path/to/bar"] }
```
Read the tests at the bottom of `lib/nopt.js` for more examples of
what this puppy can do.
## Types
The following types are supported, and defined on `nopt.typeDefs`
* String: A normal string. No parsing is done.
* path: A file system path. Gets resolved against cwd if not absolute.
* url: A url. If it doesn't parse, it isn't accepted.
* Number: Must be numeric.
* Date: Must parse as a date. If it does, and `Date` is one of the options,
then it will return a Date object, not a string.
* Boolean: Must be either `true` or `false`. If an option is a boolean,
then it does not need a value, and its presence will imply `true` as
the value. To negate boolean flags, do `--no-whatever` or `--whatever
false`
* NaN: Means that the option is strictly not allowed. Any value will
fail.
* Stream: An object matching the "Stream" class in node. Valuable
for use when validating programmatically. (npm uses this to let you
supply any WriteStream on the `outfd` and `logfd` config options.)
* Array: If `Array` is specified as one of the types, then the value
will be parsed as a list of options. This means that multiple values
can be specified, and that the value will always be an array.
If a type is an array of values not on this list, then those are
considered valid values. For instance, in the example above, the
`--bloo` option can only be one of `"big"`, `"medium"`, or `"small"`,
and any other value will be rejected.
When parsing unknown fields, `"true"`, `"false"`, and `"null"` will be
interpreted as their JavaScript equivalents.
You can also mix types and values, or multiple types, in a list. For
instance `{ blah: [Number, null] }` would allow a value to be set to
either a Number or null. When types are ordered, this implies a
preference, and the first type that can be used to properly interpret
the value will be used.
To define a new type, add it to `nopt.typeDefs`. Each item in that
hash is an object with a `type` member and a `validate` method. The
`type` member is an object that matches what goes in the type list. The
`validate` method is a function that gets called with `validate(data,
key, val)`. Validate methods should assign `data[key]` to the valid
value of `val` if it can be handled properly, or return boolean
`false` if it cannot.
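For example, here is a hedged sketch of a custom type (the "port" name and
bounds are made up for illustration; the shape follows the contract above):
```javascript
var nopt = require("nopt")

// Sentinel object: this is what goes in a type list.
var Port = { isPortType: true }

nopt.typeDefs.port =
  { type: Port
  , validate: function (data, key, val) {
      var n = parseInt(val, 10)
      if (!(n >= 1 && n <= 65535)) return false // reject out-of-range values
      data[key] = n
    }
  }

var parsed = nopt({ "listen" : Port }, {}, ["--listen", "8080"], 0)
// parsed.listen === 8080
```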
You can also call `nopt.clean(data, types, typeDefs)` to clean up a
config object and remove its invalid properties.
## Error Handling
By default, nopt outputs a warning to standard error when invalid values for
known options are found. You can change this behavior by assigning a method
to `nopt.invalidHandler`. This method will be called as
`nopt.invalidHandler(key, val, types)` with the offending key and value.
If no `nopt.invalidHandler` is assigned, then it will console.error
its whining. If it is assigned to boolean `false` then the warning is
suppressed.
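A hedged sketch (the message wording is made up):
```javascript
var nopt = require("nopt")

nopt.invalidHandler = function (key, val /*, types */) {
  console.error("dropping --" + key + ": " + JSON.stringify(val))
}

// "nope" is not a Number, so the handler fires and the key is dropped.
var parsed = nopt({ "count" : Number }, {}, ["--count", "nope"], 0)
// parsed.count === undefined
```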
## Abbreviations
Yes, they are supported. If you define options like this:
```javascript
{ "foolhardyelephants" : Boolean
, "pileofmonkeys" : Boolean }
```
Then this will work:
```bash
node program.js --foolhar --pil
node program.js --no-f --pileofmon
# etc.
```
## Shorthands
Shorthands are a hash of shorter option names to a snippet of args that
they expand to.
If multiple one-character shorthands are all combined, and the
combination does not unambiguously match any other option or shorthand,
then they will be broken up into their constituent parts. For example:
```json
{ "s" : ["--loglevel", "silent"]
, "g" : "--global"
, "f" : "--force"
, "p" : "--parseable"
, "l" : "--long"
}
```
```bash
npm ls -sgflp
# just like doing this:
npm ls --loglevel silent --global --force --long --parseable
```
## The Rest of the args
The config object returned by nopt is given a special member called
`argv`, which is an object with the following fields:
* `remain`: The remaining args after all the parsing has occurred.
* `original`: The args as they originally appeared.
* `cooked`: The args after flags and shorthands are expanded.
## Slicing
Node programs are called with more or less the exact argv as it appears
in C land, after the v8 and node-specific options have been plucked off.
As such, `argv[0]` is always `node` and `argv[1]` is always the
JavaScript program being run.
That's usually not very useful to you. So they're sliced off by
default. If you want them, then you can pass in `0` as the last
argument, or any other number that you'd like to slice off the start of
the list.
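So, for instance, parsing a plain array of strings looks like this (a
minimal sketch):
```javascript
var nopt = require("nopt")

// Pass 0 so nothing is sliced off the front of the list.
var parsed = nopt({ "flag" : Boolean }, {}, ["--flag", "extra"], 0)
// parsed.flag === true
// parsed.argv.remain === ["extra"]
```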


@@ -0,0 +1,54 @@
#!/usr/bin/env node
var nopt = require("../lib/nopt")
, path = require("path")
, types = { num: Number
, bool: Boolean
, help: Boolean
, list: Array
, "num-list": [Number, Array]
, "str-list": [String, Array]
, "bool-list": [Boolean, Array]
, str: String
, clear: Boolean
, config: Boolean
, length: Number
, file: path
}
, shorthands = { s: [ "--str", "astring" ]
, b: [ "--bool" ]
, nb: [ "--no-bool" ]
, tft: [ "--bool-list", "--no-bool-list", "--bool-list", "true" ]
, "?": ["--help"]
, h: ["--help"]
, H: ["--help"]
, n: [ "--num", "125" ]
, c: ["--config"]
, l: ["--length"]
, f: ["--file"]
}
, parsed = nopt( types
, shorthands
, process.argv
, 2 )
console.log("parsed", parsed)
if (parsed.help) {
console.log("")
console.log("nopt cli tester")
console.log("")
console.log("types")
console.log(Object.keys(types).map(function M (t) {
var type = types[t]
if (Array.isArray(type)) {
return [t, type.map(function (type) { return type.name })]
}
return [t, type && type.name]
}).reduce(function (s, i) {
s[i[0]] = i[1]
return s
}, {}))
console.log("")
console.log("shorthands")
console.log(shorthands)
}


@@ -0,0 +1,65 @@
{
"_from": "nopt@^4.0.3",
"_id": "nopt@4.0.3",
"_inBundle": false,
"_integrity": "sha512-CvaGwVMztSMJLOeXPrez7fyfObdZqNUK1cPAEzLHrTybIua9pMdmmPR5YwtfNftIOMv3DPUhFaxsZMNTQO20Kg==",
"_location": "/node-pre-gyp/nopt",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "nopt@^4.0.3",
"name": "nopt",
"escapedName": "nopt",
"rawSpec": "^4.0.3",
"saveSpec": null,
"fetchSpec": "^4.0.3"
},
"_requiredBy": [
"/node-pre-gyp"
],
"_resolved": "https://registry.npmjs.org/nopt/-/nopt-4.0.3.tgz",
"_shasum": "a375cad9d02fd921278d954c2254d5aa57e15e48",
"_spec": "nopt@^4.0.3",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bin": {
"nopt": "bin/nopt.js"
},
"bugs": {
"url": "https://github.com/npm/nopt/issues"
},
"bundleDependencies": false,
"dependencies": {
"abbrev": "1",
"osenv": "^0.1.4"
},
"deprecated": false,
"description": "Option parsing for Node, supporting types, shorthands, etc. Used by npm.",
"devDependencies": {
"tap": "^14.10.6"
},
"files": [
"bin",
"lib"
],
"homepage": "https://github.com/npm/nopt#readme",
"license": "ISC",
"main": "lib/nopt.js",
"name": "nopt",
"repository": {
"type": "git",
"url": "git+https://github.com/npm/nopt.git"
},
"scripts": {
"postversion": "npm publish",
"prepublishOnly": "git push origin --follow-tags",
"preversion": "npm test",
"test": "tap test/*.js"
},
"version": "4.0.3"
}

node_modules/node-pre-gyp/node_modules/rimraf/LICENSE generated vendored Normal file

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

node_modules/node-pre-gyp/node_modules/rimraf/README.md generated vendored Normal file

@@ -0,0 +1,101 @@
[![Build Status](https://travis-ci.org/isaacs/rimraf.svg?branch=master)](https://travis-ci.org/isaacs/rimraf) [![Dependency Status](https://david-dm.org/isaacs/rimraf.svg)](https://david-dm.org/isaacs/rimraf) [![devDependency Status](https://david-dm.org/isaacs/rimraf/dev-status.svg)](https://david-dm.org/isaacs/rimraf#info=devDependencies)
The [UNIX command](http://en.wikipedia.org/wiki/Rm_(Unix)) `rm -rf` for node.
Install with `npm install rimraf`, or just drop rimraf.js somewhere.
## API
`rimraf(f, [opts], callback)`
The first parameter will be interpreted as a globbing pattern for files. If you
want to disable globbing you can do so with `opts.disableGlob` (defaults to
`false`). This might be handy, for instance, if you have filenames that contain
globbing wildcard characters.
The callback will be called with an error if there is one. Certain
errors are handled for you:
* Windows: `EBUSY` and `ENOTEMPTY` - rimraf will back off a maximum of
`opts.maxBusyTries` times before giving up, adding 100ms of wait
between each attempt. The default `maxBusyTries` is 3.
* `ENOENT` - If the file doesn't exist, rimraf will return
successfully, since your desired outcome is already the case.
* `EMFILE` - Since `readdir` requires opening a file descriptor, it's
possible to hit `EMFILE` if too many file descriptors are in use.
In the sync case, there's nothing to be done for this. But in the
async case, rimraf will gradually back off with timeouts up to
`opts.emfileWait` ms, which defaults to 1000.
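For example, basic async usage looks like this (a minimal sketch; the path
is illustrative):
```javascript
var rimraf = require('rimraf')

rimraf('/tmp/some-build-output', function (er) {
  if (er) throw er
  // ENOENT counts as success, so this runs even if the path never existed.
  console.log('removed')
})
```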
## options
* unlink, chmod, stat, lstat, rmdir, readdir,
unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync
In order to use a custom file system library, you can override
specific fs functions on the options object.
If any of these functions are present on the options object, then
the supplied function will be used instead of the default fs
method.
Sync methods are only relevant for `rimraf.sync()`, of course.
For example:
```javascript
var myCustomFS = require('some-custom-fs')
rimraf('some-thing', myCustomFS, callback)
```
* maxBusyTries
If an `EBUSY`, `ENOTEMPTY`, or `EPERM` error code is encountered
on Windows systems, then rimraf will retry with a linear backoff
wait of 100ms longer on each try. The default maxBusyTries is 3.
Only relevant for async usage.
* emfileWait
If an `EMFILE` error is encountered, then rimraf will retry
repeatedly with a linear backoff of 1ms longer on each try, until
the timeout counter hits this max. The default limit is 1000.
If you repeatedly encounter `EMFILE` errors, then consider using
[graceful-fs](http://npm.im/graceful-fs) in your program.
Only relevant for async usage.
* glob
Set to `false` to disable [glob](http://npm.im/glob) pattern
matching.
Set to an object to pass options to the glob module. The default
glob options are `{ nosort: true, silent: true }`.
Glob version 6 is used in this module.
Relevant for both sync and async usage.
* disableGlob
Set to any non-falsey value to disable globbing entirely.
(Equivalent to setting `glob: false`.)
## rimraf.sync
It can remove stuff synchronously, too. But that's not so good. Use
the async API. It's better.
## CLI
If installed with `npm install rimraf -g` it can be used as a global
command `rimraf <path> [<path> ...]` which is useful for cross platform support.
## mkdirp
If you need to create a directory recursively, check out
[mkdirp](https://github.com/substack/node-mkdirp).

node_modules/node-pre-gyp/node_modules/rimraf/bin.js generated vendored Normal file

@@ -0,0 +1,50 @@
#!/usr/bin/env node
var rimraf = require('./')
var help = false
var dashdash = false
var noglob = false
var args = process.argv.slice(2).filter(function(arg) {
if (dashdash)
return !!arg
else if (arg === '--')
dashdash = true
else if (arg === '--no-glob' || arg === '-G')
noglob = true
else if (arg === '--glob' || arg === '-g')
noglob = false
else if (arg.match(/^(-+|\/)(h(elp)?|\?)$/))
help = true
else
return !!arg
})
if (help || args.length === 0) {
// If they didn't ask for help, then this is not a "success"
var log = help ? console.log : console.error
log('Usage: rimraf <path> [<path> ...]')
log('')
log(' Deletes all files and folders at "path" recursively.')
log('')
log('Options:')
log('')
log(' -h, --help Display this usage info')
log(' -G, --no-glob Do not expand glob patterns in arguments')
log(' -g, --glob Expand glob patterns in arguments (default)')
process.exit(help ? 0 : 1)
} else
go(0)
function go (n) {
if (n >= args.length)
return
var options = {}
if (noglob)
options = { glob: false }
rimraf(args[n], options, function (er) {
if (er)
throw er
go(n+1)
})
}


@@ -0,0 +1,67 @@
{
"_from": "rimraf@^2.7.1",
"_id": "rimraf@2.7.1",
"_inBundle": false,
"_integrity": "sha512-uWjbaKIK3T1OSVptzX7Nl6PvQ3qAGtKEtVRjRuazjfL3Bx5eI409VZSqgND+4UNnmzLVdPj9FqFJNPqBZFve4w==",
"_location": "/node-pre-gyp/rimraf",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "rimraf@^2.7.1",
"name": "rimraf",
"escapedName": "rimraf",
"rawSpec": "^2.7.1",
"saveSpec": null,
"fetchSpec": "^2.7.1"
},
"_requiredBy": [
"/node-pre-gyp"
],
"_resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.7.1.tgz",
"_shasum": "35797f13a7fdadc566142c29d4f07ccad483e3ec",
"_spec": "rimraf@^2.7.1",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bin": {
"rimraf": "bin.js"
},
"bugs": {
"url": "https://github.com/isaacs/rimraf/issues"
},
"bundleDependencies": false,
"dependencies": {
"glob": "^7.1.3"
},
"deprecated": false,
"description": "A deep deletion module for node (like `rm -rf`)",
"devDependencies": {
"mkdirp": "^0.5.1",
"tap": "^12.1.1"
},
"files": [
"LICENSE",
"README.md",
"bin.js",
"rimraf.js"
],
"homepage": "https://github.com/isaacs/rimraf#readme",
"license": "ISC",
"main": "rimraf.js",
"name": "rimraf",
"repository": {
"type": "git",
"url": "git://github.com/isaacs/rimraf.git"
},
"scripts": {
"postpublish": "git push origin --all; git push origin --tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap test/*.js"
},
"version": "2.7.1"
}

node_modules/node-pre-gyp/node_modules/rimraf/rimraf.js generated vendored Normal file

@@ -0,0 +1,372 @@
module.exports = rimraf
rimraf.sync = rimrafSync
var assert = require("assert")
var path = require("path")
var fs = require("fs")
var glob = undefined
try {
glob = require("glob")
} catch (_err) {
// treat glob as optional.
}
var _0666 = parseInt('666', 8)
var defaultGlobOpts = {
nosort: true,
silent: true
}
// for EMFILE handling
var timeout = 0
var isWindows = (process.platform === "win32")
function defaults (options) {
var methods = [
'unlink',
'chmod',
'stat',
'lstat',
'rmdir',
'readdir'
]
methods.forEach(function(m) {
options[m] = options[m] || fs[m]
m = m + 'Sync'
options[m] = options[m] || fs[m]
})
options.maxBusyTries = options.maxBusyTries || 3
options.emfileWait = options.emfileWait || 1000
if (options.glob === false) {
options.disableGlob = true
}
if (options.disableGlob !== true && glob === undefined) {
throw Error('glob dependency not found, set `options.disableGlob = true` if intentional')
}
options.disableGlob = options.disableGlob || false
options.glob = options.glob || defaultGlobOpts
}
function rimraf (p, options, cb) {
if (typeof options === 'function') {
cb = options
options = {}
}
assert(p, 'rimraf: missing path')
assert.equal(typeof p, 'string', 'rimraf: path should be a string')
assert.equal(typeof cb, 'function', 'rimraf: callback function required')
assert(options, 'rimraf: invalid options argument provided')
assert.equal(typeof options, 'object', 'rimraf: options should be object')
defaults(options)
var busyTries = 0
var errState = null
var n = 0
if (options.disableGlob || !glob.hasMagic(p))
return afterGlob(null, [p])
options.lstat(p, function (er, stat) {
if (!er)
return afterGlob(null, [p])
glob(p, options.glob, afterGlob)
})
function next (er) {
errState = errState || er
if (--n === 0)
cb(errState)
}
function afterGlob (er, results) {
if (er)
return cb(er)
n = results.length
if (n === 0)
return cb()
results.forEach(function (p) {
rimraf_(p, options, function CB (er) {
if (er) {
if ((er.code === "EBUSY" || er.code === "ENOTEMPTY" || er.code === "EPERM") &&
busyTries < options.maxBusyTries) {
busyTries ++
var time = busyTries * 100
// try again, with the same exact callback as this one.
return setTimeout(function () {
rimraf_(p, options, CB)
}, time)
}
// this one won't happen if graceful-fs is used.
if (er.code === "EMFILE" && timeout < options.emfileWait) {
return setTimeout(function () {
rimraf_(p, options, CB)
}, timeout ++)
}
// already gone
if (er.code === "ENOENT") er = null
}
timeout = 0
next(er)
})
})
}
}
// Two possible strategies.
// 1. Assume it's a file. unlink it, then do the dir stuff on EPERM or EISDIR
// 2. Assume it's a directory. readdir, then do the file stuff on ENOTDIR
//
// Both result in an extra syscall when you guess wrong. However, there
// are likely far more normal files in the world than directories. This
// is based on the assumption that the average number of files per
// directory is >= 1.
//
// If anyone ever complains about this, then I guess the strategy could
// be made configurable somehow. But until then, YAGNI.
function rimraf_ (p, options, cb) {
assert(p)
assert(options)
assert(typeof cb === 'function')
// sunos lets the root user unlink directories, which is... weird.
// so we have to lstat here and make sure it's not a dir.
options.lstat(p, function (er, st) {
if (er && er.code === "ENOENT")
return cb(null)
// Windows can EPERM on stat. Life is suffering.
if (er && er.code === "EPERM" && isWindows)
fixWinEPERM(p, options, er, cb)
if (st && st.isDirectory())
return rmdir(p, options, er, cb)
options.unlink(p, function (er) {
if (er) {
if (er.code === "ENOENT")
return cb(null)
if (er.code === "EPERM")
return (isWindows)
? fixWinEPERM(p, options, er, cb)
: rmdir(p, options, er, cb)
if (er.code === "EISDIR")
return rmdir(p, options, er, cb)
}
return cb(er)
})
})
}
function fixWinEPERM (p, options, er, cb) {
assert(p)
assert(options)
assert(typeof cb === 'function')
if (er)
assert(er instanceof Error)
options.chmod(p, _0666, function (er2) {
if (er2)
cb(er2.code === "ENOENT" ? null : er)
else
options.stat(p, function(er3, stats) {
if (er3)
cb(er3.code === "ENOENT" ? null : er)
else if (stats.isDirectory())
rmdir(p, options, er, cb)
else
options.unlink(p, cb)
})
})
}
function fixWinEPERMSync (p, options, er) {
assert(p)
assert(options)
if (er)
assert(er instanceof Error)
try {
options.chmodSync(p, _0666)
} catch (er2) {
if (er2.code === "ENOENT")
return
else
throw er
}
try {
var stats = options.statSync(p)
} catch (er3) {
if (er3.code === "ENOENT")
return
else
throw er
}
if (stats.isDirectory())
rmdirSync(p, options, er)
else
options.unlinkSync(p)
}
function rmdir (p, options, originalEr, cb) {
assert(p)
assert(options)
if (originalEr)
assert(originalEr instanceof Error)
assert(typeof cb === 'function')
// try to rmdir first, and only readdir on ENOTEMPTY or EEXIST (SunOS)
// if we guessed wrong, and it's not a directory, then
// raise the original error.
options.rmdir(p, function (er) {
if (er && (er.code === "ENOTEMPTY" || er.code === "EEXIST" || er.code === "EPERM"))
rmkids(p, options, cb)
else if (er && er.code === "ENOTDIR")
cb(originalEr)
else
cb(er)
})
}
function rmkids(p, options, cb) {
assert(p)
assert(options)
assert(typeof cb === 'function')
options.readdir(p, function (er, files) {
if (er)
return cb(er)
var n = files.length
if (n === 0)
return options.rmdir(p, cb)
var errState
files.forEach(function (f) {
rimraf(path.join(p, f), options, function (er) {
if (errState)
return
if (er)
return cb(errState = er)
if (--n === 0)
options.rmdir(p, cb)
})
})
})
}
// this looks simpler, and is strictly *faster*, but will
// tie up the JavaScript thread and fail on excessively
// deep directory trees.
function rimrafSync (p, options) {
options = options || {}
defaults(options)
assert(p, 'rimraf: missing path')
assert.equal(typeof p, 'string', 'rimraf: path should be a string')
assert(options, 'rimraf: missing options')
assert.equal(typeof options, 'object', 'rimraf: options should be object')
var results
if (options.disableGlob || !glob.hasMagic(p)) {
results = [p]
} else {
try {
options.lstatSync(p)
results = [p]
} catch (er) {
results = glob.sync(p, options.glob)
}
}
if (!results.length)
return
for (var i = 0; i < results.length; i++) {
var p = results[i]
try {
var st = options.lstatSync(p)
} catch (er) {
if (er.code === "ENOENT")
return
// Windows can EPERM on stat. Life is suffering.
if (er.code === "EPERM" && isWindows)
fixWinEPERMSync(p, options, er)
}
try {
// sunos lets the root user unlink directories, which is... weird.
if (st && st.isDirectory())
rmdirSync(p, options, null)
else
options.unlinkSync(p)
} catch (er) {
if (er.code === "ENOENT")
return
if (er.code === "EPERM")
return isWindows ? fixWinEPERMSync(p, options, er) : rmdirSync(p, options, er)
if (er.code !== "EISDIR")
throw er
rmdirSync(p, options, er)
}
}
}
function rmdirSync (p, options, originalEr) {
assert(p)
assert(options)
if (originalEr)
assert(originalEr instanceof Error)
try {
options.rmdirSync(p)
} catch (er) {
if (er.code === "ENOENT")
return
if (er.code === "ENOTDIR")
throw originalEr
if (er.code === "ENOTEMPTY" || er.code === "EEXIST" || er.code === "EPERM")
rmkidsSync(p, options)
}
}
function rmkidsSync (p, options) {
assert(p)
assert(options)
options.readdirSync(p).forEach(function (f) {
rimrafSync(path.join(p, f), options)
})
// We only end up here once we got ENOTEMPTY at least once, and
// at this point, we are guaranteed to have removed all the kids.
// So, we know that it won't be ENOENT or ENOTDIR or anything else.
// try really hard to delete stuff on windows, because it has a
// PROFOUNDLY annoying habit of not closing handles promptly when
// files are deleted, resulting in spurious ENOTEMPTY errors.
var retries = isWindows ? 100 : 1
var i = 0
do {
var threw = true
try {
var ret = options.rmdirSync(p, options)
threw = false
return ret
} finally {
if (++i < retries && threw)
continue
}
} while (true)
}


@@ -0,0 +1,39 @@
# changes log
## 5.7
* Add `minVersion` method
## 5.6
* Move boolean `loose` param to an options object, with
backwards-compatibility protection.
* Add ability to opt out of special prerelease version handling with
the `includePrerelease` option flag.
## 5.5
* Add version coercion capabilities
## 5.4
* Add intersection checking
## 5.3
* Add `minSatisfying` method
## 5.2
* Add `prerelease(v)` that returns prerelease components
## 5.1
* Add Backus-Naur for ranges
* Remove excessively cute inspection methods
## 5.0
* Remove AMD/Browserified build artifacts
* Fix ltr and gtr when using the `*` range
* Fix for range `*` with a prerelease identifier

node_modules/node-pre-gyp/node_modules/semver/LICENSE generated vendored Normal file

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

node_modules/node-pre-gyp/node_modules/semver/README.md generated vendored Normal file

@@ -0,0 +1,412 @@
semver(1) -- The semantic versioner for npm
===========================================
## Install
```bash
npm install --save semver
```
## Usage
As a node module:
```js
const semver = require('semver')
semver.valid('1.2.3') // '1.2.3'
semver.valid('a.b.c') // null
semver.clean(' =v1.2.3 ') // '1.2.3'
semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
semver.gt('1.2.3', '9.8.7') // false
semver.lt('1.2.3', '9.8.7') // true
semver.minVersion('>=1.0.0') // '1.0.0'
semver.valid(semver.coerce('v2')) // '2.0.0'
semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7'
```
As a command-line utility:
```
$ semver -h
A JavaScript implementation of the https://semver.org/ specification
Copyright Isaac Z. Schlueter
Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence
Options:
-r --range <range>
Print versions that match the specified range.
-i --increment [<level>]
Increment a version by the specified level. Level can
be one of: major, minor, patch, premajor, preminor,
prepatch, or prerelease. Default level is 'patch'.
Only one version may be specified.
--preid <identifier>
Identifier to be used to prefix premajor, preminor,
prepatch or prerelease version increments.
-l --loose
Interpret versions and ranges loosely
-p --include-prerelease
Always include prerelease versions in range matching
-c --coerce
Coerce a string into SemVer if possible
(does not imply --loose)
Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.
If no satisfying versions are found, then exits failure.
Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.
```
## Versions
A "version" is described by the `v2.0.0` specification found at
<https://semver.org/>.
A leading `"="` or `"v"` character is stripped off and ignored.
## Ranges
A `version range` is a set of `comparators` which specify versions
that satisfy the range.
A `comparator` is composed of an `operator` and a `version`. The set
of primitive `operators` is:
* `<` Less than
* `<=` Less than or equal to
* `>` Greater than
* `>=` Greater than or equal to
* `=` Equal. If no operator is specified, then equality is assumed,
so this operator is optional, but MAY be included.
For example, the comparator `>=1.2.7` would match the versions
`1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6`
or `1.1.0`.
Comparators can be joined by whitespace to form a `comparator set`,
which is satisfied by the **intersection** of all of the comparators
it includes.
A range is composed of one or more comparator sets, joined by `||`. A
version matches a range if and only if every comparator in at least
one of the `||`-separated comparator sets is satisfied by the version.
For example, the range `>=1.2.7 <1.3.0` would match the versions
`1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`,
or `1.1.0`.
The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`,
`1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`.
### Prerelease Tags
If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then
it will only be allowed to satisfy comparator sets if at least one
comparator with the same `[major, minor, patch]` tuple also has a
prerelease tag.
For example, the range `>1.2.3-alpha.3` would be allowed to match the
version `1.2.3-alpha.7`, but it would *not* be satisfied by
`3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater
than" `1.2.3-alpha.3` according to the SemVer sort rules. The version
range only accepts prerelease tags on the `1.2.3` version. The
version `3.4.5` *would* satisfy the range, because it does not have a
prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`.
The purpose for this behavior is twofold. First, prerelease versions
frequently are updated very quickly, and contain many breaking changes
that are (by the author's design) not yet fit for public consumption.
Therefore, by default, they are excluded from range matching
semantics.
Second, a user who has opted into using a prerelease version has
clearly indicated the intent to use *that specific* set of
alpha/beta/rc versions. By including a prerelease tag in the range,
the user is indicating that they are aware of the risk. However, it
is still not appropriate to assume that they have opted into taking a
similar risk on the *next* set of prerelease versions.
Note that this behavior can be suppressed (treating all prerelease
versions as if they were normal versions, for the purpose of range
matching) by setting the `includePrerelease` flag on the options
object to any
[functions](https://github.com/npm/node-semver#functions) that do
range matching.
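For example (a short sketch of the rules above):
```js
const semver = require('semver')

semver.satisfies('1.2.3-alpha.7', '>1.2.3-alpha.3') // true: same [major, minor, patch] tuple
semver.satisfies('3.4.5-alpha.9', '>1.2.3-alpha.3') // false: prerelease on a different tuple
semver.satisfies('1.2.4-alpha.1', '>=1.2.3', { includePrerelease: true }) // true: opted in
```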
#### Prerelease Identifiers
The method `.inc` takes an additional `identifier` string argument that
will append the value of the string as a prerelease identifier:
```javascript
semver.inc('1.2.3', 'prerelease', 'beta')
// '1.2.4-beta.0'
```
command-line example:
```bash
$ semver 1.2.3 -i prerelease --preid beta
1.2.4-beta.0
```
Which then can be used to increment further:
```bash
$ semver 1.2.4-beta.0 -i prerelease
1.2.4-beta.1
```
### Advanced Range Syntax
Advanced range syntax desugars to primitive comparators in
deterministic ways.
Advanced ranges may be combined in the same way as primitive
comparators using white space or `||`.
#### Hyphen Ranges `X.Y.Z - A.B.C`
Specifies an inclusive set.
* `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4`
If a partial version is provided as the first version in the inclusive
range, then the missing pieces are replaced with zeroes.
* `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4`
If a partial version is provided as the second version in the
inclusive range, then all versions that start with the supplied parts
of the tuple are accepted, but nothing that would be greater than the
provided tuple parts.
* `1.2.3 - 2.3` := `>=1.2.3 <2.4.0`
* `1.2.3 - 2` := `>=1.2.3 <3.0.0`
#### X-Ranges `1.2.x` `1.X` `1.2.*` `*`
Any of `X`, `x`, or `*` may be used to "stand in" for one of the
numeric values in the `[major, minor, patch]` tuple.
* `*` := `>=0.0.0` (Any version satisfies)
* `1.x` := `>=1.0.0 <2.0.0` (Matching major version)
* `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions)
A partial version range is treated as an X-Range, so the special
character is in fact optional.
* `""` (empty string) := `*` := `>=0.0.0`
* `1` := `1.x.x` := `>=1.0.0 <2.0.0`
* `1.2` := `1.2.x` := `>=1.2.0 <1.3.0`
#### Tilde Ranges `~1.2.3` `~1.2` `~1`
Allows patch-level changes if a minor version is specified on the
comparator. Allows minor-level changes if not.
* `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0`
* `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`)
* `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`)
* `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0`
* `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`)
* `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`)
* `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in
the `1.2.3` version will be allowed, if they are greater than or
equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but
`1.2.4-beta.2` would not, because it is a prerelease of a
different `[major, minor, patch]` tuple.
#### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4`
Allows changes that do not modify the left-most non-zero digit in the
`[major, minor, patch]` tuple. In other words, this allows patch and
minor updates for versions `1.0.0` and above, patch updates for
versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`.
Many authors treat a `0.x` version as if the `x` were the major
"breaking-change" indicator.
Caret ranges are ideal when an author may make breaking changes
between `0.2.4` and `0.3.0` releases, which is a common practice.
However, it presumes that there will *not* be breaking changes between
`0.2.4` and `0.2.5`. It allows for changes that are presumed to be
additive (but non-breaking), according to commonly observed practices.
* `^1.2.3` := `>=1.2.3 <2.0.0`
* `^0.2.3` := `>=0.2.3 <0.3.0`
* `^0.0.3` := `>=0.0.3 <0.0.4`
* `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in
the `1.2.3` version will be allowed, if they are greater than or
equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but
`1.2.4-beta.2` would not, because it is a prerelease of a
different `[major, minor, patch]` tuple.
* `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the
`0.0.3` version *only* will be allowed, if they are greater than or
equal to `beta`. So, `0.0.3-pr.2` would be allowed.
When parsing caret ranges, a missing `patch` value desugars to the
number `0`, but will allow flexibility within that value, even if the
major and minor versions are both `0`.
* `^1.2.x` := `>=1.2.0 <2.0.0`
* `^0.0.x` := `>=0.0.0 <0.1.0`
* `^0.0` := `>=0.0.0 <0.1.0`
Missing `minor` and `patch` values will desugar to zero, but also
allow flexibility within those values, even if the major version is
zero.
* `^1.x` := `>=1.0.0 <2.0.0`
* `^0.x` := `>=0.0.0 <1.0.0`
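Passing any of these forms to `validRange` shows the primitive comparators
they desugar to (a quick sketch):
```js
const semver = require('semver')

semver.validRange('~1.2.3') // '>=1.2.3 <1.3.0'
semver.validRange('^0.2.3') // '>=0.2.3 <0.3.0'
semver.validRange('1.2.x')  // '>=1.2.0 <1.3.0'
```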
### Range Grammar
Putting all this together, here is a Backus-Naur grammar for ranges,
for the benefit of parser authors:
```bnf
range-set ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range ::= hyphen | simple ( ' ' simple ) * | ''
hyphen ::= partial ' - ' partial
simple ::= primitive | partial | tilde | caret
primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr ::= 'x' | 'X' | '*' | nr
nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde ::= '~' partial
caret ::= '^' partial
qualifier ::= ( '-' pre )? ( '+' build )?
pre ::= parts
build ::= parts
parts ::= part ( '.' part ) *
part ::= nr | [-0-9A-Za-z]+
```
## Functions
All methods and classes take a final `options` object argument. All
options in this object are `false` by default. The options supported
are:
- `loose` Be more forgiving about not-quite-valid semver strings.
(Any resulting output will always be 100% strict compliant, of
course.) For backwards compatibility reasons, if the `options`
argument is a boolean value instead of an object, it is interpreted
to be the `loose` param.
- `includePrerelease` Set to suppress the [default
behavior](https://github.com/npm/node-semver#prerelease-tags) of
excluding prerelease tagged versions from ranges unless they are
explicitly opted into.
Strict-mode Comparators and Ranges will be strict about the SemVer
strings that they parse.
* `valid(v)`: Return the parsed version, or null if it's not valid.
* `inc(v, release)`: Return the version incremented by the release
type (`major`, `premajor`, `minor`, `preminor`, `patch`,
`prepatch`, or `prerelease`), or null if it's not valid
* `premajor` in one call will bump the version up to the next major
version and down to a prerelease of that major version.
`preminor`, and `prepatch` work the same way.
* If called from a non-prerelease version, the `prerelease` will work the
same as `prepatch`. It increments the patch version, then makes a
prerelease. If the input version is already a prerelease it simply
increments it.
* `prerelease(v)`: Returns an array of prerelease components, or null
if none exist. Example: `prerelease('1.2.3-alpha.1') -> ['alpha', 1]`
* `major(v)`: Return the major version number.
* `minor(v)`: Return the minor version number.
* `patch(v)`: Return the patch version number.
* `intersects(r1, r2, loose)`: Return true if the two supplied ranges
or comparators intersect.
* `parse(v)`: Attempt to parse a string as a semantic version, returning either
a `SemVer` object or `null`.
### Comparison
* `gt(v1, v2)`: `v1 > v2`
* `gte(v1, v2)`: `v1 >= v2`
* `lt(v1, v2)`: `v1 < v2`
* `lte(v1, v2)`: `v1 <= v2`
* `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent,
even if they're not the exact same string. You already know how to
compare strings.
* `neq(v1, v2)`: `v1 != v2` The opposite of `eq`.
* `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call
the corresponding function above. `"==="` and `"!=="` do simple
string comparison, but are included for completeness. Throws if an
invalid comparison string is provided.
* `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if
`v2` is greater. Sorts in ascending order if passed to `Array.sort()`.
* `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions
in descending order when passed to `Array.sort()`.
* `diff(v1, v2)`: Returns difference between two versions by the release type
(`major`, `premajor`, `minor`, `preminor`, `patch`, `prepatch`, or `prerelease`),
or null if the versions are the same.
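Since `compare` sorts in ascending order, it can be handed straight to
`Array.sort()` (a quick sketch):
```js
const semver = require('semver')

const versions = ['1.10.0', '1.2.3', '2.0.0-rc.1', '1.0.0']
versions.sort(semver.compare)
// ['1.0.0', '1.2.3', '1.10.0', '2.0.0-rc.1']
```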
### Comparators
* `intersects(comparator)`: Return true if the comparators intersect
### Ranges
* `validRange(range)`: Return the valid range or null if it's not valid
* `satisfies(version, range)`: Return true if the version satisfies the
range.
* `maxSatisfying(versions, range)`: Return the highest version in the list
that satisfies the range, or `null` if none of them do.
* `minSatisfying(versions, range)`: Return the lowest version in the list
that satisfies the range, or `null` if none of them do.
* `minVersion(range)`: Return the lowest version that can possibly match
the given range.
* `gtr(version, range)`: Return `true` if version is greater than all the
versions possible in the range.
* `ltr(version, range)`: Return `true` if version is less than all the
versions possible in the range.
* `outside(version, range, hilo)`: Return true if the version is outside
the bounds of the range in either the high or low direction. The
`hilo` argument must be either the string `'>'` or `'<'`. (This is
the function called by `gtr` and `ltr`.)
* `intersects(range)`: Return true if any of the range's comparators intersect
Note that, since ranges may be non-contiguous, a version might not be
greater than a range, less than a range, *or* satisfy a range! For
example, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9`
until `2.0.0`, so the version `1.2.10` would not be greater than the
range (because `2.0.1` satisfies, which is higher), nor less than the
range (since `1.2.8` satisfies, which is lower), and it also does not
satisfy the range.
If you want to know if a version satisfies or does not satisfy a
range, use the `satisfies(version, range)` function.
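A sketch of that example, using the functions above:
```js
const semver = require('semver')
const range = '1.2 <1.2.9 || >2.0.0'

semver.gtr('1.2.10', range)       // false: 2.0.1 in the range is higher
semver.ltr('1.2.10', range)       // false: 1.2.8 in the range is lower
semver.satisfies('1.2.10', range) // false: it falls in the hole
```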
### Coercion
* `coerce(version)`: Coerces a string to semver if possible
This aims to provide a very forgiving translation of a non-semver string to
semver. It looks for the first digit in a string, and consumes all
remaining characters which satisfy at least a partial semver (e.g., `1`,
`1.2`, `1.2.3`) up to the max permitted length (256 characters). Longer
versions are simply truncated (`4.6.3.9.2-alpha2` becomes `4.6.3`). All
surrounding text is simply ignored (`v3.4 replaces v3.3.1` becomes
`3.4.0`). Only text which lacks digits will fail coercion (`version one`
is not valid). The maximum length for any semver component considered for
coercion is 16 characters; longer components will be ignored
(`10000000000000000.4.7.4` becomes `4.7.4`). The maximum value for any
semver component is `Number.MAX_SAFE_INTEGER || (2**53 - 1)`; higher value
components are invalid (`9999999999999999.4.7.4` is likely invalid).
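All of those cases in code:
```js
const semver = require('semver')

console.log(semver.coerce('v3.4 replaces v3.3.1').version)    // '3.4.0'
console.log(semver.coerce('4.6.3.9.2-alpha2').version)        // '4.6.3'
console.log(semver.coerce('version one'))                     // null
console.log(semver.coerce('10000000000000000.4.7.4').version) // '4.7.4'
```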
@@ -0,0 +1,160 @@
#!/usr/bin/env node
// Standalone semver comparison program.
// Exits successfully and prints matching version(s) if
// any supplied version is valid and passes all tests.
var argv = process.argv.slice(2)
var versions = []
var range = []
var inc = null
var version = require('../package.json').version
var loose = false
var includePrerelease = false
var coerce = false
var identifier
var semver = require('../semver')
var reverse = false
var options = {}
main()
function main () {
if (!argv.length) return help()
while (argv.length) {
var a = argv.shift()
var indexOfEqualSign = a.indexOf('=')
if (indexOfEqualSign !== -1) {
a = a.slice(0, indexOfEqualSign)
argv.unshift(a.slice(indexOfEqualSign + 1))
}
switch (a) {
case '-rv': case '-rev': case '--rev': case '--reverse':
reverse = true
break
case '-l': case '--loose':
loose = true
break
case '-p': case '--include-prerelease':
includePrerelease = true
break
case '-v': case '--version':
versions.push(argv.shift())
break
case '-i': case '--inc': case '--increment':
switch (argv[0]) {
case 'major': case 'minor': case 'patch': case 'prerelease':
case 'premajor': case 'preminor': case 'prepatch':
inc = argv.shift()
break
default:
inc = 'patch'
break
}
break
case '--preid':
identifier = argv.shift()
break
case '-r': case '--range':
range.push(argv.shift())
break
case '-c': case '--coerce':
coerce = true
break
case '-h': case '--help': case '-?':
return help()
default:
versions.push(a)
break
}
}
  // assign the module-level `options` (declaring a new `var` here would
  // shadow it, leaving success() with an empty options object)
  options = { loose: loose, includePrerelease: includePrerelease }
versions = versions.map(function (v) {
return coerce ? (semver.coerce(v) || { version: v }).version : v
}).filter(function (v) {
return semver.valid(v)
})
if (!versions.length) return fail()
if (inc && (versions.length !== 1 || range.length)) { return failInc() }
for (var i = 0, l = range.length; i < l; i++) {
versions = versions.filter(function (v) {
return semver.satisfies(v, range[i], options)
})
if (!versions.length) return fail()
}
return success(versions)
}
function failInc () {
console.error('--inc can only be used on a single version with no range')
fail()
}
function fail () { process.exit(1) }
function success () {
var compare = reverse ? 'rcompare' : 'compare'
versions.sort(function (a, b) {
return semver[compare](a, b, options)
}).map(function (v) {
return semver.clean(v, options)
}).map(function (v) {
return inc ? semver.inc(v, inc, options, identifier) : v
}).forEach(function (v, i, _) { console.log(v) })
}
function help () {
console.log(['SemVer ' + version,
'',
'A JavaScript implementation of the https://semver.org/ specification',
'Copyright Isaac Z. Schlueter',
'',
'Usage: semver [options] <version> [<version> [...]]',
'Prints valid versions sorted by SemVer precedence',
'',
'Options:',
'-r --range <range>',
' Print versions that match the specified range.',
'',
'-i --increment [<level>]',
' Increment a version by the specified level. Level can',
' be one of: major, minor, patch, premajor, preminor,',
" prepatch, or prerelease. Default level is 'patch'.",
' Only one version may be specified.',
'',
'--preid <identifier>',
' Identifier to be used to prefix premajor, preminor,',
' prepatch or prerelease version increments.',
'',
'-l --loose',
' Interpret versions and ranges loosely',
'',
'-p --include-prerelease',
' Always include prerelease versions in range matching',
'',
'-c --coerce',
' Coerce a string into SemVer if possible',
' (does not imply --loose)',
'',
'Program exits successfully if any valid version satisfies',
'all supplied ranges, and prints all satisfying versions.',
'',
'If no satisfying versions are found, then exits failure.',
'',
'Versions are printed in ascending order, so supplying',
'multiple versions to the utility will just sort them.'
].join('\n'))
}
@@ -0,0 +1,60 @@
{
"_from": "semver@^5.7.1",
"_id": "semver@5.7.1",
"_inBundle": false,
"_integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==",
"_location": "/node-pre-gyp/semver",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "semver@^5.7.1",
"name": "semver",
"escapedName": "semver",
"rawSpec": "^5.7.1",
"saveSpec": null,
"fetchSpec": "^5.7.1"
},
"_requiredBy": [
"/node-pre-gyp"
],
"_resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz",
"_shasum": "a954f931aeba508d307bbf069eff0c01c96116f7",
"_spec": "semver@^5.7.1",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp",
"bin": {
"semver": "bin/semver"
},
"bugs": {
"url": "https://github.com/npm/node-semver/issues"
},
"bundleDependencies": false,
"deprecated": false,
"description": "The semantic version parser used by npm.",
"devDependencies": {
"tap": "^13.0.0-rc.18"
},
"files": [
"bin",
"range.bnf",
"semver.js"
],
"homepage": "https://github.com/npm/node-semver#readme",
"license": "ISC",
"main": "semver.js",
"name": "semver",
"repository": {
"type": "git",
"url": "git+https://github.com/npm/node-semver.git"
},
"scripts": {
"postpublish": "git push origin --all; git push origin --tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap"
},
"tap": {
"check-coverage": true
},
"version": "5.7.1"
}
@@ -0,0 +1,16 @@
range-set  ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range      ::= hyphen | simple ( ' ' simple ) * | ''
hyphen     ::= partial ' - ' partial
simple     ::= primitive | partial | tilde | caret
primitive  ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial    ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr         ::= 'x' | 'X' | '*' | nr
nr         ::= '0' | [1-9] ( [0-9] ) *
tilde      ::= '~' partial
caret      ::= '^' partial
qualifier  ::= ( '-' pre )? ( '+' build )?
pre        ::= parts
build      ::= parts
parts      ::= part ( '.' part ) *
part       ::= nr | [-0-9A-Za-z]+
node_modules/node-pre-gyp/node_modules/semver/semver.js generated vendored Normal file
File diff suppressed because it is too large
node_modules/node-pre-gyp/node_modules/tar/LICENSE generated vendored Normal file
@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
node_modules/node-pre-gyp/node_modules/tar/README.md generated vendored Normal file
@@ -0,0 +1,954 @@
# node-tar
[![Build Status](https://travis-ci.org/npm/node-tar.svg?branch=master)](https://travis-ci.org/npm/node-tar)
[Fast](./benchmarks) and full-featured Tar for Node.js
The API is designed to mimic the behavior of `tar(1)` on unix systems.
If you are familiar with how tar works, most of this will hopefully be
straightforward for you. If not, then hopefully this module can teach
you useful unix skills that may come in handy someday :)
## Background
A "tar file" or "tarball" is an archive of file system entries
(directories, files, links, etc.) The name comes from "tape archive".
If you run `man tar` on almost any Unix command line, you'll learn
quite a bit about what it can do, and its history.
Tar has 5 main top-level commands:
* `c` Create an archive
* `r` Replace entries within an archive
* `u` Update entries within an archive (ie, replace if they're newer)
* `t` List out the contents of an archive
* `x` Extract an archive to disk
The other flags and options modify how this top level function works.
## High-Level API
These 5 functions are the high-level API. All of them have a
single-character name (for unix nerds familiar with `tar(1)`) as well
as a long name (for everyone else).
All the high-level functions take the following arguments, all three
of which are optional and may be omitted.
1. `options` - An optional object specifying various options
2. `paths` - An array of paths to add or extract
3. `callback` - Called when the command is completed, if async. (If
sync or no file specified, providing a callback throws a
`TypeError`.)
If the command is sync (ie, if `options.sync=true`), then the
callback is not allowed, since the action will be completed immediately.
If a `file` argument is specified, and the command is async, then a
`Promise` is returned. In this case, if async, a callback may be
provided which is called when the command is completed.
If a `file` option is not specified, then a stream is returned. For
`create`, this is a readable stream of the generated archive. For
`list` and `extract` this is a writable stream that an archive should
be written into. If a file is not specified, then a callback is not
allowed, because you're already getting a stream to work with.
`replace` and `update` only work on existing archives, and so require
a `file` argument.
Sync commands without a file argument return a stream that acts on its
input immediately in the same tick. For readable streams, this means
that all of the data is immediately available by calling
`stream.read()`. For writable streams, it will be acted upon as soon
as it is provided, but this can be at any time.
### Warnings
Some things cause tar to emit a warning, but should usually not cause
the entire operation to fail. There are three ways to handle
warnings:
1. **Ignore them** (default) Invalid entries won't be put in the
archive, and invalid entries won't be unpacked. This is usually
fine, but can hide failures that you might care about.
2. **Notice them** Add an `onwarn` function to the options, or listen
to the `'warn'` event on any tar stream. The function will get
called as `onwarn(message, data)`. Handle as appropriate.
3. **Explode them.** Set `strict: true` in the options object, and
`warn` messages will be emitted as `'error'` events instead. If
there's no `error` handler, this causes the program to crash. If
used with a promise-returning/callback-taking method, then it'll
send the error to the promise/callback.
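A sketch of options 2 and 3 (the archive name is made up):
```js
const tar = require('tar')

// Notice warnings, but let the extraction continue
tar.x({
  file: 'archive.tgz',
  onwarn: (message, data) => console.error('tar warning:', message)
})

// Or explode: with `strict`, warnings become errors on the returned promise
tar.x({ file: 'archive.tgz', strict: true })
  .catch(er => console.error('extraction failed:', er))
```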
### Examples
The API mimics the `tar(1)` command line functionality, with aliases
for more human-readable option and function names. The goal is that
if you know how to use `tar(1)` in Unix, then you know how to use
`require('tar')` in JavaScript.
To replicate `tar czf my-tarball.tgz files and folders`, you'd do:
```js
tar.c(
{
gzip: <true|gzip options>,
file: 'my-tarball.tgz'
},
['some', 'files', 'and', 'folders']
).then(_ => { .. tarball has been created .. })
```
To replicate `tar cz files and folders > my-tarball.tgz`, you'd do:
```js
tar.c( // or tar.create
{
gzip: <true|gzip options>
},
['some', 'files', 'and', 'folders']
).pipe(fs.createWriteStream('my-tarball.tgz'))
```
To replicate `tar xf my-tarball.tgz` you'd do:
```js
tar.x( // or tar.extract
{
file: 'my-tarball.tgz'
}
).then(_=> { .. tarball has been dumped in cwd .. })
```
To replicate `cat my-tarball.tgz | tar x -C some-dir --strip=1`:
```js
fs.createReadStream('my-tarball.tgz').pipe(
tar.x({
strip: 1,
C: 'some-dir' // alias for cwd:'some-dir', also ok
})
)
```
To replicate `tar tf my-tarball.tgz`, do this:
```js
tar.t({
file: 'my-tarball.tgz',
onentry: entry => { .. do whatever with it .. }
})
```
To replicate `cat my-tarball.tgz | tar t` do:
```js
fs.createReadStream('my-tarball.tgz')
.pipe(tar.t())
.on('entry', entry => { .. do whatever with it .. })
```
To do anything synchronous, add `sync: true` to the options. Note
that sync functions don't take a callback and don't return a promise.
When the function returns, it's already done. Sync methods without a
file argument return a sync stream, which flushes immediately. But,
of course, it still won't be done until you `.end()` it.
To filter entries, add `filter: <function>` to the options.
Tar-creating methods call the filter with `filter(path, stat)`.
Tar-reading methods (including extraction) call the filter with
`filter(path, entry)`. The filter is called in the `this`-context of
the `Pack` or `Unpack` stream object.
The arguments list to `tar t` and `tar x` specifies a list of filenames
to extract or list, so they're equivalent to a filter that tests if
the file is in the list.
For those who _aren't_ fans of tar's single-character command names:
```
tar.c === tar.create
tar.r === tar.replace (appends to archive, file is required)
tar.u === tar.update (appends if newer, file is required)
tar.x === tar.extract
tar.t === tar.list
```
Keep reading for all the command descriptions and options, as well as
the low-level API that they are built on.
### tar.c(options, fileList, callback) [alias: tar.create]
Create a tarball archive.
The `fileList` is an array of paths to add to the tarball. Adding a
directory also adds its children recursively.
An entry in `fileList` that starts with an `@` symbol is a tar archive
whose entries will be added. To add a file that starts with `@`,
prepend it with `./`.
The following options are supported:
- `file` Write the tarball archive to the specified filename. If this
is specified, then the callback will be fired when the file has been
written, and a promise will be returned that resolves when the file
is written. If a filename is not specified, then a Readable Stream
will be returned which will emit the file data. [Alias: `f`]
- `sync` Act synchronously. If this is set, then any provided file
will be fully written after the call to `tar.c`. If this is set,
and a file is not provided, then the resulting stream will already
have the data ready to `read` or `emit('data')` as soon as you
request it.
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for creating the archive.
Defaults to `process.cwd()`. [Alias: `C`]
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an
object with settings for `zlib.Gzip()` [Alias: `z`]
- `filter` A function that gets called with `(path, stat)` for each
entry being added. Return `true` to add the entry to the archive,
or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`,
`uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note
that `mtime` is still included, because this is necessary for other
time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped
from absolute paths. [Alias: `P`]
- `mode` The mode to set on the created file archive
- `noDirRecurse` Do not recursively archive the contents of
directories. [Alias: `n`]
- `follow` Set to true to pack the targets of symbolic links. Without
this option, symbolic links are archived as such. [Alias: `L`, `h`]
- `noPax` Suppress pax extended headers. Note that this means that
long paths and linkpaths will be truncated, and large or negative
numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries.
Note that this prevents using other mtime-based features like
`tar.update` or the `keepNewer` option with the resulting tar archive.
[Alias: `m`, `no-mtime`]
- `mtime` Set to a `Date` object to force a specific `mtime` for
everything added to the archive. Overridden by `noMtime`.
The following options are mostly internal, but can be modified in some
advanced use cases, such as re-using caches between runs.
- `linkCache` A Map object containing the device and inode value for
any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `readdirCache` A Map object that caches calls to `readdir`.
- `jobs` A number specifying how many concurrent jobs to run.
Defaults to 4.
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 16 MB.
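Putting a few of these options together (paths and names here are hypothetical):
```js
const tar = require('tar')

tar.c(
  {
    gzip: true,
    file: 'myapp-1.0.0.tgz',
    portable: true,                 // drop system-specific metadata
    prefix: 'myapp-1.0.0',          // prefix every entry path
    filter: (path, stat) => !path.endsWith('.log') // skip log files
  },
  ['dist']
).then(() => console.log('tarball written'))
```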
### tar.x(options, fileList, callback) [alias: tar.extract]
Extract a tarball archive.
The `fileList` is an array of paths to extract from the tarball. If
no paths are provided, then all the entries are extracted.
If the archive is gzipped, then tar will detect this and unzip it.
Note that all directories that are created will be forced to be
writable, readable, and listable by their owner, to avoid cases where
a directory prevents extraction of child entries by virtue of its
mode.
Most extraction errors will cause a `warn` event to be emitted. If
the `cwd` is missing, or not a directory, then the extraction will
fail completely.
The following options are supported:
- `cwd` Extract files relative to the specified directory. Defaults
to `process.cwd()`. If provided, this must exist and must be a
directory. [Alias: `C`]
- `file` The archive file to extract. If not specified, then a
Writable stream is returned where the archive data should be
written. [Alias: `f`]
- `sync` Create files and directories synchronously.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `filter` A function that gets called with `(path, entry)` for each
entry being unpacked. Return `true` to unpack the entry from the
archive, or `false` to skip it.
- `newer` Set to true to keep the existing file on disk if it's newer
than the file in the archive. [Alias: `keep-newer`,
`keep-newer-files`]
- `keep` Do not overwrite existing files. In particular, if a file
appears more than once in an archive, later copies will not
overwrite earlier copies. [Alias: `k`, `keep-existing`]
- `preservePaths` Allow absolute paths, paths containing `..`, and
extracting through symbolic links. By default, `/` is stripped from
absolute paths, `..` paths are not extracted, and any file whose
location would be modified by a symbolic link is not extracted.
[Alias: `P`]
- `unlink` Unlink files before creating them. Without this option,
tar overwrites existing files, which preserves existing hardlinks.
With this option, existing hardlinks will be broken, as will any
symlink that would affect the location of an extracted file. [Alias:
`U`]
- `strip` Remove the specified number of leading path elements.
Pathnames with fewer elements will be silently skipped. Note that
the pathname is edited after applying the filter, but before
security checks. [Alias: `strip-components`, `stripComponents`]
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `preserveOwner` If true, tar will set the `uid` and `gid` of
extracted entries to the `uid` and `gid` fields in the archive.
This defaults to true when run as root, and false otherwise. If
false, then files and directories will be set with the owner and
group of the user running the process. This is similar to `-p` in
`tar(1)`, but ACLs and other system-specific data is never unpacked
in this implementation, and modes are set by default already.
[Alias: `p`]
- `uid` Set to a number to force ownership of all extracted files and
folders, and all implicitly created directories, to be owned by the
specified user id, regardless of the `uid` field in the archive.
Cannot be used along with `preserveOwner`. Requires also setting a
`gid` option.
- `gid` Set to a number to force ownership of all extracted files and
folders, and all implicitly created directories, to be owned by the
specified group id, regardless of the `gid` field in the archive.
Cannot be used along with `preserveOwner`. Requires also setting a
`uid` option.
- `noMtime` Set to true to omit writing `mtime` value for extracted
entries. [Alias: `m`, `no-mtime`]
- `transform` Provide a function that takes an `entry` object, and
returns a stream, or any falsey value. If a stream is provided,
then that stream's data will be written instead of the contents of
the archive entry. If a falsey value is provided, then the entry is
written to disk as normal. (To exclude items from extraction, use
the `filter` option described above.)
- `onentry` A function that gets called with `(entry)` for each entry
that passes the filter.
The following options are mostly internal, but can be modified in some
advanced use cases, such as re-using caches between runs.
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 16 MB.
- `umask` Filter the modes of entries like `process.umask()`.
- `dmode` Default mode for directories
- `fmode` Default mode for files
- `dirCache` A Map object of which directories exist.
- `maxMetaEntrySize` The maximum size of meta entries that is
supported. Defaults to 1 MB.
Note that using an asynchronous stream type with the `transform`
option will cause undefined behavior in sync extractions.
[MiniPass](http://npm.im/minipass)-based streams are designed for this
use case.
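For example, undoing the `prefix` from the creation sketch above (hypothetical names again):
```js
const tar = require('tar')

tar.x({
  file: 'myapp-1.0.0.tgz',
  cwd: 'vendor',        // must exist and be a directory
  strip: 1,             // remove the leading 'myapp-1.0.0/' element
  keep: true,           // never overwrite files already on disk
  onentry: entry => console.log('unpacked', entry.path)
})
```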
### tar.t(options, fileList, callback) [alias: tar.list]
List the contents of a tarball archive.
The `fileList` is an array of paths to list from the tarball. If
no paths are provided, then all the entries are listed.
If the archive is gzipped, then tar will detect this and unzip it.
Returns an event emitter that emits `entry` events with
`tar.ReadEntry` objects. However, they don't emit `'data'` or `'end'`
events. (If you want to get actual readable entries, use the
`tar.Parse` class instead.)
The following options are supported:
- `cwd` Extract files relative to the specified directory. Defaults
to `process.cwd()`. [Alias: `C`]
- `file` The archive file to list. If not specified, then a
Writable stream is returned where the archive data should be
written. [Alias: `f`]
- `sync` Read the specified file synchronously. (This has no effect
when a file option isn't specified, because entries are emitted as
fast as they are parsed from the stream anyway.)
- `strict` Treat warnings as crash-worthy errors. Default false.
- `filter` A function that gets called with `(path, entry)` for each
entry being listed. Return `true` to emit the entry from the
archive, or `false` to skip it.
- `onentry` A function that gets called with `(entry)` for each entry
that passes the filter. This is important for when both `file` and
`sync` are set, because it will be called synchronously.
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 16 MB.
- `noResume` By default, `entry` streams are resumed immediately after
the call to `onentry`. Set `noResume: true` to suppress this
behavior. Note that by opting into this, the stream will never
complete until the entry data is consumed.
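For instance, collecting entry paths synchronously (archive name made up):
```js
const tar = require('tar')

const paths = []
tar.t({
  file: 'myapp-1.0.0.tgz',
  sync: true,
  onentry: entry => paths.push(entry.path) // called synchronously here
})
console.log(paths)
```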
### tar.u(options, fileList, callback) [alias: tar.update]
Add files to an archive if they are newer than the entry already in
the tarball archive.
The `fileList` is an array of paths to add to the tarball. Adding a
directory also adds its children recursively.
An entry in `fileList` that starts with an `@` symbol is a tar archive
whose entries will be added. To add a file that starts with `@`,
prepend it with `./`.
The following options are supported:
- `file` Required. Write the tarball archive to the specified
filename. [Alias: `f`]
- `sync` Act synchronously. If this is set, then any provided file
will be fully written after the call to `tar.u`.
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for adding entries to the
archive. Defaults to `process.cwd()`. [Alias: `C`]
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an
object with settings for `zlib.Gzip()` [Alias: `z`]
- `filter` A function that gets called with `(path, stat)` for each
entry being added. Return `true` to add the entry to the archive,
or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`,
`uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note
that `mtime` is still included, because this is necessary for other
time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped
from absolute paths. [Alias: `P`]
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 16 MB.
- `noDirRecurse` Do not recursively archive the contents of
directories. [Alias: `n`]
- `follow` Set to true to pack the targets of symbolic links. Without
this option, symbolic links are archived as such. [Alias: `L`, `h`]
- `noPax` Suppress pax extended headers. Note that this means that
long paths and linkpaths will be truncated, and large or negative
numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries.
Note that this prevents using other mtime-based features like
`tar.update` or the `keepNewer` option with the resulting tar archive.
[Alias: `m`, `no-mtime`]
- `mtime` Set to a `Date` object to force a specific `mtime` for
everything added to the archive. Overridden by `noMtime`.
### tar.r(options, fileList, callback) [alias: tar.replace]
Add files to an existing archive. Because later entries override
earlier entries, this effectively replaces any existing entries.
The `fileList` is an array of paths to add to the tarball. Adding a
directory also adds its children recursively.
An entry in `fileList` that starts with an `@` symbol is a tar archive
whose entries will be added. To add a file that starts with `@`,
prepend it with `./`.
The following options are supported:
- `file` Required. Write the tarball archive to the specified
filename. [Alias: `f`]
- `sync` Act synchronously. If this is set, then any provided file
will be fully written after the call to `tar.r`.
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for adding entries to the
archive. Defaults to `process.cwd()`. [Alias: `C`]
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an
object with settings for `zlib.Gzip()` [Alias: `z`]
- `filter` A function that gets called with `(path, stat)` for each
entry being added. Return `true` to add the entry to the archive,
or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`,
`uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note
that `mtime` is still included, because this is necessary for other
time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped
from absolute paths. [Alias: `P`]
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 16 MB.
- `noDirRecurse` Do not recursively archive the contents of
directories. [Alias: `n`]
- `follow` Set to true to pack the targets of symbolic links. Without
this option, symbolic links are archived as such. [Alias: `L`, `h`]
- `noPax` Suppress pax extended headers. Note that this means that
long paths and linkpaths will be truncated, and large or negative
numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries.
Note that this prevents using other mtime-based features like
`tar.update` or the `keepNewer` option with the resulting tar archive.
[Alias: `m`, `no-mtime`]
- `mtime` Set to a `Date` object to force a specific `mtime` for
everything added to the archive. Overridden by `noMtime`.
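Both commands in a sketch (note the plain `.tar`; both require an existing archive file):
```js
const tar = require('tar')

// Append only entries newer than what the archive already holds
tar.u({ file: 'myapp.tar', cwd: 'dist' }, ['index.js'])
  .then(() => console.log('updated'))

// Append unconditionally; later entries shadow earlier ones
tar.r({ file: 'myapp.tar', cwd: 'dist' }, ['index.js'])
  .then(() => console.log('replaced'))
```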
## Low-Level API
### class tar.Pack
A readable tar stream.
Has all the standard readable stream interface stuff. `'data'` and
`'end'` events, `read()` method, `pause()` and `resume()`, etc.
#### constructor(options)
The following options are supported:
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `cwd` The current working directory for creating the archive.
Defaults to `process.cwd()`.
- `prefix` A path portion to prefix onto the entries in the archive.
- `gzip` Set to any truthy value to create a gzipped archive, or an
object with settings for `zlib.Gzip()`
- `filter` A function that gets called with `(path, stat)` for each
entry being added. Return `true` to add the entry to the archive,
or `false` to omit it.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`,
`uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note
that `mtime` is still included, because this is necessary for other
time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped
from absolute paths.
- `linkCache` A Map object containing the device and inode value for
any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `readdirCache` A Map object that caches calls to `readdir`.
- `jobs` A number specifying how many concurrent jobs to run.
Defaults to 4.
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 16 MB.
- `noDirRecurse` Do not recursively archive the contents of
directories.
- `follow` Set to true to pack the targets of symbolic links. Without
this option, symbolic links are archived as such.
- `noPax` Suppress pax extended headers. Note that this means that
long paths and linkpaths will be truncated, and large or negative
numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries.
Note that this prevents using other mtime-based features like
`tar.update` or the `keepNewer` option with the resulting tar archive.
- `mtime` Set to a `Date` object to force a specific `mtime` for
everything added to the archive. Overridden by `noMtime`.
#### add(path)
Adds an entry to the archive. Returns the Pack stream.
#### write(path)
Adds an entry to the archive. Returns true if flushed.
#### end()
Finishes the archive.
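A minimal sketch of driving `Pack` by hand (paths are hypothetical):
```js
const fs = require('fs')
const tar = require('tar')

const pack = new tar.Pack({ gzip: true, cwd: 'dist' })
pack.pipe(fs.createWriteStream('out.tgz'))
pack.add('index.js') // add() returns the Pack stream
pack.add('lib')
pack.end()
```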
### class tar.Pack.Sync
Synchronous version of `tar.Pack`.
### class tar.Unpack
A writable stream that unpacks a tar archive onto the file system.
All the normal writable stream stuff is supported. `write()` and
`end()` methods, `'drain'` events, etc.
Note that all directories that are created will be forced to be
writable, readable, and listable by their owner, to avoid cases where
a directory prevents extraction of child entries by virtue of its
mode.
`'close'` is emitted when it's done writing stuff to the file system.
Most unpack errors will cause a `warn` event to be emitted. If the
`cwd` is missing, or not a directory, then an error will be emitted.
#### constructor(options)
- `cwd` Extract files relative to the specified directory. Defaults
to `process.cwd()`. If provided, this must exist and must be a
directory.
- `filter` A function that gets called with `(path, entry)` for each
entry being unpacked. Return `true` to unpack the entry from the
archive, or `false` to skip it.
- `newer` Set to true to keep the existing file on disk if it's newer
than the file in the archive.
- `keep` Do not overwrite existing files. In particular, if a file
appears more than once in an archive, later copies will not
overwrite earlier copies.
- `preservePaths` Allow absolute paths, paths containing `..`, and
extracting through symbolic links. By default, `/` is stripped from
absolute paths, `..` paths are not extracted, and any file whose
location would be modified by a symbolic link is not extracted.
- `unlink` Unlink files before creating them. Without this option,
tar overwrites existing files, which preserves existing hardlinks.
With this option, existing hardlinks will be broken, as will any
symlink that would affect the location of an extracted file.
- `strip` Remove the specified number of leading path elements.
Pathnames with fewer elements will be silently skipped. Note that
the pathname is edited after applying the filter, but before
security checks.
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `umask` Filter the modes of entries like `process.umask()`.
- `dmode` Default mode for directories
- `fmode` Default mode for files
- `dirCache` A Map object of which directories exist.
- `maxMetaEntrySize` The maximum size of meta entries that is
supported. Defaults to 1 MB.
- `preserveOwner` If true, tar will set the `uid` and `gid` of
extracted entries to the `uid` and `gid` fields in the archive.
This defaults to true when run as root, and false otherwise. If
false, then files and directories will be set with the owner and
group of the user running the process. This is similar to `-p` in
`tar(1)`, but ACLs and other system-specific data is never unpacked
in this implementation, and modes are set by default already.
- `win32` True if on a windows platform. Causes behavior where
filenames containing `<|>?` chars are converted to
windows-compatible values while being unpacked.
- `uid` Set to a number to force ownership of all extracted files and
folders, and all implicitly created directories, to be owned by the
specified user id, regardless of the `uid` field in the archive.
Cannot be used along with `preserveOwner`. Requires also setting a
`gid` option.
- `gid` Set to a number to force ownership of all extracted files and
folders, and all implicitly created directories, to be owned by the
specified group id, regardless of the `gid` field in the archive.
Cannot be used along with `preserveOwner`. Requires also setting a
`uid` option.
- `noMtime` Set to true to omit writing `mtime` value for extracted
entries.
- `transform` Provide a function that takes an `entry` object, and
returns a stream, or any falsey value. If a stream is provided,
then that stream's data will be written instead of the contents of
the archive entry. If a falsey value is provided, then the entry is
written to disk as normal. (To exclude items from extraction, use
the `filter` option described above.)
- `strict` Treat warnings as crash-worthy errors. Default false.
- `onentry` A function that gets called with `(entry)` for each entry
that passes the filter.
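A minimal sketch, assuming `archive.tar` exists and `vendor/` is a real directory:
```js
const fs = require('fs')
const tar = require('tar')

fs.createReadStream('archive.tar')
  .pipe(new tar.Unpack({ cwd: 'vendor', strip: 1 }))
  .on('close', () => console.log('done writing to the file system'))
```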
### class tar.Unpack.Sync
Synchronous version of `tar.Unpack`.
Note that using an asynchronous stream type with the `transform`
option will cause undefined behavior in sync unpack streams.
[MiniPass](http://npm.im/minipass)-based streams are designed for this
use case.
### class tar.Parse
A writable stream that parses a tar archive stream. All the standard
writable stream stuff is supported.
If the archive is gzipped, then tar will detect this and unzip it.
Emits `'entry'` events with `tar.ReadEntry` objects, which are
themselves readable streams that you can pipe wherever.
Each `entry` will not emit until the one before it is flushed through,
so make sure to either consume the data (with `on('data', ...)` or
`.pipe(...)`) or throw it away with `.resume()` to keep the stream
flowing.
#### constructor(options)
Returns an event emitter that emits `entry` events with
`tar.ReadEntry` objects.
The following options are supported:
- `strict` Treat warnings as crash-worthy errors. Default false.
- `filter` A function that gets called with `(path, entry)` for each
entry being listed. Return `true` to emit the entry from the
archive, or `false` to skip it.
- `onentry` A function that gets called with `(entry)` for each entry
that passes the filter.
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
#### abort(message, error)
Stop all parsing activities. This is called when there are zlib
errors. It also emits a warning with the message and error provided.
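For example, listing `.json` entries while discarding their bodies (archive name made up):
```js
const fs = require('fs')
const tar = require('tar')

const parser = new tar.Parse({
  filter: (path, entry) => path.endsWith('.json'),
  onentry: entry => {
    console.log(entry.path, entry.size)
    entry.resume() // throw the body away so the stream keeps flowing
  }
})
fs.createReadStream('archive.tgz').pipe(parser)
```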
### class tar.ReadEntry extends [MiniPass](http://npm.im/minipass)
A representation of an entry that is being read out of a tar archive.
It has the following fields:
- `extended` The extended metadata object provided to the constructor.
- `globalExtended` The global extended metadata object provided to the
constructor.
- `remain` The number of bytes remaining to be written into the
stream.
- `blockRemain` The number of 512-byte blocks remaining to be written
into the stream.
- `ignore` Whether this entry should be ignored.
- `meta` True if this represents metadata about the next entry, false
if it represents a filesystem object.
- All the fields from the header, extended header, and global extended
header are added to the ReadEntry object. So it has `path`, `type`,
`size`, `mode`, and so on.
#### constructor(header, extended, globalExtended)
Create a new ReadEntry object with the specified header, extended
header, and global extended header values.
### class tar.WriteEntry extends [MiniPass](http://npm.im/minipass)
A representation of an entry that is being written from the file
system into a tar archive.
Emits data for the Header, and for the Pax Extended Header if one is
required, as well as any body data.
Creating a WriteEntry for a directory does not also create
WriteEntry objects for all of the directory contents.
It has the following fields:
- `path` The path field that will be written to the archive. By
default, this is also the path from the cwd to the file system
object.
- `portable` Omit metadata that is system-specific: `ctime`, `atime`,
`uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note
that `mtime` is still included, because this is necessary for other
time-based operations.
- `myuid` If supported, the uid of the user running the current
process.
- `myuser` The `env.USER` string if set, or `''`. Set as the entry
`uname` field if the file's `uid` matches `this.myuid`.
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for
any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped
from absolute paths.
- `cwd` The current working directory for creating the archive.
Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By
default, this is `path.resolve(this.cwd, this.path)`, but it can be
overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths
replace `\` with `/` and filenames containing the windows-compatible
forms of `<|>?:` characters are converted to actual `<|>?:` characters
in the archive.
- `noPax` Suppress pax extended headers. Note that this means that
long paths and linkpaths will be truncated, and large or negative
numeric values may be interpreted incorrectly.
- `noMtime` Set to true to omit writing `mtime` values for entries.
Note that this prevents using other mtime-based features like
`tar.update` or the `keepNewer` option with the resulting tar archive.
#### constructor(path, options)
`path` is the path of the entry as it is written in the archive.
The following options are supported:
- `portable` Omit metadata that is system-specific: `ctime`, `atime`,
`uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note
that `mtime` is still included, because this is necessary for other
time-based operations.
- `maxReadSize` The maximum buffer size for `fs.read()` operations.
Defaults to 1 MB.
- `linkCache` A Map object containing the device and inode value for
any file whose nlink is > 1, to identify hard links.
- `statCache` A Map object that caches calls to `lstat`.
- `preservePaths` Allow absolute paths. By default, `/` is stripped
from absolute paths.
- `cwd` The current working directory for creating the archive.
Defaults to `process.cwd()`.
- `absolute` The absolute path to the entry on the filesystem. By
default, this is `path.resolve(this.cwd, this.path)`, but it can be
overridden explicitly.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `win32` True if on a windows platform. Causes behavior where paths
replace `\` with `/`.
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `noMtime` Set to true to omit writing `mtime` values for entries.
Note that this prevents using other mtime-based features like
`tar.update` or the `keepNewer` option with the resulting tar archive.
- `umask` Set to restrict the modes on the entries in the archive,
somewhat like how umask works on file creation. Defaults to
`process.umask()` on unix systems, or `0o22` on Windows.
#### warn(message, data)
If strict, emit an error with the provided message.
Otherwise, emit a `'warn'` event with the provided message and data.
### class tar.WriteEntry.Sync
Synchronous version of tar.WriteEntry
### class tar.WriteEntry.Tar
A version of tar.WriteEntry that gets its data from a tar.ReadEntry
instead of from the filesystem.
#### constructor(readEntry, options)
`readEntry` is the entry being read out of another archive.
The following options are supported:
- `portable` Omit metadata that is system-specific: `ctime`, `atime`,
`uid`, `gid`, `uname`, `gname`, `dev`, `ino`, and `nlink`. Note
that `mtime` is still included, because this is necessary for other
time-based operations.
- `preservePaths` Allow absolute paths. By default, `/` is stripped
from absolute paths.
- `strict` Treat warnings as crash-worthy errors. Default false.
- `onwarn` A function that will get called with `(message, data)` for
any warnings encountered.
- `noMtime` Set to true to omit writing `mtime` values for entries.
Note that this prevents using other mtime-based features like
`tar.update` or the `keepNewer` option with the resulting tar archive.
### class tar.Header
A class for reading and writing header blocks.
It has the following fields:
- `nullBlock` True if decoding a block which is entirely composed of
`0x00` null bytes. (Useful because tar files are terminated by
at least 2 null blocks.)
- `cksumValid` True if the checksum in the header is valid, false
otherwise.
- `needPax` True if the values, as encoded, will require a Pax
extended header.
- `path` The path of the entry.
- `mode` The 4 lowest-order octal digits of the file mode. That is,
read/write/execute permissions for world, group, and owner, and the
setuid, setgid, and sticky bits.
- `uid` Numeric user id of the file owner
- `gid` Numeric group id of the file owner
- `size` Size of the file in bytes
- `mtime` Modified time of the file
- `cksum` The checksum of the header. This is generated by adding all
the bytes of the header block, treating the checksum field itself as
all ascii space characters (that is, `0x20`).
- `type` The human-readable name of the type of entry this represents,
or the alphanumeric key if unknown.
- `typeKey` The alphanumeric key for the type of entry this header
represents.
- `linkpath` The target of Link and SymbolicLink entries.
- `uname` Human-readable user name of the file owner
- `gname` Human-readable group name of the file owner
- `devmaj` The major portion of the device number. Always `0` for
files, directories, and links.
- `devmin` The minor portion of the device number. Always `0` for
files, directories, and links.
- `atime` File access time.
- `ctime` File change time.
#### constructor(data, [offset=0])
`data` is optional. It is either a Buffer that should be interpreted
as a tar Header starting at the specified offset and continuing for
512 bytes, or a data object of keys and values to set on the header
object, and eventually encode as a tar Header.
#### decode(block, offset)
Decode the provided buffer starting at the specified offset.
Buffer length must be greater than 512 bytes.
#### set(data)
Set the fields in the data object.
#### encode(buffer, offset)
Encode the header fields into the buffer at the specified offset.
Returns `this.needPax` to indicate whether a Pax Extended Header is
required to properly encode the specified data.
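A round-trip sketch (the file described is imaginary):
```js
const tar = require('tar')

const header = new tar.Header({
  path: 'hello.txt',
  mode: 0o644,
  size: 11,
  type: 'File',
  mtime: new Date()
})
const block = Buffer.alloc(1024)
console.log(header.encode(block, 0)) // false: no Pax header needed

const decoded = new tar.Header(block, 0)
console.log(decoded.path, decoded.size, decoded.cksumValid) // 'hello.txt' 11 true
```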
### class tar.Pax
An object representing a set of key-value pairs in an Pax extended
header entry.
It has the following fields. Where the same name is used, they have
the same semantics as the tar.Header field of the same name.
- `global` True if this represents a global extended header, or false
if it is for a single entry.
- `atime`
- `charset`
- `comment`
- `ctime`
- `gid`
- `gname`
- `linkpath`
- `mtime`
- `path`
- `size`
- `uid`
- `uname`
- `dev`
- `ino`
- `nlink`
#### constructor(object, global)
Set the fields set in the object. `global` is a boolean that defaults
to false.
#### encode()
Return a Buffer containing the header and body for the Pax extended
header entry, or `null` if there is nothing to encode.
#### encodeBody()
Return a string representing the body of the pax extended header
entry.
#### encodeField(fieldName)
Return a string representing the key/value encoding for the specified
fieldName, or `''` if the field is unset.
### tar.Pax.parse(string, extended, global)
Return a new Pax object created by parsing the contents of the string
provided.
If the `extended` object is set, then also add the fields from that
object. (This is necessary because multiple metadata entries can
occur in sequence.)
### tar.types
A translation table for the `type` field in tar headers.
#### tar.types.name.get(code)
Get the human-readable name for a given alphanumeric code.
#### tar.types.code.get(name)
Get the alphanumeric code for a given human-readable name.
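For instance:
```js
const tar = require('tar')

console.log(tar.types.name.get('0'))         // 'File'
console.log(tar.types.code.get('Directory')) // '5'
```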
node_modules/node-pre-gyp/node_modules/tar/index.js generated vendored Normal file
@@ -0,0 +1,18 @@
'use strict'
// high-level commands
exports.c = exports.create = require('./lib/create.js')
exports.r = exports.replace = require('./lib/replace.js')
exports.t = exports.list = require('./lib/list.js')
exports.u = exports.update = require('./lib/update.js')
exports.x = exports.extract = require('./lib/extract.js')
// classes
exports.Pack = require('./lib/pack.js')
exports.Unpack = require('./lib/unpack.js')
exports.Parse = require('./lib/parse.js')
exports.ReadEntry = require('./lib/read-entry.js')
exports.WriteEntry = require('./lib/write-entry.js')
exports.Header = require('./lib/header.js')
exports.Pax = require('./lib/pax.js')
exports.types = require('./lib/types.js')
@@ -0,0 +1,82 @@
{
"_from": "tar@^4.4.13",
"_id": "tar@4.4.13",
"_inBundle": false,
"_integrity": "sha512-w2VwSrBoHa5BsSyH+KxEqeQBAllHhccyMFVHtGtdMpF4W7IRWfZjFiQceJPChOeTsSDVUpER2T8FA93pr0L+QA==",
"_location": "/node-pre-gyp/tar",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "tar@^4.4.13",
"name": "tar",
"escapedName": "tar",
"rawSpec": "^4.4.13",
"saveSpec": null,
"fetchSpec": "^4.4.13"
},
"_requiredBy": [
"/node-pre-gyp"
],
"_resolved": "https://registry.npmjs.org/tar/-/tar-4.4.13.tgz",
"_shasum": "43b364bc52888d555298637b10d60790254ab525",
"_spec": "tar@^4.4.13",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bugs": {
"url": "https://github.com/npm/node-tar/issues"
},
"bundleDependencies": false,
"dependencies": {
"chownr": "^1.1.1",
"fs-minipass": "^1.2.5",
"minipass": "^2.8.6",
"minizlib": "^1.2.1",
"mkdirp": "^0.5.0",
"safe-buffer": "^5.1.2",
"yallist": "^3.0.3"
},
"deprecated": false,
"description": "tar for node",
"devDependencies": {
"chmodr": "^1.2.0",
"end-of-stream": "^1.4.1",
"events-to-array": "^1.1.2",
"mutate-fs": "^2.1.1",
"rimraf": "^2.6.3",
"tap": "^14.6.5",
"tar-fs": "^1.16.3",
"tar-stream": "^1.6.2"
},
"engines": {
"node": ">=4.5"
},
"files": [
"index.js",
"lib/"
],
"homepage": "https://github.com/npm/node-tar#readme",
"license": "ISC",
"name": "tar",
"repository": {
"type": "git",
"url": "git+https://github.com/npm/node-tar.git"
},
"scripts": {
"bench": "for i in benchmarks/*/*.js; do echo $i; for j in {1..5}; do node $i || break; done; done",
"genparse": "node scripts/generate-parse-fixtures.js",
"postpublish": "git push origin --follow-tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap"
},
"tap": {
"coverage-map": "map.js",
"check-coverage": true
},
"version": "4.4.13"
}
node_modules/node-pre-gyp/node_modules/yallist/LICENSE generated vendored Normal file
@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
@@ -0,0 +1,204 @@
# yallist
Yet Another Linked List
There are many doubly-linked list implementations like it, but this
one is mine.
For when an array would be too big, and a Map can't be iterated in
reverse order.
[![Build Status](https://travis-ci.org/isaacs/yallist.svg?branch=master)](https://travis-ci.org/isaacs/yallist) [![Coverage Status](https://coveralls.io/repos/isaacs/yallist/badge.svg?service=github)](https://coveralls.io/github/isaacs/yallist)
## basic usage
```javascript
var yallist = require('yallist')
var myList = yallist.create([1, 2, 3])
myList.push('foo')
myList.unshift('bar')
// of course pop() and shift() are there, too
console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo']
myList.forEach(function (k) {
// walk the list head to tail
})
myList.forEachReverse(function (k, index, list) {
// walk the list tail to head
})
var myDoubledList = myList.map(function (k) {
return k + k
})
// now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo']
// mapReverse is also a thing
var myDoubledListReverse = myList.mapReverse(function (k) {
return k + k
}) // ['foofoo', 6, 4, 2, 'barbar']
var reduced = myList.reduce(function (set, entry) {
set += entry
return set
}, 'start')
console.log(reduced) // 'startbar123foo'
```
## api
The whole API is considered "public".
Functions with the same name as an Array method work more or less the
same way.
There are reverse versions of most things because that's the point.
### Yallist
Default export, the class that holds and manages a list.
Call it with either a forEach-able (like an array) or a set of
arguments, to initialize the list.
The Array-ish methods all act like you'd expect. No magic length,
though, so if you change that it won't automatically prune or add
empty spots.
### Yallist.create(..)
Alias for Yallist function. Some people like factories.
#### yallist.head
The first node in the list
#### yallist.tail
The last node in the list
#### yallist.length
The number of nodes in the list. (Change this at your peril. It is
not magic like Array length.)
#### yallist.toArray()
Convert the list to an array.
#### yallist.forEach(fn, [thisp])
Call a function on each item in the list.
#### yallist.forEachReverse(fn, [thisp])
Call a function on each item in the list, in reverse order.
#### yallist.get(n)
Get the data at position `n` in the list. If you use this a lot,
probably better off just using an Array.
#### yallist.getReverse(n)
Get the data at position `n`, counting from the tail.
#### yallist.map(fn, thisp)
Create a new Yallist with the result of calling the function on each
item.
#### yallist.mapReverse(fn, thisp)
Same as `map`, but in reverse.
#### yallist.pop()
Get the data from the list tail, and remove the tail from the list.
#### yallist.push(item, ...)
Insert one or more items to the tail of the list.
#### yallist.reduce(fn, initialValue)
Like Array.reduce.
#### yallist.reduceReverse
Like Array.reduce, but in reverse.
#### yallist.reverse
Reverse the list in place.
#### yallist.shift()
Get the data from the list head, and remove the head from the list.
#### yallist.slice([from], [to])
Just like Array.slice, but returns a new Yallist.
#### yallist.sliceReverse([from], [to])
Just like yallist.slice, but the result is returned in reverse.
#### yallist.toArray()
Create an array representation of the list.
#### yallist.toArrayReverse()
Create a reversed array representation of the list.
#### yallist.unshift(item, ...)
Insert one or more items to the head of the list.
#### yallist.unshiftNode(node)
Move a Node object to the front of the list. (That is, pull it out of
wherever it lives, and make it the new head.)
If the node belongs to a different list, then that list will remove it
first.
#### yallist.pushNode(node)
Move a Node object to the end of the list. (That is, pull it out of
wherever it lives, and make it the new tail.)
If the node belongs to a list already, then that list will remove it
first.
#### yallist.removeNode(node)
Remove a node from the list, preserving referential integrity of head
and tail and other nodes.
Will throw an error if you try to have a list remove a node that
doesn't belong to it.
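Node moves in a sketch:
```javascript
const Yallist = require('yallist')

const list = Yallist.create(['a', 'b', 'c'])
list.pushNode(list.head)      // move 'a' to the tail
console.log(list.toArray())   // ['b', 'c', 'a']

const other = Yallist.create(['x'])
list.unshiftNode(other.head)  // removed from `other` first
console.log(list.toArray())   // ['x', 'b', 'c', 'a']
console.log(other.length)     // 0
```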
### Yallist.Node
The class that holds the data and is actually the list.
Call with `var n = new Node(value, previousNode, nextNode)`
Note that if you do direct operations on Nodes themselves, it's very
easy to get into weird states where the list is broken. Be careful :)
#### node.next
The next node in the list.
#### node.prev
The previous node in the list.
#### node.value
The data the node contains.
#### node.list
The list to which this node belongs. (Null if it does not belong to
any list.)
@@ -0,0 +1,8 @@
'use strict'
module.exports = function (Yallist) {
Yallist.prototype[Symbol.iterator] = function* () {
for (let walker = this.head; walker; walker = walker.next) {
yield walker.value
}
}
}
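Since yallist.js applies this mixin to the exported class (when the runtime supports `Symbol.iterator`), lists can be consumed directly with `for...of`:
```javascript
const Yallist = require('yallist')

for (const value of Yallist.create([1, 2, 3])) {
  console.log(value) // 1, then 2, then 3
}
```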
@@ -0,0 +1,63 @@
{
"_from": "yallist@^3.0.3",
"_id": "yallist@3.1.1",
"_inBundle": false,
"_integrity": "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==",
"_location": "/node-pre-gyp/yallist",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "yallist@^3.0.3",
"name": "yallist",
"escapedName": "yallist",
"rawSpec": "^3.0.3",
"saveSpec": null,
"fetchSpec": "^3.0.3"
},
"_requiredBy": [
"/node-pre-gyp/minipass",
"/node-pre-gyp/tar"
],
"_resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz",
"_shasum": "dbb7daf9bfd8bac9ab45ebf602b8cbad0d5d08fd",
"_spec": "yallist@^3.0.3",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot\\node_modules\\node-pre-gyp\\node_modules\\tar",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bugs": {
"url": "https://github.com/isaacs/yallist/issues"
},
"bundleDependencies": false,
"dependencies": {},
"deprecated": false,
"description": "Yet Another Linked List",
"devDependencies": {
"tap": "^12.1.0"
},
"directories": {
"test": "test"
},
"files": [
"yallist.js",
"iterator.js"
],
"homepage": "https://github.com/isaacs/yallist#readme",
"license": "ISC",
"main": "yallist.js",
"name": "yallist",
"repository": {
"type": "git",
"url": "git+https://github.com/isaacs/yallist.git"
},
"scripts": {
"postpublish": "git push origin --all; git push origin --tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap test/*.js --100"
},
"version": "3.1.1"
}
@@ -0,0 +1,426 @@
'use strict'
module.exports = Yallist
Yallist.Node = Node
Yallist.create = Yallist
function Yallist (list) {
var self = this
if (!(self instanceof Yallist)) {
self = new Yallist()
}
self.tail = null
self.head = null
self.length = 0
if (list && typeof list.forEach === 'function') {
list.forEach(function (item) {
self.push(item)
})
} else if (arguments.length > 0) {
for (var i = 0, l = arguments.length; i < l; i++) {
self.push(arguments[i])
}
}
return self
}
Yallist.prototype.removeNode = function (node) {
if (node.list !== this) {
throw new Error('removing node which does not belong to this list')
}
var next = node.next
var prev = node.prev
if (next) {
next.prev = prev
}
if (prev) {
prev.next = next
}
if (node === this.head) {
this.head = next
}
if (node === this.tail) {
this.tail = prev
}
node.list.length--
node.next = null
node.prev = null
node.list = null
return next
}
Yallist.prototype.unshiftNode = function (node) {
if (node === this.head) {
return
}
if (node.list) {
node.list.removeNode(node)
}
var head = this.head
node.list = this
node.next = head
if (head) {
head.prev = node
}
this.head = node
if (!this.tail) {
this.tail = node
}
this.length++
}
Yallist.prototype.pushNode = function (node) {
if (node === this.tail) {
return
}
if (node.list) {
node.list.removeNode(node)
}
var tail = this.tail
node.list = this
node.prev = tail
if (tail) {
tail.next = node
}
this.tail = node
if (!this.head) {
this.head = node
}
this.length++
}
Yallist.prototype.push = function () {
for (var i = 0, l = arguments.length; i < l; i++) {
push(this, arguments[i])
}
return this.length
}
Yallist.prototype.unshift = function () {
for (var i = 0, l = arguments.length; i < l; i++) {
unshift(this, arguments[i])
}
return this.length
}
Yallist.prototype.pop = function () {
if (!this.tail) {
return undefined
}
var res = this.tail.value
this.tail = this.tail.prev
if (this.tail) {
this.tail.next = null
} else {
this.head = null
}
this.length--
return res
}
Yallist.prototype.shift = function () {
if (!this.head) {
return undefined
}
var res = this.head.value
this.head = this.head.next
if (this.head) {
this.head.prev = null
} else {
this.tail = null
}
this.length--
return res
}
Yallist.prototype.forEach = function (fn, thisp) {
thisp = thisp || this
for (var walker = this.head, i = 0; walker !== null; i++) {
fn.call(thisp, walker.value, i, this)
walker = walker.next
}
}
Yallist.prototype.forEachReverse = function (fn, thisp) {
thisp = thisp || this
for (var walker = this.tail, i = this.length - 1; walker !== null; i--) {
fn.call(thisp, walker.value, i, this)
walker = walker.prev
}
}
Yallist.prototype.get = function (n) {
for (var i = 0, walker = this.head; walker !== null && i < n; i++) {
// abort out of the list early if we hit a cycle
walker = walker.next
}
if (i === n && walker !== null) {
return walker.value
}
}
Yallist.prototype.getReverse = function (n) {
for (var i = 0, walker = this.tail; walker !== null && i < n; i++) {
// abort out of the list early if we hit a cycle
walker = walker.prev
}
if (i === n && walker !== null) {
return walker.value
}
}
Yallist.prototype.map = function (fn, thisp) {
thisp = thisp || this
var res = new Yallist()
for (var walker = this.head; walker !== null;) {
res.push(fn.call(thisp, walker.value, this))
walker = walker.next
}
return res
}
Yallist.prototype.mapReverse = function (fn, thisp) {
thisp = thisp || this
var res = new Yallist()
for (var walker = this.tail; walker !== null;) {
res.push(fn.call(thisp, walker.value, this))
walker = walker.prev
}
return res
}
Yallist.prototype.reduce = function (fn, initial) {
var acc
var walker = this.head
if (arguments.length > 1) {
acc = initial
} else if (this.head) {
walker = this.head.next
acc = this.head.value
} else {
throw new TypeError('Reduce of empty list with no initial value')
}
for (var i = 0; walker !== null; i++) {
acc = fn(acc, walker.value, i)
walker = walker.next
}
return acc
}
Yallist.prototype.reduceReverse = function (fn, initial) {
var acc
var walker = this.tail
if (arguments.length > 1) {
acc = initial
} else if (this.tail) {
walker = this.tail.prev
acc = this.tail.value
} else {
throw new TypeError('Reduce of empty list with no initial value')
}
for (var i = this.length - 1; walker !== null; i--) {
acc = fn(acc, walker.value, i)
walker = walker.prev
}
return acc
}
Yallist.prototype.toArray = function () {
var arr = new Array(this.length)
for (var i = 0, walker = this.head; walker !== null; i++) {
arr[i] = walker.value
walker = walker.next
}
return arr
}
Yallist.prototype.toArrayReverse = function () {
var arr = new Array(this.length)
for (var i = 0, walker = this.tail; walker !== null; i++) {
arr[i] = walker.value
walker = walker.prev
}
return arr
}
Yallist.prototype.slice = function (from, to) {
to = to || this.length
if (to < 0) {
to += this.length
}
from = from || 0
if (from < 0) {
from += this.length
}
var ret = new Yallist()
if (to < from || to < 0) {
return ret
}
if (from < 0) {
from = 0
}
if (to > this.length) {
to = this.length
}
for (var i = 0, walker = this.head; walker !== null && i < from; i++) {
walker = walker.next
}
for (; walker !== null && i < to; i++, walker = walker.next) {
ret.push(walker.value)
}
return ret
}
Yallist.prototype.sliceReverse = function (from, to) {
to = to || this.length
if (to < 0) {
to += this.length
}
from = from || 0
if (from < 0) {
from += this.length
}
var ret = new Yallist()
if (to < from || to < 0) {
return ret
}
if (from < 0) {
from = 0
}
if (to > this.length) {
to = this.length
}
for (var i = this.length, walker = this.tail; walker !== null && i > to; i--) {
walker = walker.prev
}
for (; walker !== null && i > from; i--, walker = walker.prev) {
ret.push(walker.value)
}
return ret
}
Yallist.prototype.splice = function (start, deleteCount /*, ...nodes */) {
if (start > this.length) {
start = this.length - 1
}
if (start < 0) {
start = this.length + start
}
for (var i = 0, walker = this.head; walker !== null && i < start; i++) {
walker = walker.next
}
var ret = []
for (var i = 0; walker && i < deleteCount; i++) {
ret.push(walker.value)
walker = this.removeNode(walker)
}
if (walker === null) {
walker = this.tail
}
if (walker !== this.head && walker !== this.tail) {
walker = walker.prev
}
for (var i = 2; i < arguments.length; i++) {
walker = insert(this, walker, arguments[i])
}
return ret
}
Yallist.prototype.reverse = function () {
var head = this.head
var tail = this.tail
// swap each node's prev/next; after the swap, .prev holds the old .next,
// so stepping via .prev walks the list in its original forward direction
for (var walker = head; walker !== null; walker = walker.prev) {
var p = walker.prev
walker.prev = walker.next
walker.next = p
}
this.head = tail
this.tail = head
return this
}
// splice helper: inserts `value` before `node` when node is the head,
// otherwise immediately after `node`
function insert (self, node, value) {
var inserted = node === self.head ?
new Node(value, null, node, self) :
new Node(value, node, node.next, self)
if (inserted.next === null) {
self.tail = inserted
}
if (inserted.prev === null) {
self.head = inserted
}
self.length++
return inserted
}
function push (self, item) {
self.tail = new Node(item, self.tail, null, self)
if (!self.head) {
self.head = self.tail
}
self.length++
}
function unshift (self, item) {
self.head = new Node(item, null, self.head, self)
if (!self.tail) {
self.tail = self.head
}
self.length++
}
function Node (value, prev, next, list) {
if (!(this instanceof Node)) {
return new Node(value, prev, next, list)
}
this.list = list
this.value = value
if (prev) {
prev.next = this
this.prev = prev
} else {
this.prev = null
}
if (next) {
next.prev = this
this.next = next
} else {
this.next = null
}
}
try {
// add if support for Symbol.iterator is present
require('./iterator.js')(Yallist)
} catch (er) {}
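
To tie the methods above together, a minimal usage sketch (values are illustrative; assumes the file is resolvable as shown):

'use strict'
var Yallist = require('./yallist.js')

var list = new Yallist(1, 2, 3, 4) // the constructor also accepts plain varargs

console.log(list.map(function (v) { return v * 2 }).toArray())    // [ 2, 4, 6, 8 ]
console.log(list.reduce(function (acc, v) { return acc + v }, 0)) // 10
console.log(list.slice(1, 3).toArray())                           // [ 2, 3 ]
console.log(list.reverse().toArray())                             // [ 4, 3, 2, 1 ] (reverse mutates in place)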

93
node_modules/node-pre-gyp/package.json generated vendored Normal file

@@ -0,0 +1,93 @@
{
"_from": "node-pre-gyp@^0.17.0",
"_id": "node-pre-gyp@0.17.0",
"_inBundle": false,
"_integrity": "sha512-abzZt1hmOjkZez29ppg+5gGqdPLUuJeAEwVPtHYEJgx0qzttCbcKFpxrCQn2HYbwCv2c+7JwH4BgEzFkUGpn4A==",
"_location": "/node-pre-gyp",
"_phantomChildren": {
"abbrev": "1.1.1",
"glob": "7.1.6",
"mkdirp": "0.5.5",
"osenv": "0.1.5",
"safe-buffer": "5.1.2"
},
"_requested": {
"type": "range",
"registry": true,
"raw": "node-pre-gyp@^0.17.0",
"name": "node-pre-gyp",
"escapedName": "node-pre-gyp",
"rawSpec": "^0.17.0",
"saveSpec": null,
"fetchSpec": "^0.17.0"
},
"_requiredBy": [
"#USER",
"/"
],
"_resolved": "https://registry.npmjs.org/node-pre-gyp/-/node-pre-gyp-0.17.0.tgz",
"_shasum": "5af3f7b4c3848b5ed00edc3d298ff836daae5f1d",
"_spec": "node-pre-gyp@^0.17.0",
"_where": "C:\\Users\\MCGFX\\OneDrive\\Dokumente\\GitHub\\UnknownBot",
"author": {
"name": "Dane Springmeyer",
"email": "dane@mapbox.com"
},
"bin": {
"node-pre-gyp": "bin/node-pre-gyp"
},
"bugs": {
"url": "https://github.com/mapbox/node-pre-gyp/issues"
},
"bundleDependencies": false,
"dependencies": {
"detect-libc": "^1.0.3",
"mkdirp": "^0.5.5",
"needle": "^2.5.2",
"nopt": "^4.0.3",
"npm-packlist": "^1.4.8",
"npmlog": "^4.1.2",
"rc": "^1.2.8",
"rimraf": "^2.7.1",
"semver": "^5.7.1",
"tar": "^4.4.13"
},
"deprecated": "Please upgrade to @mapbox/node-pre-gyp: the non-scoped node-pre-gyp package is deprecated and only the @mapbox scoped package will recieve updates in the future",
"description": "Node.js native addon binary install tool",
"devDependencies": {
"aws-sdk": "^2.28.0",
"jshint": "^2.9.5",
"nock": "^9.2.3",
"tape": "^4.6.3"
},
"homepage": "https://github.com/mapbox/node-pre-gyp#readme",
"jshintConfig": {
"node": true,
"globalstrict": true,
"undef": true,
"unused": false,
"noarg": true
},
"keywords": [
"native",
"addon",
"module",
"c",
"c++",
"bindings",
"binary"
],
"license": "BSD-3-Clause",
"main": "./lib/node-pre-gyp.js",
"name": "node-pre-gyp",
"repository": {
"type": "git",
"url": "git://github.com/mapbox/node-pre-gyp.git"
},
"scripts": {
"pretest": "jshint test/build.test.js test/s3_setup.test.js test/versioning.test.js test/fetch.test.js lib lib/util scripts bin/node-pre-gyp",
"test": "jshint lib lib/util scripts bin/node-pre-gyp && tape test/*test.js",
"update-crosswalk": "node scripts/abi_crosswalk.js"
},
"version": "0.17.0"
}
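
For context, the `main` entry above exports the programmatic API that dependent native modules use to locate their prebuilt binary. A minimal sketch along the lines of the node-pre-gyp README (the calling package needs a `binary` section in its own package.json; `bindingPath` and `binding` are illustrative names):

'use strict'
var binary = require('node-pre-gyp')
var path = require('path')

// find() reads the "binary" config from the given package.json and returns
// the path where the compiled .node addon was (or would be) installed.
var bindingPath = binary.find(path.resolve(path.join(__dirname, './package.json')))
var binding = require(bindingPath)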