XZ is a container format for compressed archives. It ranks among the best compressors out there according to several benchmarks:
- Gzip vs Bzip2 vs LZMA vs XZ vs LZ4 vs LZO
- Large Text Compression Benchmark
- Linux Compression Comparison (GZIP vs BZIP2 vs LZMA vs ZIP vs Compress)

It offers a good balance between compression time/ratio and decompression time/memory.
This project aims to provide:

- A quick and easy way to play with XZ compression: quick and easy because it conforms to the zlib API, so switching from zlib/deflate to xz might be as easy as a string search/replace in your code editor 😄
- Complete integration with XZ sources/binaries: you can either use system packages or download a specific version and compile it! See installation below.
Only LZMA2 is supported for compression output, but the library can open and read any LZMA1- or LZMA2-compressed file.
This major release brings the library into 2025 with modern tooling and TypeScript support:
- Full TypeScript migration: Complete rewrite from CoffeeScript to TypeScript for better type safety and developer experience
- Promise-based APIs: New async functions `xzAsync()` and `unxzAsync()` with Promise support
- Modern testing: Migrated from Mocha to Vitest with improved performance and better TypeScript integration
- Enhanced tooling:
- Biome for fast linting and formatting
- Pre-commit hooks with nano-staged and simple-git-hooks
- pnpm as package manager for better dependency management
- Updated Node.js support: Requires Node.js >= 16 (updated from >= 12)
Since previous versions, N-API has become the de facto standard for providing a stable ABI for Node.js native modules, replacing nan.
It has been tested and works on:
- Linux x64 (Ubuntu)
- macOS (`macos-11`)
- Raspberry Pi 2/3/4 (both 32-bit and 64-bit architectures)
- Windows (`windows-2019` and `windows-2022` are part of GitHub CI)
Notes:
- On Windows, there is no "global" installation of the LZMA library on the machines provisioned by GitHub, so it is pointless to build with that configuration
Several prebuilt versions are bundled within the package.
- Windows x86_64
- Linux x86_64
- MacOS x86_64 / Arm64
If your OS/architecture matches one of these, that prebuilt version will be used. It has been compiled using the following default flags:

| Flag | Description | Default value | Possible values |
|---|---|---|---|
| USE_GLOBAL | Should the library use the system-provided DLL/.so library? | yes (no if OS is Windows) | yes or no |
| RUNTIME_LINK | Should the library be linked statically or use the shared LZMA library? | shared | static or shared |
| ENABLE_THREAD_SUPPORT | Does the LZMA library support threads? | yes | yes or no |
If not, node-gyp will automatically start compiling according to the environment variables set, or the default values above.
If you want to change compilation flags, please read on here.
Thanks to the community, there are several choices out there:
- lzma-purejs: a pure JavaScript implementation of the algorithm
- node-xz: Node bindings for the XZ library
- lzma-native: a very complete implementation of XZ library bindings
- Others are also available, but they spawn an `xz` process in the background.
```js
// CommonJS
const lzma = require('node-liblzma');

// TypeScript / ES modules
import * as lzma from 'node-liblzma';
```

| Zlib | XZlib | Arguments |
|---|---|---|
| createGzip | createXz | ([lzma_options, [options]]) |
| createGunzip | createUnxz | ([lzma_options, [options]]) |
| gzip | xz | (buf, [options], callback) |
| gunzip | unxz | (buf, [options], callback) |
| gzipSync | xzSync | (buf, [options]) |
| gunzipSync | unxzSync | (buf, [options]) |
| - | xzAsync | (buf, [options]) ⇒ Promise\<Buffer> |
| - | unxzAsync | (buf, [options]) ⇒ Promise\<Buffer> |
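Because the API mirrors zlib's, migration is mostly a rename. The sketch below illustrates this with a codec-parameter pattern (the `SyncCodec` type and `roundTrip` helper are illustrative, not part of the library): the same helper can drive Node's built-in zlib or, thanks to the matching signatures, node-liblzma's `xzSync`/`unxzSync`.

```typescript
import { gzipSync, gunzipSync } from 'node:zlib';

// A codec is any pair of synchronous compress/decompress functions.
// zlib's gzipSync/gunzipSync fit, and so would node-liblzma's
// xzSync/unxzSync, since their signatures match (buf, [options]).
type SyncCodec = {
  compress: (buf: Buffer) => Buffer;
  decompress: (buf: Buffer) => Buffer;
};

// Compress then decompress and check we got the original data back.
function roundTrip(codec: SyncCodec, data: Buffer): boolean {
  return codec.decompress(codec.compress(data)).equals(data);
}

const gzipCodec: SyncCodec = { compress: gzipSync, decompress: gunzipSync };
// With node-liblzma this would be:
//   const xzCodec: SyncCodec = { compress: xzSync, decompress: unxzSync };

console.log(roundTrip(gzipCodec, Buffer.from('hello xz'))); // prints "true"
```

Swapping the backend is then a one-line change, which is the point of the zlib-compatible API.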
`options` is an Object with the following possible attributes:

| Attribute | Type | Available options |
|---|---|---|
| check | Uint32 | NONE, CRC32, CRC64, SHA256 |
| preset | Uint32 | DEFAULT, EXTREME |
| flag | Uint32 | TELL_NO_CHECK, TELL_UNSUPPORTED_CHECK, TELL_ANY_CHECK, CONCATENATED |
| mode | Uint32 | FAST, NORMAL |
| filters | Array | LZMA2 (added by default), X86, POWERPC, IA64, ARM, ARMTHUMB, SPARC |
For further information about each of these flags, see the XZ SDK reference.
The library supports multi-threaded compression when built with ENABLE_THREAD_SUPPORT=yes (default). Thread support allows parallel compression on multi-core systems, significantly improving performance for large files.
Using threads in compression:
```ts
import * as lzma from 'node-liblzma';

// Specify the number of threads (1..N, where N is the CPU core count)
const options = {
  preset: lzma.preset.DEFAULT,
  threads: 4, // use 4 threads for compression
};

// With buffer compression
lzma.xz(buffer, options, (err, compressed) => {
  // ...
});

// With streams
const compressor = lzma.createXz(options);
inputStream.pipe(compressor).pipe(outputStream);
```

Important notes:
- Thread support only applies to compression, not decompression
- Requires LZMA library built with pthread support
- `threads: 1` disables multi-threading (falls back to the single-threaded encoder)
- Check if threads are available:

```ts
import { hasThreads } from 'node-liblzma';
```
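One way to combine `hasThreads` with the CPU count is sketched below. The `pickThreads` helper and its cap of 4 threads are our own illustrative heuristic, not a library recommendation:

```typescript
import * as os from 'node:os';

// Sketch: choose a thread count given whether the binding reports
// thread support (e.g. hasThreads from node-liblzma) and the CPU count.
// The heuristic (cap at 4, fall back to 1 when unsupported) is an
// assumption made for this example.
function pickThreads(threadsAvailable: boolean, cpuCount: number): number {
  if (!threadsAvailable) return 1; // single-threaded encoder
  return Math.max(1, Math.min(4, cpuCount));
}

console.log(pickThreads(true, os.cpus().length));
```

You could then build options as `{ threads: pickThreads(hasThreads, os.cpus().length) }` and pass them to `xz()` or `createXz()`.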
For optimal performance, the library uses configurable chunk sizes:
```ts
import * as lzma from 'node-liblzma';

const stream = lzma.createXz({
  preset: lzma.preset.DEFAULT,
  chunkSize: 256 * 1024, // 256KB chunks (default: 64KB)
});
```

Recommendations:
- Small files (< 1MB): Use default 64KB chunks
- Medium files (1-10MB): Use 128-256KB chunks
- Large files (> 10MB): Use 512KB-1MB chunks
- Maximum buffer size: 512MB per operation (security limit)
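The recommendations above can be captured in a small helper. `recommendChunkSize` is a hypothetical function written for this example, not part of the library; the thresholds simply mirror the list:

```typescript
// Map an expected input size to a chunkSize following the
// recommendations above: default 64KB for small files, 256KB for
// medium files, 1MB for large files. Illustrative helper only.
function recommendChunkSize(bytes: number): number {
  const MB = 1024 * 1024;
  if (bytes < 1 * MB) return 64 * 1024;    // small files: default 64KB
  if (bytes <= 10 * MB) return 256 * 1024; // medium files: 128-256KB
  return 1 * MB;                           // large files: 512KB-1MB
}

console.log(recommendChunkSize(5 * 1024 * 1024)); // prints 262144
```

The result would then be passed as the `chunkSize` option to `createXz()`.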
The library enforces a 512MB maximum buffer size to prevent DoS attacks via resource exhaustion. For files larger than 512MB, use streaming APIs:
```ts
import { createReadStream, createWriteStream } from 'fs';
import { createXz } from 'node-liblzma';

createReadStream('large-file.bin')
  .pipe(createXz())
  .pipe(createWriteStream('large-file.xz'));
```

The library provides typed error classes for better error handling:
```ts
import {
  xzAsync,
  LZMAError,
  LZMAMemoryError,
  LZMADataError,
  LZMAFormatError,
} from 'node-liblzma';

try {
  const compressed = await xzAsync(buffer);
} catch (error) {
  if (error instanceof LZMAMemoryError) {
    console.error('Out of memory:', error.message);
  } else if (error instanceof LZMADataError) {
    console.error('Corrupt data:', error.message);
  } else if (error instanceof LZMAFormatError) {
    console.error('Invalid format:', error.message);
  } else {
    console.error('Unknown error:', error);
  }
}
```

Available error classes:

- `LZMAError` - Base error class
- `LZMAMemoryError` - Memory allocation failed
- `LZMAMemoryLimitError` - Memory limit exceeded
- `LZMAFormatError` - Unrecognized file format
- `LZMAOptionsError` - Invalid compression options
- `LZMADataError` - Corrupt compressed data
- `LZMABufferError` - Buffer size issues
- `LZMAProgrammingError` - Internal errors
Streams automatically handle recoverable errors and provide state transition hooks:
```ts
const decompressor = createUnxz();

decompressor.on('error', (error) => {
  console.error('Decompression error:', error.errno, error.message);
  // Stream will emit 'close' event after error
});

decompressor.on('close', () => {
  console.log('Stream closed, safe to cleanup');
});
```

For production environments with high concurrency needs, use LZMAPool to limit simultaneous operations:
```ts
import { LZMAPool } from 'node-liblzma';

const pool = new LZMAPool(10); // Max 10 concurrent operations

// Monitor pool metrics
pool.on('metrics', (metrics) => {
  console.log(`Active: ${metrics.active}, Queued: ${metrics.queued}`);
  console.log(`Completed: ${metrics.completed}, Failed: ${metrics.failed}`);
});

// Compress with automatic queuing
const compressed = await pool.compress(buffer);
const decompressed = await pool.decompress(compressed);

// Get current metrics
const status = pool.getMetrics();
```

Pool events:

- `queue` - Task added to queue
- `start` - Task started processing
- `complete` - Task completed successfully
- `error-task` - Task failed
- `metrics` - Metrics updated (after each state change)
Benefits:
- ✅ Automatic backpressure
- ✅ Prevents resource exhaustion
- ✅ Production-ready monitoring
- ✅ Zero breaking changes (opt-in)
Simplified API for file-based compression:
```ts
import { xzFile, unxzFile } from 'node-liblzma';

// Compress a file
await xzFile('input.txt', 'output.txt.xz');

// Decompress a file
await unxzFile('output.txt.xz', 'restored.txt');

// With options
await xzFile('large-file.bin', 'compressed.xz', {
  preset: 9,
  threads: 4,
});
```

Advantages over buffer APIs:
- ✅ Handles files > 512MB automatically
- ✅ Built-in backpressure via streams
- ✅ Lower memory footprint
- ✅ Simpler API for common use cases
The low-level native callback used internally by streams follows an errno-style contract to match liblzma behavior and to avoid mixing exception channels:
- Signature: `(errno: number, availInAfter: number, availOutAfter: number)`
- Success: `errno` is either `LZMA_OK` or `LZMA_STREAM_END`.
- Recoverable/other conditions: any other `errno` value (for example, `LZMA_BUF_ERROR`, `LZMA_DATA_ERROR`, `LZMA_PROG_ERROR`) indicates an error state.
- Streams emit `error` with the numeric `errno` when `errno !== LZMA_OK && errno !== LZMA_STREAM_END`.
Why errno instead of JS exceptions?
- The binding mirrors liblzma’s status codes and keeps a single error channel that’s easy to reason about in tight processing loops.
- This avoids throwing across async worker boundaries and keeps cleanup deterministic.
High-level APIs remain ergonomic:
- Promise-based functions `xzAsync()`/`unxzAsync()` still resolve to `Buffer` or reject with `Error` as expected.
- Stream users can listen to `error` events, where we map `errno` to a human-friendly message (`messages[errno]`).
If you prefer Node’s error-first callbacks, you can wrap the APIs and translate errno to Error objects at your boundaries without changing the native layer.
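Such a boundary wrapper might look like the sketch below. The numeric constants follow liblzma's `lzma_ret` enum (`LZMA_OK = 0`, `LZMA_STREAM_END = 1`, `LZMA_DATA_ERROR = 9`), but verify them against the constants your binding actually exports; the `messages` table here is a deliberately tiny illustration, not the library's real mapping:

```typescript
// Hedged sketch: translate an errno-style result into either null (no
// error) or an Error object, so callers can use familiar error-first
// patterns at their own boundaries.
const LZMA_OK = 0;         // assumed value, per liblzma's lzma_ret
const LZMA_STREAM_END = 1; // assumed value, per liblzma's lzma_ret

// Minimal errno -> message table for illustration; a real one would
// cover every status code the binding can report.
const messages: Record<number, string> = {
  5: 'Memory allocation failed',
  9: 'Data is corrupt',
  10: 'No progress is possible',
};

function errnoToError(errno: number): Error | null {
  if (errno === LZMA_OK || errno === LZMA_STREAM_END) return null;
  return new Error(messages[errno] ?? `LZMA error ${errno}`);
}

console.log(errnoToError(LZMA_OK));    // prints null
console.log(errnoToError(9)?.message); // prints "Data is corrupt"
```

A callback adapter would then simply call `errnoToError(errno)` and invoke the user's callback with the result in the error slot.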
Well, as simple as this one-liner:
```sh
npm i node-liblzma --save
```

--OR--

```sh
yarn add node-liblzma
```

--OR-- (recommended for development)

```sh
pnpm add node-liblzma
```

If you want to recompile the source, for example to disable thread support in the module, you have to opt out with:

```sh
ENABLE_THREAD_SUPPORT=no npm install node-liblzma --build-from-source
```

Note: Enabling thread support in the module will NOT work if the LZMA library itself has been built without such support.
To build the module, you have the following options:
- Using system development libraries
- Asking the build system to download `xz` and build it
- Compiling `xz` yourself, outside `node-liblzma`, and pointing the module at it afterwards
You need to have the development package installed on your system. On a Debian-based distro:

```sh
sudo apt-get install liblzma-dev
```
If you do not plan on having a local install, you can ask for automatic download and build of whatever version of xz you want.
Just do:
```sh
npm install node-liblzma --build-from-source
```

When no option is given on the command line, it will build with the default values.
So you did install xz somewhere outside the module and want the module to use it.
For that, you need to set the include directory and library directory search paths as GCC environment variables.
```sh
export CPATH=$HOME/path/to/headers
export LIBRARY_PATH=$HOME/path/to/lib
export LD_LIBRARY_PATH=$HOME/path/to/lib:$LD_LIBRARY_PATH
```

The last one is needed so the tests can run right afterwards.
Once done, this should suffice:
```sh
npm install
```

This project maintains 100% code coverage across all statements, branches, functions, and lines.
You can run tests with:
```sh
npm test
# or
pnpm test
```

It will build and launch the test suite (51 tests) with Vitest, with TypeScript support and coverage reporting.
Additional testing commands:
```sh
# Watch mode for development
pnpm test:watch

# Coverage report
pnpm test:coverage

# Type checking
pnpm type-check
```

As the API is very close to Node.js Zlib, you will probably find a good reference there.
Otherwise examples can be found as part of the test suite, so feel free to use them! They are written in TypeScript with full type definitions.
Version 2.0 introduces several breaking changes along with powerful new features.
- Node.js Version Requirement
  - Now requires Node.js >= 16 (previously >= 12)

- ESM Module Format
  - Before (CommonJS): `var lzma = require('node-liblzma');`
  - After (ESM): `import * as lzma from 'node-liblzma';`
  - CommonJS still works via dynamic import

- TypeScript Migration
  - Source code migrated from CoffeeScript to TypeScript
  - Full type definitions included
  - Better IDE autocomplete and type safety
- Promise-based APIs (recommended for new code):

  ```ts
  // Old callback style (still works)
  xz(buffer, (err, compressed) => {
    if (err) throw err;
    // use compressed
  });

  // New Promise style
  try {
    const compressed = await xzAsync(buffer);
    // use compressed
  } catch (err) {
    // handle error
  }
  ```
- Typed Error Classes (better error handling):

  ```ts
  import { LZMAMemoryError, LZMADataError } from 'node-liblzma';

  try {
    await unxzAsync(corruptData);
  } catch (error) {
    if (error instanceof LZMADataError) {
      console.error('Corrupt compressed data');
    } else if (error instanceof LZMAMemoryError) {
      console.error('Out of memory');
    }
  }
  ```
- Concurrency Control (for high-throughput applications):

  ```ts
  import { LZMAPool } from 'node-liblzma';

  const pool = new LZMAPool(10); // Max 10 concurrent operations

  // Automatic queuing and backpressure
  const results = await Promise.all(
    files.map((file) => pool.compress(file))
  );
  ```
- File Helpers (simpler file compression):

  ```ts
  import { xzFile, unxzFile } from 'node-liblzma';

  // Compress a file (handles streaming automatically)
  await xzFile('input.txt', 'output.txt.xz');

  // Decompress a file
  await unxzFile('output.txt.xz', 'restored.txt');
  ```
If you maintain tests for code using node-liblzma:
- Before: Mocha test framework
- After: Vitest test framework (faster, better TypeScript support)

Development tooling has been modernized:
- Linter: Biome (replaces ESLint + Prettier)
- Package Manager: pnpm recommended (npm/yarn still work)
- Pre-commit Hooks: nano-staged + simple-git-hooks
Solution: Install the system development package or let node-gyp download it:

```sh
# Debian/Ubuntu
sudo apt-get install liblzma-dev

# macOS
brew install xz

# Windows (let node-gyp download and build)
npm install node-liblzma --build-from-source
```

Symptoms: Build fails with C++ compilation errors
Solutions:
- Install build tools:

  ```sh
  # Ubuntu/Debian
  sudo apt-get install build-essential python3

  # macOS (install Xcode Command Line Tools)
  xcode-select --install

  # Windows
  npm install --global windows-build-tools
  ```
- Clear build cache and retry:

  ```sh
  rm -rf build node_modules
  npm install
  ```
Solution: Your platform might not have prebuilt binaries. Build from source:

```sh
npm install node-liblzma --build-from-source
```

Causes:
- Input buffer exceeds 512MB limit (security protection)
- System out of memory
- Trying to decompress extremely large archive
Solutions:
- For files > 512MB, use streaming APIs:

  ```ts
  import { createReadStream, createWriteStream } from 'fs';
  import { createXz } from 'node-liblzma';

  createReadStream('large-file.bin')
    .pipe(createXz())
    .pipe(createWriteStream('large-file.xz'));
  ```
- Or use file helpers (they automatically handle large files):

  ```ts
  await xzFile('large-file.bin', 'large-file.xz');
  ```
Symptoms: Decompression fails with LZMADataError
Causes:
- File is not actually XZ/LZMA compressed
- File is corrupted or incomplete
- Wrong file format (LZMA1 vs LZMA2)
Solutions:
- Verify the file format:

  ```sh
  file compressed.xz
  # Should show: "XZ compressed data"
  ```

- Check file integrity:

  ```sh
  xz -t compressed.xz
  ```
- Handle errors gracefully:

  ```ts
  try {
    const data = await unxzAsync(buffer);
  } catch (error) {
    if (error instanceof LZMADataError) {
      console.error('Invalid or corrupt XZ file');
    }
  }
  ```
Symptoms: Compiler warnings about -Wmissing-field-initializers
Status: This is normal and does not affect functionality. Thread support still works correctly.
Disable thread support (if the warnings are problematic):

```sh
ENABLE_THREAD_SUPPORT=no npm install node-liblzma --build-from-source
```

Solution: Enable multi-threaded compression:
```ts
import { xz } from 'node-liblzma';

xz(buffer, { threads: 4 }, (err, compressed) => {
  // 4 threads used for compression
});
```

Note: Threads only apply to compression, not decompression.
Solution: Use LZMAPool to limit concurrency:
```ts
import { LZMAPool } from 'node-liblzma';

const pool = new LZMAPool(5); // Limit to 5 concurrent operations

// Pool automatically queues excess operations
const results = await Promise.all(
  largeArray.map((item) => pool.compress(item))
);
```

Solutions:
- Install Visual Studio Build Tools:

  ```sh
  npm install --global windows-build-tools
  ```

- Use the correct Python version:

  ```sh
  npm config set python python3
  ```

- Let the build system download XZ automatically:

  ```sh
  npm install node-liblzma --build-from-source
  ```
Cause: Path separator issues in Windows
Solution: Use forward slashes or `path.join()`:

```ts
import { join } from 'path';

await xzFile(join('data', 'input.txt'), join('data', 'output.xz'));
```

We welcome contributions! Here's how to get started.
- Clone the repository:

  ```sh
  git clone https://github.com/oorabona/node-liblzma.git
  cd node-liblzma
  ```

- Install dependencies (pnpm recommended):

  ```sh
  pnpm install
  # or
  npm install
  ```

- Build the project:

  ```sh
  pnpm build
  ```

- Run tests:

  ```sh
  pnpm test
  ```
```sh
# Run all tests
pnpm test

# Watch mode (re-run on changes)
pnpm test:watch

# Coverage report
pnpm test:coverage

# Interactive UI
pnpm test:ui
```

We use Biome for linting and formatting:
```sh
# Check code style
pnpm check

# Auto-fix issues
pnpm check:write

# Lint only
pnpm lint

# Format only
pnpm format:write

# Type checking
pnpm type-check
```

- Linter: Biome (configured in `biome.json`)
- Formatting: Biome handles both linting and formatting
- Pre-commit hooks: Automatically run via nano-staged + simple-git-hooks
- TypeScript: Strict mode enabled
We follow Conventional Commits:
```
<type>(<scope>): <description>

[optional body]

[optional footer]
```
Types:
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `refactor`: Code refactoring
- `test`: Test changes
- `chore`: Build/tooling changes
- `perf`: Performance improvements
Examples:
```sh
git commit -m "feat(pool): add LZMAPool for concurrency control"
git commit -m "fix(bindings): resolve memory leak in FunctionReference"
git commit -m "docs(readme): add migration guide for v2.0"
```

- Fork the repository and create a feature branch:

  ```sh
  git checkout -b feat/my-new-feature
  ```

- Make your changes following the code style guidelines

- Add tests for new functionality:
  - All new code must have 100% test coverage
  - Tests go in the `test/` directory
  - Use the Vitest testing framework

- Ensure all checks pass:

  ```sh
  pnpm check:write  # Fix code style
  pnpm type-check   # Verify TypeScript types
  pnpm test         # Run the test suite
  ```

- Commit with conventional commits:

  ```sh
  git add .
  git commit -m "feat: add new feature"
  ```

- Push and create a Pull Request:

  ```sh
  git push origin feat/my-new-feature
  ```

- Wait for CI checks to pass (GitHub Actions will run automatically)
- Coverage: Maintain 100% code coverage (statements, branches, functions, lines)
- Test files: Name tests `*.test.ts` in the `test/` directory
- Structure: Use `describe` and `it` blocks with clear descriptions
- Assertions: Use Vitest's `expect()` API
Example test:
```ts
import { describe, it, expect } from 'vitest';
import { xzAsync, unxzAsync } from '../src/lzma.js';

describe('Compression', () => {
  it('should compress and decompress data', async () => {
    const original = Buffer.from('test data');
    const compressed = await xzAsync(original);
    const decompressed = await unxzAsync(compressed);
    expect(decompressed.equals(original)).toBe(true);
  });
});
```

Releases are automated using @oorabona/release-it-preset:
```sh
# Standard release (patch/minor/major based on commits)
pnpm release

# Manual changelog editing
pnpm release:manual

# Hotfix release
pnpm release:hotfix

# Update changelog only (no release)
pnpm changelog:update
```

For maintainers only. Contributors should submit PRs; maintainers handle releases.
- Questions: Open a Discussion
- Bugs: Open an Issue
- Security: Email [email protected] (do not open public issues)
By contributing, you agree that your contributions will be licensed under LGPL-3.0+.
If you find a bug, feel free to open a new issue! PRs are accepted as well :)

Kudos go to addaleax for helping me out with the C++ stuff!

If you compile with thread support, you may see a bunch of warnings about `-Wmissing-field-initializers`.
This is normal and does not prevent threading from being active and working.
I have not yet figured out how to fix this other than by masking the warning.

This software is released under LGPL-3.0+.