
The Best Compression Levels for ZIP Files — When to Use Store, Fast, Normal, and Max

2026-04-11 · 7 min read

ZIP tools give you a compression level slider. Most people leave it on the default and never think about it. That default is usually fine — but understanding what the levels do helps you make better choices when the archive is large, the transfer is slow, or the files are already compressed.

Here's what each level actually does, when each one wins, and how to pick without overthinking it.

How DEFLATE Compression Works (The 30-Second Version)

ZIP files use an algorithm called DEFLATE. DEFLATE looks for repeated patterns in your data and replaces them with shorter references. The word "function" appearing 500 times in a JavaScript file doesn't get stored 500 times — it gets stored once, and every subsequent occurrence is replaced with a short pointer back to the first one.

The compression level controls how hard the algorithm works to find these patterns. A higher level means more time spent searching for matches, especially longer and more distant ones. More matches found = smaller output. But the search takes more CPU time.

Decompression speed is the same regardless of what compression level was used. The compressed data includes all the information needed to reconstruct the original — the decompressor just follows the pointers. A file compressed at maximum level extracts just as fast as one compressed at minimum level.
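Both properties are easy to verify with Python's `zlib` module, which wraps the same DEFLATE implementation most ZIP tools use. This minimal sketch (with made-up sample data) compresses the same input at levels 1 and 9, then decompresses both without specifying any level:

```python
import zlib

# Repetitive text gives DEFLATE plenty of patterns to find.
data = b"function handleClick(event) { return event.target; }\n" * 500

fast = zlib.compress(data, level=1)  # minimal match-finding effort
best = zlib.compress(data, level=9)  # exhaustive match-finding

# Higher levels search harder, so the output is the same size or smaller.
assert len(best) <= len(fast) < len(data)

# Decompression needs no level hint: the compressed stream is self-describing.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```

Note that `zlib.decompress` takes no level argument at all; the decompressor just follows whatever pointers the compressor wrote.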

The Four Levels

Store (Level 0) — No Compression

Store mode doesn't compress at all. It packages files into the ZIP container format (so you get a single .zip file with the directory structure preserved) but every byte is stored verbatim. The output is the same size as the input, plus a small overhead for the ZIP headers.

Speed: Essentially instant. The only work is copying bytes and writing headers.

When Store wins:

  • Already-compressed files. JPEG photos, MP4 videos, MP3 audio, PNG images, and other media formats are already compressed by their own codecs. Running DEFLATE over them won't make them smaller — in some cases the output is actually slightly larger than the input because DEFLATE adds overhead without finding any patterns to exploit. Storing these files saves time with zero size penalty.
  • Archives inside archives. If you're creating a ZIP of ZIP files (or a ZIP of .tar.gz files), the inner archives are already compressed. Store mode avoids the pointless double-compression attempt.
  • Speed-critical pipelines. If you're generating archives programmatically as part of a build process or data pipeline and the bottleneck is I/O rather than bandwidth, Store mode eliminates the CPU overhead entirely.
  • Bundling for download. When you want a single downloadable file that preserves folder structure but don't care about size reduction. MakeMyZip's "Extract All" feature uses Store mode internally for this reason — it bundles extracted files into a ZIP for download without wasting time re-compressing them.
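In Python's stdlib `zipfile`, Store mode is `ZIP_STORED`. This sketch (using random bytes to stand in for an already-compressed photo, and a hypothetical archive name) shows that a stored entry is byte-for-byte the size of its input:

```python
import io
import os
import zipfile

# Random bytes simulate an already-compressed file like a JPEG.
photo = os.urandom(50_000)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("photos/img_001.jpg", photo)

# Stored entries are copied verbatim: compressed size equals original size.
info = zipfile.ZipFile(buf).getinfo("photos/img_001.jpg")
assert info.compress_size == info.file_size == len(photo)

# The archive itself is only slightly larger, due to the ZIP headers.
assert buf.getbuffer().nbytes > len(photo)
```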

Fast (Level 1-3) — Quick Compression

Fast mode uses DEFLATE with a small search window and limited match-finding effort. It finds the easy patterns — repeated headers, common strings, obvious redundancy — and moves on quickly.

Speed: Roughly 2-4x slower than Store, but still very fast. On modern hardware, fast mode compresses at several hundred MB/s.

Compression ratio: Typically 40-60% of original size for text-heavy content. For a folder of source code, you might see 50% compression (half the original size). For a mix of text and binary, expect 60-70%.

When Fast wins:

  • Large archives where time matters. If you're compressing 10GB of log files and saving 5 minutes of compression time is worth a few percent more disk space, Fast is the right call.
  • Iterative workflows. If you're creating archives frequently during development (packaging builds, bundling test data), Fast gives you reasonable compression without slowing down your workflow.
  • Mixed content. When an archive contains both compressible (text, source code, CSV) and incompressible (images, video) content, Fast spends less time on the incompressible files while still compressing the compressible ones reasonably well.
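With Python's `zipfile` (3.7+), the level is the `compresslevel` parameter; `1` corresponds to Fast. A small sketch with made-up log data:

```python
import io
import zipfile

# Highly repetitive log lines: easy patterns that Fast mode finds quickly.
logs = ("2026-04-11 12:00:00 INFO request handled in 12ms\n" * 10_000).encode()

buf = io.BytesIO()
# compresslevel maps directly to the DEFLATE level; 1 = Fast.
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED, compresslevel=1) as zf:
    zf.writestr("app.log", logs)

info = zipfile.ZipFile(buf).getinfo("app.log")
print(f"ratio: {info.compress_size / info.file_size:.1%}")
assert info.compress_size < info.file_size
```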

Normal (Level 5-6) — The Default

Normal is the default in most ZIP tools. It uses a moderate search window and balanced match-finding. It finds most of the patterns that exist without exhaustively searching for every possible optimization.

Speed: Roughly 2-3x slower than Fast. Still fast enough that you won't notice on archives under a few hundred MB.

Compression ratio: Typically 30-50% of original size for text-heavy content. The improvement over Fast is usually 5-15 percentage points. On a 100MB folder of text files, you might see 45MB at Fast versus 38MB at Normal.

When Normal wins:

  • The vast majority of situations. Normal is the default because it's the best tradeoff for most workloads. The compression improvement over Fast is meaningful, and the speed cost is rarely noticeable.
  • Sharing via email or upload. When you're creating an archive to send to someone, the compression time is a one-time cost. A few extra seconds of compression saves bandwidth and upload time. Normal is almost always the right choice here.
  • Text-heavy content. Source code, documents, CSVs, XML, JSON, HTML — Normal captures most of the available compression for these file types.

Maximum (Level 9) — Best Compression

Maximum mode uses the largest search window DEFLATE supports and spends far more effort on match-finding, evaluating many more candidate matches for each section of the data before choosing the smallest representation it finds.

Speed: 2-5x slower than Normal. On a 1GB archive, the difference between Normal and Maximum might be 30 seconds versus 2 minutes. On a 100MB archive, it's barely noticeable.

Compression ratio: Typically 1-5% smaller than Normal. On text-heavy content, you might see 36MB versus 38MB on a 100MB input. The improvement is real but modest.

When Maximum wins:

  • Archival storage. If you're creating a ZIP that will be stored for months or years and downloaded many times, the one-time extra compression effort pays off with every download. Source code releases, dataset distributions, and documentation packages are good candidates.
  • Bandwidth-constrained transfers. If you're uploading over a slow connection or paying per-byte for bandwidth, squeezing out that extra 3% matters.
  • Small archives. On archives under 50MB, the speed difference between Normal and Maximum is under a second. Just use Maximum — there's no reason not to.

When Maximum doesn't help:

  • Already-compressed content. DEFLATE level 9 on a JPEG file produces output essentially the same size as level 1 — both find nothing to compress. Don't wait for maximum compression on files that won't compress.
  • Very large archives with time pressure. If you're compressing 50GB and need it done in the next 10 minutes, Normal will finish. Maximum might not.

Real Numbers: What to Expect

Compression ratios vary wildly by content type. Here are realistic ranges based on typical files:

| Content Type                | Store | Fast    | Normal  | Maximum |
|-----------------------------|-------|---------|---------|---------|
| Source code (JS/TS/Python)  | 100%  | 25-35%  | 20-28%  | 19-26%  |
| Plain text / CSV / logs     | 100%  | 20-30%  | 15-25%  | 14-23%  |
| Office docs (DOCX/XLSX)     | 100%  | 95-100% | 94-99%  | 93-98%  |
| JPEG photos                 | 100%  | 99-101% | 99-101% | 99-101% |
| MP4 video                   | 100%  | 100-101%| 100-101%| 100-101%|
| PDF documents               | 100%  | 85-95%  | 80-92%  | 79-91%  |
| Executables / binaries      | 100%  | 50-70%  | 45-65%  | 43-62%  |

(Percentages are output size relative to input. Lower is better. Values over 100% mean the output is larger than the input.)

Key takeaways from this table:

  • DOCX and XLSX are already ZIP files internally. They're pre-compressed. You'll get almost zero additional compression. Same for PPTX, JAR, APK, and other formats that are ZIP containers.
  • JPEG, MP4, MP3, PNG, and other media formats don't compress further. Values over 100% happen because DEFLATE adds a small overhead when it can't find patterns.
  • Source code and plain text compress extremely well. 75-85% size reduction at Normal level is common.
  • The jump from Fast to Normal matters more than Normal to Maximum. If you're going to change from the default, consider whether Fast (not Maximum) is the right direction for your use case.
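You can reproduce the shape of this table yourself with `zlib`. This sketch (with synthetic stand-ins: repeated source-like text for the compressible rows, random bytes for the media rows) measures the ratio at levels 1, 6, and 9:

```python
import os
import zlib

samples = {
    "text-like": b"for (let i = 0; i < rows.length; i++) { render(rows[i]); }\n" * 2000,
    "media-like": os.urandom(120_000),  # random bytes stand in for JPEG/MP4 data
}

for name, data in samples.items():
    ratios = {lvl: len(zlib.compress(data, level=lvl)) / len(data)
              for lvl in (1, 6, 9)}
    print(name, {lvl: f"{r:.0%}" for lvl, r in ratios.items()})

# Text compresses well, and higher levels never do worse...
text = samples["text-like"]
assert len(zlib.compress(text, level=9)) <= len(zlib.compress(text, level=1)) < len(text)

# ...while incompressible data lands at (or just above) 100% at every level.
rand = samples["media-like"]
assert len(zlib.compress(rand, level=9)) >= len(rand)
```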

The Decision Framework

Ask yourself two questions:

  1. Are the files already compressed? (Media, Office docs, other archives) → Use Store. Don't waste CPU cycles.
  2. Am I optimizing for speed or size?
    • Speed → Fast
    • Balanced → Normal (default)
    • Size → Maximum

That's it. For mixed archives (some text, some images), use Normal: DEFLATE compresses the text well, and when it finds no patterns in the already-compressed media it falls back to storing those bytes with minimal size overhead.
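The two questions above can be sketched as a small helper that picks a per-file strategy from the extension. The extension set and the `strategy` function are illustrative, not any tool's actual logic; in Python's `zipfile`, the result plugs into `writestr(name, data, compress_type=..., compresslevel=...)`:

```python
import zipfile

# Hypothetical list of "already compressed" extensions — question 1.
ALREADY_COMPRESSED = {".jpg", ".jpeg", ".png", ".mp4", ".mp3",
                      ".zip", ".gz", ".7z", ".docx", ".xlsx", ".pptx"}

def strategy(filename):
    """Return (compress_type, compresslevel) for one file."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in ALREADY_COMPRESSED:
        return zipfile.ZIP_STORED, None   # already compressed -> Store
    return zipfile.ZIP_DEFLATED, 6        # question 2: balanced -> Normal

assert strategy("photo.JPG") == (zipfile.ZIP_STORED, None)
assert strategy("report.csv") == (zipfile.ZIP_DEFLATED, 6)
```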

A Note on "Ultra" Compression

Some tools offer compression levels beyond 9, or proprietary "ultra" modes. These typically switch from DEFLATE to a different algorithm (LZMA, LZMA2, Zstandard). The compression improvement can be dramatic — 40-50% smaller than DEFLATE maximum — but the output is no longer a standard DEFLATE-compressed ZIP. Some tools can open it; many can't.

If you need the absolute best compression and don't care about compatibility, consider using the 7z format instead of ZIP. LZMA2 in a .7z container gives you better compression than any DEFLATE level, with the tradeoff of requiring 7-Zip on the receiving end. MakeMyZip's ZIP creator sticks to standard DEFLATE for maximum compatibility, but the 7z creator is there when you need the extra compression.
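As an aside, Python's stdlib `zipfile` can actually write LZMA-compressed entries inside a ZIP container (`ZIP_LZMA`, compression method 14) — a concrete instance of the compatibility tradeoff described above, since many unzip tools cannot read such archives. A sketch with made-up sample data:

```python
import io
import zipfile

text = b"SELECT id, name FROM users WHERE active = 1;\n" * 5000

def zip_size(method):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=method) as zf:
        zf.writestr("queries.sql", text)
    return buf.getbuffer().nbytes

deflate_size = zip_size(zipfile.ZIP_DEFLATED)
lzma_size = zip_size(zipfile.ZIP_LZMA)  # method 14: not universally readable
print(deflate_size, lzma_size)

# Both compress this repetitive input far below the original size.
assert deflate_size < len(text)
assert lzma_size < len(text)
```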

Try It Yourself

The best way to understand compression levels is to test them on your actual files. MakeMyZip's ZIP creator lets you drag in a folder, adjust the compression slider, and see the resulting size immediately. The output size and compression ratio display updates after each creation — try the same folder at Store, Fast, Normal, and Maximum to see the real differences for your specific content.

Everything runs in your browser. Your files never leave your machine.