How can MuPDF WebViewer load a large document no problem but have problems exporting it?

This is a common issue with WASM-based PDF viewers, and it comes down to a fundamental difference between how viewing and exporting work:

Why Viewing Works Fine

Streaming & Chunked Loading:
  • WASM PDF viewers (like MuPDF WebViewer) can fetch byte ranges of the document on demand (e.g. via HTTP Range requests) instead of downloading the whole file
  • They only render visible pages, keeping most content in compressed form
  • Memory usage stays relatively low since only decoded page data is in RAM
Optimized Rendering:
  • Pages are rendered on demand and inserted into the page as DOM elements
  • Decoded images/fonts are cached but can be garbage collected
  • The viewer works with compressed PDF streams directly
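
The chunked-loading idea above can be sketched in plain JavaScript. The class and its readRange callback are illustrative, not the actual MuPDF WebViewer internals:

```javascript
// Sketch of on-demand byte-range loading (names are invented for
// illustration). The viewer keeps only the chunks it has touched;
// untouched pages never enter memory.
class ChunkedSource {
  constructor(readRange, chunkSize = 64 * 1024) {
    this.readRange = readRange;   // (offset, length) -> Uint8Array
    this.chunkSize = chunkSize;
    this.cache = new Map();       // chunkIndex -> Uint8Array
  }

  // Return the chunk containing `offset`, fetching it at most once.
  chunkAt(offset) {
    const index = Math.floor(offset / this.chunkSize);
    if (!this.cache.has(index)) {
      this.cache.set(index, this.readRange(index * this.chunkSize, this.chunkSize));
    }
    return this.cache.get(index);
  }

  byteAt(offset) {
    return this.chunkAt(offset)[offset % this.chunkSize];
  }
}

// Over HTTP, readRange would issue a partial request such as:
//   fetch(url, { headers: { Range: `bytes=${offset}-${offset + length - 1}` } })
```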

Why Saving Fails

Memory Explosion:
  • Saving often requires loading the entire PDF into memory at once
  • All pages, images, fonts, and metadata must be accessible simultaneously
  • WASM has limited memory (usually 4GB max, often much less)
  • Large PDFs can easily exceed the available memory, especially when they contain many images, since image data may be held as base64 strings that inflate it by roughly a third
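
To see why base64 matters: base64 encodes every 3 bytes of binary data as 4 ASCII characters, so the string form is about 33% larger than the raw bytes (a quick check, independent of any particular viewer):

```javascript
// Base64 maps each 3-byte group to 4 characters (with '=' padding),
// so holding binary data as a base64 string costs ~33% more than the
// raw bytes -- and a JS string character may itself occupy 2 bytes.
function base64Length(rawBytes) {
  return 4 * Math.ceil(rawBytes / 3);
}

const imageBytes = 30 * 1024 * 1024;          // a 30 MB scanned image
const asBase64 = base64Length(imageBytes);    // ~40 MB of characters
console.log(`raw: ${imageBytes} B, base64: ${asBase64} B, ` +
            `overhead: ${((asBase64 / imageBytes - 1) * 100).toFixed(1)}%`);
```
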
Processing Overhead:
  • Saving may involve recompressing or restructuring the PDF
  • Form data, annotations, or modifications need to be merged
  • Cross-reference tables must be rebuilt
  • This creates additional memory pressure
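
The hard cap is easy to demonstrate: WASM linear memory grows in 64 KiB pages up to a declared maximum (at most 65536 pages, i.e. 4 GiB, for 32-bit WASM), and growing past that maximum throws. A scaled-down example:

```javascript
// WASM linear memory grows in 64 KiB pages up to a fixed maximum.
// Small numbers here, but this is the same failure mode a large
// export hits when its working set outgrows the WASM heap.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 10 });

memory.grow(9);                            // fine: now at the 10-page maximum
try {
  memory.grow(1);                          // one page past the cap
} catch (err) {
  console.log(err instanceof RangeError);  // allocation refused
}
```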

Browser-Specific Issues

Chrome/Edge:
  • More generous with WASM memory
  • Better garbage collection
Firefox:
  • Stricter memory limits
  • May need explicit memory management
Safari:
  • Most restrictive with memory
  • Often requires server-side processing
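
Given these differences, an application embedding the viewer may want a guard along these lines. The threshold and multiplier are invented for illustration and are not part of any MuPDF API:

```javascript
// Illustrative guard: attempt in-browser export only when the
// document is comfortably below the memory budget we expect the
// WASM heap to sustain; otherwise fall back to the server.
function chooseExportPath(documentBytes, budgetBytes = 512 * 1024 * 1024) {
  // Assume export can transiently need several times the file size
  // (decoded images, rebuilt xref tables, the output buffer itself).
  const WORKING_SET_FACTOR = 4;
  return documentBytes * WORKING_SET_FACTOR <= budgetBytes
    ? "wasm"
    : "server";
}
```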

Solutions

None yet: chunked data export is being investigated for future releases.
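
For illustration only (nothing here ships today, and the names are hypothetical), chunked export would mean serializing and flushing the output one piece at a time, so peak memory is one chunk rather than the whole file:

```javascript
// Hypothetical shape of a chunked export: serialize one unit (e.g.
// one page) at a time, hand it to a sink, and drop it before moving
// on, so the full document never exists in memory at once.
function exportInChunks(serializePage, pageCount, sink) {
  let written = 0;
  for (let i = 0; i < pageCount; i++) {
    const chunk = serializePage(i);   // hypothetical per-page serializer
    sink(chunk);                      // flush immediately, then release
    written += chunk.length;
  }
  return written;
}
```

A sink could stream chunks straight to disk or to the network, which is what keeps the memory ceiling flat regardless of document size.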