refactor(word): lazy-load DOCX images to reduce peak memory without changing output (#13233)

**Summary**
This PR tackles a significant memory bottleneck when processing
image-heavy Word documents. Previously, our pipeline eagerly decoded
DOCX images into `PIL.Image` objects, which caused high peak memory
usage. To solve this, I've introduced a **lazy-loading approach**:
images are now stored as raw blobs and decoded only at the point of
consumption.

This reduces the memory footprint while keeping the parsing output
identical to before.

**What's Changed**
Instead of a dry file-by-file list, here is the logical breakdown of the
updates:

* **The Core Abstraction (`lazy_image.py`)**: Introduced `LazyDocxImage`
along with helper APIs for lazy decoding, image-type checks, and NumPy
compatibility. It also supports `.close()` and detached PIL access to
keep the image lifecycle safe and prevent memory leaks (see the sketch
after this list).
* **Pipeline Integration (`naive.py`, `figure_parser.py`, etc.)**:
Updated the general DOCX picture extraction to return these new lazy
images. Downstream consumers (like the figure/VLM flow and base64
encoding paths) now decode images right at the use site using detached
PIL instances, avoiding shared-instance side effects.
* **Compatibility Hooks (`operators.py`, `book.py`, etc.)**: Added
necessary compatibility conversions so these lazy images flow smoothly
through existing merging, filtering, and presentation steps without
breaking.
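
For concreteness, here is a minimal sketch of the wrapper and its
helpers. The names `LazyDocxImage`, `is_image_like`, `ensure_pil_image`,
and `.close()` come from this PR; the `detached_pil` method name and all
bodies are illustrative assumptions, not the actual code in
`lazy_image.py`:

```python
from io import BytesIO

import numpy as np
from PIL import Image


class LazyDocxImage:
    """Holds the raw DOCX image blob and defers PIL decoding until
    the image is actually consumed. Sketch only."""

    def __init__(self, blob: bytes):
        self._blob = blob
        self._pil: Image.Image | None = None

    def to_pil(self) -> Image.Image:
        # Decode once and cache; shared across callers that only read.
        if self._pil is None:
            self._pil = Image.open(BytesIO(self._blob))
        return self._pil

    def detached_pil(self) -> Image.Image:
        # Independent decode so a consumer can close or mutate its
        # copy without affecting other consumers (method name assumed).
        img = Image.open(BytesIO(self._blob))
        img.load()  # force the full decode now
        return img

    def __array__(self, dtype=None):
        # NumPy compatibility: np.asarray(lazy_img) decodes on demand.
        arr = np.asarray(self.detached_pil())
        return arr if dtype is None else arr.astype(dtype)

    def close(self) -> None:
        # Drop the cached decode; the raw blob remains available.
        if self._pil is not None:
            self._pil.close()
            self._pil = None


def is_image_like(obj) -> bool:
    # Accept both eagerly decoded PIL images and lazy wrappers.
    return isinstance(obj, (Image.Image, LazyDocxImage))


def ensure_pil_image(obj):
    # Normalize either form to a PIL image at the use site.
    return obj.detached_pil() if isinstance(obj, LazyDocxImage) else obj
```

The useful property here is that `ensure_pil_image` is a no-op for plain
PIL images, which is what would let existing call sites accept both
forms without branching.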

**Scope & What is Intentionally Left Out**
To keep this PR focused, these changes are restricted to the
**general Word pipeline** and its downstream consumers.
The `QA` and `manual` Word parsing pipelines are explicitly **not
modified** in this PR. They can be safely migrated to this new lazy-load
model in a subsequent, standalone PR.

**Design Considerations**
I briefly considered compressing images during processing, but decided
against it to avoid any quality degradation in the derived outputs. I
also held off on a larger pipeline re-architecture, since that would be
overly invasive for this change.

**Validation & Testing**
I've tested this to ensure no regressions:

* Compared identical DOCX inputs with and without this change: chunk
counts, extracted text, table HTML, and image descriptions match
exactly.
* **Confirmed a noticeable drop in peak memory usage when processing
image-dense documents.** For a 30 MB Word document containing 243 1080p
screenshots, peak memory consumption drops by approximately 1.5 GB.
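
As a rough way to reproduce the peak-memory comparison, a sketch of a
harness (illustrative only, not the one used for the numbers above; it
assumes the general pipeline's `chunk` entry point in `rag.app.naive`, a
hypothetical `image_heavy.docx` test file, and a Unix platform for the
`resource` module):

```python
import resource
import sys

from rag.app.naive import chunk  # general Word pipeline entry point


def peak_rss_mb() -> float:
    # ru_maxrss is reported in KiB on Linux and in bytes on macOS.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / 1024 if sys.platform != "darwin" else rss / (1024 * 1024)


with open("image_heavy.docx", "rb") as f:  # hypothetical test file
    chunk("image_heavy.docx", binary=f.read(),
          callback=lambda *args, **kwargs: None)  # no-op progress callback
print(f"peak RSS: {peak_rss_mb():.0f} MB")  # run on both branches, compare
```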

**Breaking Changes**
None.

Excerpt from the diff, showing the decode-at-use-site change in one
downstream consumer:

@@ -20,7 +20,6 @@ import re
 from collections import defaultdict
 from io import BytesIO
-from PIL import Image
 from PyPDF2 import PdfReader as pdf2_read
 from deepdoc.parser import PdfParser, PlainParser
@@ -29,6 +28,7 @@ from rag.app.naive import by_plaintext, PARSERS
 from common.parser_config_utils import normalize_layout_recognizer
 from rag.nlp import rag_tokenizer
 from rag.nlp import tokenize
+from rag.utils.lazy_image import ensure_pil_image, is_image_like
 
 
 class Pdf(PdfParser):
@@ -228,8 +228,10 @@ def chunk(filename, binary=None, from_page=0, to_page=100000, lang="Chinese", ca
     for pn, (txt, img) in enumerate(sections):
         d = copy.deepcopy(doc)
         pn += from_page
-        if not isinstance(img, Image.Image):
+        if not is_image_like(img):
             img = None
+        else:
+            img = ensure_pil_image(img)
         d["image"] = img
         d["page_num_int"] = [pn + 1]
         d["top_int"] = [0]