This release introduces experimental SDXL support, currently limited to Snapdragon 8 Elite and Snapdragon 8 Elite Gen 5 / 8 Gen 5 devices. Output resolution is fixed at 1024×1024.
Two models are built into the app: SDXL Base and Anything XL.
To convert your own SDXL models, refer to the SDXL conversion guide.
Low RAM Mode is enabled by default for SDXL — each model is loaded before inference and automatically released afterward. If your device has ≥ 16 GB RAM, you may disable this for faster performance.
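The load-before/release-after lifecycle can be sketched as a context manager. This is an illustrative model only, not the app's actual API; the `Model` class and method names are made up for the sketch:

```python
from contextlib import contextmanager

# Hypothetical sketch of Low RAM Mode: each model is loaded just
# before it runs and freed immediately afterward, trading some
# speed for peak-memory headroom.
class Model:
    def __init__(self, name):
        self.name = name
        self.loaded = False

    def load(self):
        self.loaded = True  # stands in for mapping weights into RAM

    def release(self):
        self.loaded = False  # stands in for freeing the weights

@contextmanager
def low_ram(model):
    model.load()
    try:
        yield model
    finally:
        model.release()  # released even if inference throws

unet = Model("unet")
with low_ram(unet) as m:
    assert m.loaded       # resident only while inference runs
assert not unet.loaded    # freed immediately afterward
```

Disabling Low RAM Mode keeps every model resident for the whole session, which avoids repeated load/release overhead at the cost of much higher peak RAM.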
Textual inversion embeddings and prompt weighting both work as expected on SDXL.
Added the LCM scheduler — pair with an LCM-distilled checkpoint for high-quality output in far fewer steps.
CLIP outputs are cached between generations. If the positive and negative prompts are unchanged, the text encoder is skipped entirely (QNN, MNN hybrid, and the SDXL dual-encoder path).
On QNN the unconditional UNet pass is skipped when CFG = 1, roughly halving step time in that mode.
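Why CFG = 1 allows the skip: classifier-free guidance blends two UNet evaluations as `out = uncond + cfg * (cond - uncond)`, which collapses to just `cond` at cfg = 1, so the unconditional pass contributes nothing. A toy sketch (the `unet()` below is a stand-in, not the real model):

```python
def unet(latent, embedding):
    return latent * 0.9 + embedding  # fake denoising step

def guided_step(latent, cond_emb, uncond_emb, cfg):
    cond = unet(latent, cond_emb)
    if cfg == 1.0:
        return cond  # one UNet pass instead of two: ~2x faster step
    uncond = unet(latent, uncond_emb)
    return uncond + cfg * (cond - uncond)

# At cfg = 1 the shortcut is exact, not an approximation:
assert guided_step(1.0, 0.5, 0.0, 1.0) == unet(1.0, 0.5)
# At other cfg values both passes are still required:
full = unet(1.0, 0.0) + 1.5 * (unet(1.0, 0.5) - unet(1.0, 0.0))
assert guided_step(1.0, 0.5, 0.0, 1.5) == full
```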
Added an optional log collection feature. When enabled, logs are displayed after exiting the inference screen — useful for bug reports and troubleshooting. Keep this disabled under normal use.
Fixed an issue where custom model IDs could collide with built-in model IDs.
Fixed a potential memory leak in the QNN model lifecycle.
Removed the upscaler resolution limit on the GitHub version.
Variants:
LocalDream_xxx_with_filter: blocks NSFW results; same as the Google Play version.
LocalDream_xxx: no filter.
Co-Authored-By: Claude Opus 4.7