
Local Dream


Stable Release
Category: Utilities / AI Tools · Version: 2.4.0_with_filter (58) · Size: 63.98 MB
Downloads: 18 · Comments: 0 · Favorites: 32 · Last updated: -

Screenshots

App Notice

https://github.com/xororz/local-dream If you find the app useful, please consider supporting the author; sponsorship options (including Aifadian for users in China) are listed at the bottom of the project homepage.

Changelog

This release introduces experimental SDXL support, currently limited to Snapdragon 8 Elite and Snapdragon 8 Elite Gen 5 / 8 Gen 5 devices. Output resolution is fixed at 1024×1024.

- Two models are built into the app: SDXL Base and Anything XL. To convert your own SDXL models, refer to the SDXL conversion guide.
- Low RAM Mode is enabled by default for SDXL: each model is loaded before inference and automatically released afterward. If your device has ≥ 16 GB RAM, you may disable this for faster performance.
- Textual inversion embeddings and prompt weighting both work as expected on SDXL.
- Added the LCM scheduler: pair it with an LCM-distilled checkpoint for high-quality output in far fewer steps.
- CLIP outputs are cached between generations. If the positive and negative prompts are unchanged, the text encoder is skipped entirely (QNN, MNN hybrid, and the SDXL dual-encoder path).
- On QNN, the unconditional UNet pass is skipped when CFG = 1, roughly halving step time in that mode.
- Added an optional log collection feature. When enabled, logs are displayed after exiting the inference screen, which is useful for bug reports and troubleshooting. Keep this disabled under normal use.
- Fixed an issue where custom model IDs could collide with built-in model IDs.
- Fixed a potential memory leak in the QNN model lifecycle.
- Removed the upscaler resolution limit on the GitHub version.

Variants:
- LocalDream_xxx_with_filter: blocks NSFW results; same as the Google Play version.
- LocalDream_xxx: no filter.

Co-Authored-By: Claude Opus 4.7
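The CFG = 1 shortcut in the changelog follows directly from the classifier-free guidance formula: the blended prediction uncond + scale · (cond − uncond) collapses to the conditional prediction alone when scale is 1, so the unconditional UNet pass can be skipped. A minimal sketch of that logic (the `unet` callable and all names here are illustrative, not the app's actual API):

```python
def cfg_step(unet, latents, cond_emb, uncond_emb, guidance_scale):
    """One classifier-free-guidance denoising step (illustrative sketch).

    `unet` is any callable mapping (latents, embedding) -> noise prediction.
    """
    # The conditional prediction is always needed.
    noise_cond = unet(latents, cond_emb)
    if guidance_scale == 1:
        # uncond + 1 * (cond - uncond) == cond, so the second (unconditional)
        # UNet pass contributes nothing and can be skipped, roughly halving
        # the per-step cost.
        return noise_cond
    noise_uncond = unet(latents, uncond_emb)
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```

With a 50-step schedule this saves 50 full UNet evaluations whenever guidance is disabled, which is why the changelog reports step time roughly halving in that mode.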

App Description

Run text-to-image models locally on your device

Run Stable Diffusion on Android with Snapdragon NPU acceleration. CPU/GPU inference is also supported.

NPU-supported devices:
- Snapdragon 8 Gen 1
- Snapdragon 8+ Gen 1
- Snapdragon 8 Gen 2
- Snapdragon 8 Gen 3
- Snapdragon 8 Gen 4

CPU-supported devices:
- All

Technical implementation

NPU acceleration:
- SDK: Qualcomm QNN SDK, leveraging the Hexagon NPU
- Quantization: W8A16 static quantization for best performance
- Resolution: fixed 512×512 model shape
- Performance: very fast inference

CPU/GPU inference:
- Framework: powered by MNN
- Quantization: W8 dynamic quantization
- Resolution: flexible sizes (128×128, 256×256, 384×384, 512×512)
- Performance: moderate speed, high compatibility

NPU high-resolution support

After downloading the 512-resolution model, you can download a patch that enables 768×768 and 1024×1024 generation. Note that quantized high-resolution models may produce poorly composed images. We recommend generating at 512 first, then running the high-resolution model via img2img (essentially Highres.fix). A suggested img2img denoise_strength is about 0.75.

Device compatibility

NPU acceleration is supported on devices with:
- Snapdragon 8 Gen 1
- Snapdragon 8+ Gen 1
- Snapdragon 8 Gen 2
- Snapdragon 8 Gen 3
- Snapdragon 8 Elite
Note: other devices cannot download NPU models.

CPU/GPU support:
- RAM requirement: ~2 GB free memory
- Compatibility: most Android devices from recent years
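The W8A16 scheme described above stores weights as 8-bit integers with scales fixed ahead of time ("static" quantization), while activations use 16 bits. A toy per-tensor symmetric sketch of the weight side, assuming simple round-to-nearest (the app's real QNN quantizer also calibrates activation ranges offline; these function names are mine, not the project's code):

```python
def quantize_w8(weights):
    """Per-tensor symmetric 8-bit quantization (simplified sketch).

    Maps floats into [-128, 127] using a single scale derived from the
    largest-magnitude weight, so dequantization is just q * scale.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [qi * scale for qi in q]
```

The quantization error per weight is bounded by about half the scale, which is why 8-bit weights work well for UNet inference while activations get the wider 16-bit format.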

Related Apps