The open-source code of Queryable, an iOS app, leverages the ~~OpenAI's CLIP~~ Apple's MobileCLIP model to conduct offline searches in the 'Photos' album. Unlike the category-based search built into the iOS Photos app, Queryable lets you search your album with natural-language statements, such as "a brown dog sitting on a bench". Since it runs entirely offline, your album privacy is never compromised by any company, including Apple or Google.
Blog | App Store | Website | Story | 故事
The process is as follows:
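In outline: every photo's embedding is precomputed once with the image encoder; at query time the text encoder embeds the query, and photos are ranked by cosine similarity. A minimal pure-Python sketch of that idea (not the app's Swift code; the toy 3-dimensional vectors stand in for real CLIP embeddings, which are 512-dimensional):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def search(text_embedding, photo_embeddings, top_k=3):
    """Rank photos by similarity between the query's text embedding and
    each precomputed photo embedding, returning the top_k matches."""
    scored = [(photo_id, cosine_similarity(text_embedding, emb))
              for photo_id, emb in photo_embeddings.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hypothetical precomputed photo embeddings (illustrative values only).
photos = {
    "dog_on_bench.jpg": [0.9, 0.1, 0.2],
    "beach_sunset.jpg": [0.1, 0.8, 0.3],
    "city_street.jpg": [0.2, 0.3, 0.9],
}
# Stand-in for encoding the query "a brown dog sitting on a bench".
query = [0.85, 0.15, 0.25]
best = search(query, photos, top_k=1)[0]
print(best[0])  # the photo whose embedding best matches the query
```

In the app, the expensive part (running the image encoder over the whole album) happens once in the background; each query then only needs one text-encoder pass plus this cheap similarity ranking.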
For more details, please refer to my blog: Run CLIP on iPhone to Search Photos.
[2024-09-01]: Now supports Apple's MobileCLIP.
You can download the exported TextEncoder_mobileCLIP_s2.mlmodelc and ImageEncoder_mobileCLIP_s2.mlmodelc from Google Drive. The s2 model is currently the default, as it balances efficiency and precision.
An Android version (code) was developed by @greyovo and supports both English and Chinese. See details in #12.
1. Download the TextEncoder_mobileCLIP_s2.mlmodelc and ImageEncoder_mobileCLIP_s2.mlmodelc from Google Drive.
2. Clone this repo and put the downloaded models under the CoreMLModels/ path.
3. Run Xcode; it should work.
If you only want to run Queryable, you can skip this step and use the exported models from Google Drive directly. If you wish to implement a version of Queryable that supports your own native language, or to do some model quantization/acceleration work, here are some guidelines.
The trick is to separate the TextEncoder and ImageEncoder at the architecture level, and then load the model weights individually. Queryable uses the ~~OpenAI ViT-B/32~~ Apple's MobileCLIP model, and I wrote a Jupyter notebook to demonstrate how to separate, load, and export OpenAI's CLIP as a Core ML model (if you want the MobileCLIP export script, check out #issuecomment-2328024269). The exported ImageEncoder Core ML model has a certain level of precision error, and more appropriate normalization parameters may be needed.
A later export based on clip-vit-base-patch32 significantly reduced the precision error in the image encoder. For more details, see #18.

Disclaimer: I am not a professional iOS engineer, so please forgive my poor Swift code. You may want to focus only on the loading, computation, storage, and sorting of the model's outputs.
You can apply Queryable to your own product, but I don't recommend simply modifying the appearance and listing it on the App Store. If you are interested in optimizing certain aspects (such as https://github.com/mazzzystar/Queryable/issues/4, ~~https://github.com/mazzzystar/Queryable/issues/5~~, https://github.com/mazzzystar/Queryable/issues/6, https://github.com/mazzzystar/Queryable/issues/10, https://github.com/mazzzystar/Queryable/issues/11, ~~https://github.com/mazzzystar/Queryable/issues/12~~), feel free to submit a PR.
Thank you for your contribution : )
If you have any questions/suggestions, here are some contact methods: Discord | Twitter | Reddit: r/Queryable.
MIT License
Copyright (c) 2023 Ke Fang