Stable Diffusion Models are Secretly Good at Visual In-Context Learning
Authors: Trevine Oorloff†, Vishwanath Sindagi‡, Wele Gedara Chaminda Bandara‡, Ali Shafahi‡, Amin Ghiasi, Charan Prakash, Reza Ardekani
Large language models (LLMs) in natural language processing (NLP) have demonstrated great potential for in-context learning (ICL) — the ability to leverage a small set of example prompts to adapt to various tasks without explicitly updating the model weights. ICL has recently been explored for computer vision tasks with promising early outcomes. However, these approaches involve specialized training and/or additional data that complicate the process and limit generalizability. In this work, we show that off-the-shelf Stable Diffusion models can be repurposed for visual in-context learning (V-ICL). Specifically, we formulate an in-place attention re-computation within the self-attention layers of the Stable Diffusion architecture that explicitly incorporates context between the query and example prompts. Without any additional fine-tuning, we show that this repurposed Stable Diffusion model is able to adapt to six different tasks: foreground segmentation, single object detection, semantic segmentation, keypoint detection, edge detection, and colorization. For example, the proposed approach improves the mean intersection over union (mIoU) for the foreground segmentation task on the Pascal-5i dataset by 8.9% and 3.2% over recent methods such as Visual Prompting and IMProv, respectively. Additionally, we show that the proposed method can effectively leverage multiple prompts through ensembling to infer the task better and further improve performance.
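The core idea of the in-place attention re-computation can be illustrated with a minimal sketch. All names below are assumptions for illustration (the paper's actual implementation operates inside the Stable Diffusion U-Net's self-attention layers): query-image tokens attend over the concatenation of their own tokens and the example-prompt tokens, so context flows from the examples into the query features without any weight updates.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def in_place_icl_attention(x_query, x_example, w_q, w_k, w_v):
    """Hypothetical sketch of V-ICL attention re-computation.

    x_query   : (Nq, d) tokens of the query image
    x_example : (Ne, d) tokens of the example (prompt) image/label pair
    w_q/w_k/w_v : (d, d) frozen projection weights of a self-attention layer
    """
    q = x_query @ w_q                                   # queries: query tokens only
    ctx = np.concatenate([x_query, x_example], axis=0)  # keys/values span query + example
    k, v = ctx @ w_k, ctx @ w_v
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))      # (Nq, Nq + Ne) attention map
    return attn @ v                                     # context-enriched query features
```

Because only the key/value pool is enlarged, the layer's weights and output shape are unchanged, which is what makes the re-computation "in-place" with respect to the frozen model.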