
Run a Stable Diffusion Image Generator Locally on Raspberry Pi 5

Create stunning AI-generated images on your Pi — without internet!


Overview

Stable Diffusion is a powerful open-source image generation model that turns text prompts into photorealistic or artistic images. While it's typically run on desktops with powerful GPUs, recent optimizations and the Raspberry Pi 5's performance improvements make it possible (with compromises) to run lightweight versions of it locally.

In this article, we’ll show you how to:

✅ Run a CPU-friendly version of Stable Diffusion on Raspberry Pi 5
✅ Generate images from text prompts
✅ Use Python and the diffusers library from Hugging Face
✅ Work fully offline after setup

⚠️ This guide uses a highly optimized model suited for CPU-only inference. Don’t expect real-time speed — but it works!


Requirements

Component    | Version / Notes
Raspberry Pi | Pi 5 (8 GB RAM strongly recommended)
OS           | Raspberry Pi OS Bookworm (64-bit)
Python       | ≥ 3.10
Disk Space   | ~6 GB
Internet     | Only for installation and model download
Model        | Stable Diffusion 1.5 (CPU-optimized)

Step 1: System Setup

sudo apt update && sudo apt upgrade -y
sudo apt install python3 python3-venv python3-pip -y

Create and activate a Python virtual environment:

python3 -m venv sd-env
source sd-env/bin/activate

Step 2: Install Required Python Libraries

pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install diffusers transformers accelerate scipy safetensors

We use the CPU-only version of PyTorch to ensure compatibility with the Raspberry Pi’s hardware.
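
Before moving on, it can help to confirm that the CPU-only build installed correctly. A quick sanity check (just a sketch, not part of the setup itself):

import torch
import diffusers

print("PyTorch:", torch.__version__)                 # the CPU wheel typically reports a "+cpu" suffix
print("CUDA available:", torch.cuda.is_available())  # expected: False on the Raspberry Pi
print("diffusers:", diffusers.__version__)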


Step 3: Download a CPU-optimized Model

Let’s use runwayml/stable-diffusion-v1-5 or a smaller variant from Hugging Face.

You can use this script to download the model (it is cached locally on the first run) and generate an image:

from diffusers import StableDiffusionPipeline
import torch

# Download the model on the first run (the weights are cached locally afterwards)
pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,  # full precision; the Pi's CPU has no fast float16 path
)
pipeline = pipeline.to("cpu")

# Generate a single image from a text prompt and save it next to the script
prompt = "A cyberpunk robot cat, neon background"
image = pipeline(prompt).images[0]
image.save("output.png")
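
By default, from_pretrained() caches the downloaded weights under ~/.cache/huggingface, which is what lets later runs work offline. If you prefer an explicit local copy you can point to directly, here is a minimal sketch; the folder name sd15-local is just an example:

# Save an explicit local copy once (while online), then reload it with no network access
pipeline.save_pretrained("./sd15-local")  # folder name is an example

# Later, fully offline:
from diffusers import StableDiffusionPipeline
import torch

offline_pipe = StableDiffusionPipeline.from_pretrained(
    "./sd15-local",
    torch_dtype=torch.float32,
    local_files_only=True,  # never try to reach the Hugging Face Hub
).to("cpu")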

Tip: Use Smaller Models

If your Pi struggles, try:

from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base")

Or explore CPU-optimized models like:

  • CompVis/stable-diffusion-v1-4

  • Linaqruf/stable-diffusion-1-5-better-vae

  • SG161222/Realistic_Vision_V5.1_noVAE
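
Whichever checkpoint you pick, you can also lower peak memory use with diffusers' attention slicing, which trades a little speed for a smaller RAM footprint. A minimal sketch (the model ID is just one of the options above):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # any of the checkpoints listed above
    torch_dtype=torch.float32,
).to("cpu")

# Compute attention in slices instead of all at once to reduce peak RAM usage
pipe.enable_attention_slicing()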


How Long Does It Take?

On Raspberry Pi 5 (8GB):

Image Size | Inference Time
256x256    | ~2-4 minutes
512x512    | ~7-12 minutes

⚠️ Do not expect desktop GPU speeds — but for low-volume, offline creative projects, it's functional.
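
Those sizes map directly to pipeline arguments. The call below is only a starting point; height, width, and num_inference_steps are standard StableDiffusionPipeline parameters, and the values shown are suggestions, not benchmarks:

# Smaller images and fewer denoising steps shorten generation time on the Pi
image = pipeline(
    "A serene lake in the forest during sunset",
    height=256,               # output height in pixels (must be a multiple of 8)
    width=256,                # output width in pixels
    num_inference_steps=20,   # default is 50; fewer steps is faster but rougher
).images[0]
image.save("output_256.png")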


Optional: Serve as a Web App with Gradio

pip install gradio

Add this to the script:

import gradio as gr

def generate(prompt):
    # Reuse the pipeline loaded earlier in the script
    image = pipeline(prompt).images[0]
    return image

# Simple text-in, image-out interface served on the default port 7860
gr.Interface(fn=generate, inputs="text", outputs="image").launch()

Access locally via http://localhost:7860
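
To reach the app from another device on your network, Gradio's launch() also accepts a server_name argument; this is a small sketch, with the port left at the default 7860:

# Bind to all interfaces so other machines on the LAN can open http://<pi-ip>:7860
gr.Interface(fn=generate, inputs="text", outputs="image").launch(server_name="0.0.0.0")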


Example Output

Prompt                                                       | Result
"A medieval knight riding a dragon through the clouds"       | (example image)
"A serene lake in the forest during sunset, ultra-realistic" | (example image)

(You can host real examples or dummy images later on your blog)


Summary

You’ve now turned your Raspberry Pi 5 into a local AI image generator using Stable Diffusion. While it’s not lightning-fast, it’s fully offline, works with open-source tools, and gives you creative freedom at the edge.
