Your Own Private AI Image Generator: Step-by-Step Guide to Docker Model Runner and Open WebUI

2026-05-15 20:17:07

Introduction

Ever wanted to generate AI images without worrying about credits, privacy, or content filters? You can now run a full image-generation pipeline on your own machine using Docker Model Runner and Open WebUI. This setup gives you a chat interface where you type a prompt, and your local hardware handles the rest — no cloud subscriptions, no data leaks. In this guide, you’ll learn how to pull a model, launch the web UI, and start creating images in minutes. All you need is Docker and a bit of patience.

Source: www.docker.com

What You Need

You need a recent Docker installation that includes Docker Model Runner (it ships with current Docker Desktop releases). To verify your setup, run docker model version in a terminal. If you see version information rather than an error, you’re ready.

How Docker Model Runner Works with Open WebUI

Before diving into steps, here’s the big picture: Docker Model Runner acts as the control plane. It downloads the AI model, manages the inference backend lifecycle, and exposes a fully OpenAI-compatible API — including the POST /v1/images/generations endpoint. Open WebUI, a popular chat interface for local LLMs, knows exactly how to talk to this API. You type a prompt, the UI sends it to the local model, and the generated image appears in the chat.
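
Because the API is OpenAI-compatible, any client that speaks that protocol can talk to it directly. As a rough sketch of what a request to the images endpoint looks like (the localhost:12434 base URL and the exact model identifier are assumptions here; check what your local setup actually exposes):

```python
import json

# Assumed base URL for the local Docker Model Runner endpoint; verify
# against your own setup, since the host port can differ.
BASE_URL = "http://localhost:12434/v1"

def build_image_request(prompt, model="stable-diffusion", n=1, size="1024x1024"):
    """Build the URL and JSON body for an OpenAI-style POST /v1/images/generations call."""
    payload = {"model": model, "prompt": prompt, "n": n, "size": size}
    return f"{BASE_URL}/images/generations", json.dumps(payload)

url, body = build_image_request("A dragon wearing a business suit")
print(url)  # http://localhost:12434/v1/images/generations
```

Open WebUI assembles essentially this shape of request on your behalf every time you type a prompt.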

Step 1: Pull an Image Generation Model

Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format). This bundles the text encoder, VAE, UNet (or DiT), and scheduler config into a single portable file — distributed via Docker Hub just like any OCI artifact. To download a model:

docker model pull stable-diffusion

This pulls the default Stable Diffusion XL model. The download is about 7 GB, so it may take a few minutes depending on your internet speed. Once downloaded, confirm it’s ready with:

docker model inspect stable-diffusion

You’ll see JSON output like:

{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "created": 1768470632,
  "config": {
    "format": "diffusers",
    "architecture": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}

This means the model is stored locally as a DDUF file. Docker Model Runner will unpack it at runtime automatically.
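
Since the inspect output is plain JSON, it is easy to script against. A small sketch (the dduf_info helper is our own, not part of the Docker CLI) that pulls out the fields you would care about when checking disk usage:

```python
import json

def dduf_info(inspect_json: str):
    """Extract the DDUF filename and reported size from `docker model inspect` output."""
    config = json.loads(inspect_json)["config"]
    return config["diffusers"]["dduf_file"], config["size"]
```

Feed it the output of docker model inspect stable-diffusion and, for the example above, it returns ("stable-diffusion-xl-base-1.0-FP16.dduf", "6.94GB").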

Step 2: Launch Open WebUI

This is the magic moment. Docker Model Runner includes a built-in command that wires up Open WebUI against your local inference endpoint. Run:

docker model launch openwebui

That’s it — no extra configuration, no port mapping to remember. The launch command starts both the model inference service and the Open WebUI container, linking them automatically. After a few seconds, you’ll see a log line with the local URL (usually http://localhost:8080). Open that in your browser.
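
If you are scripting this launch (say, for a demo machine), you can wait until the UI actually answers before opening a browser. A minimal sketch that polls the assumed default URL:

```python
import time
import urllib.request
import urllib.error

def wait_for_url(url="http://localhost:8080", timeout=60.0, interval=2.0):
    """Poll a URL until it responds with any HTTP status, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status  # server is up
        except urllib.error.HTTPError as err:
            return err.code  # server answered, even if with an error status
        except OSError:
            time.sleep(interval)  # not listening yet; retry
    raise TimeoutError(f"{url} did not respond within {timeout} seconds")
```

The function returns as soon as the server answers anything at all, so it works even if the first response is a redirect or an error page.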

Step 3: Generate Your First Image

Inside Open WebUI, you’ll see a familiar chat interface. To generate an image, simply type a prompt like:

“A dragon wearing a business suit, sitting at a desk, photorealistic style”

The UI will display a small loading animation while the model processes your request. Generating one image on a decent GPU takes about 10–30 seconds; on CPU-only it can take 2–5 minutes. Once done, the image appears in the chat thread. You can download it directly or continue refining with follow-up prompts.
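
If you call the API directly rather than through the UI, OpenAI-style image endpoints conventionally return results as base64-encoded data under a b64_json field. Assuming Docker Model Runner follows that convention (worth verifying against your version), saving the results to disk looks like:

```python
import base64
import json

def save_images(response_json: str, prefix="image"):
    """Decode base64-encoded images from an OpenAI-style response and write them to files."""
    data = json.loads(response_json)["data"]
    paths = []
    for i, item in enumerate(data):
        path = f"{prefix}-{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(item["b64_json"]))
        paths.append(path)
    return paths
```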

Note: The default model (Stable Diffusion XL) produces 1024×1024 images. The resolution is fixed for now, though it may become adjustable in future releases.

Tips for a Smoother Experience

Use a machine with a GPU if you can: generation drops from several minutes per image (CPU-only) to roughly 10–30 seconds. Pull the model ahead of time, since the roughly 7 GB download dominates setup. And if the browser page doesn’t load, check the launch logs for the actual URL and port before assuming something is broken.

Conclusion

With just two commands — docker model pull and docker model launch openwebui — you now have a fully private, locally running AI image generator. No subscription fees, no data leaving your machine, and no arbitrary content filters. You can create as many images as your hardware allows, all from a clean chat interface. This is the power of local AI, made accessible by Docker Model Runner and Open WebUI.
