💡 How to Install Stable Diffusion on WSL Ubuntu — The Reusable, Developer-Friendly Way

It’s not hard to find tutorials on how to install Stable Diffusion on Ubuntu running under WSL. But let’s face it — most of them are rigid, non-reusable, or built for only one specific case.

Want to learn how real developers set up their Stable Diffusion environments with flexibility and long-term usability in mind?

Here at facilitech999, we take a more scalable approach:

  • We use mamba to manage Python environments, because you’ll likely need multiple versions.
  • And once everything is installed? We export the whole thing as a template — so it’s easy to recreate later or migrate to another PC.

📦 Step 1: Install Required Dependencies

sudo apt update && sudo apt install -y \
    make build-essential libssl-dev zlib1g-dev \
    libbz2-dev libreadline-dev libsqlite3-dev curl \
    llvm libncursesw5-dev xz-utils tk-dev \
    libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev git

🐍 Step 2: Install pyenv to Manage Python Versions

curl -fsSL https://pyenv.run | bash

If you're using Bash (default):

echo -e '\n# pyenv' >> ~/.bashrc
echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init --path)"' >> ~/.bashrc
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bashrc
source ~/.bashrc

If you’re using Zsh:
(Not using Zsh yet? Check out our setup guide here: 👉 Install Zsh, Oh-My-Zsh & Neovim on Ubuntu)

echo -e '\n# pyenv' >> ~/.zshrc
echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.zshrc
echo 'eval "$(pyenv init --path)"' >> ~/.zshrc
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.zshrc
source ~/.zshrc

🐍 Step 3: Install Python

pyenv install 3.10.13
pyenv global 3.10.13

🔁 Step 4: Install Miniforge3 and Mamba

wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
bash Miniforge3-Linux-x86_64.sh

source ~/miniforge3/bin/activate
conda init zsh && source ~/.zshrc    # Bash users: conda init bash && source ~/.bashrc
conda install -n base -c conda-forge mamba

🌱 Step 5: Create Your Stable Diffusion Environment

mamba create -n sd-env python=3.10
conda activate sd-env
# install the packages you need inside the env, e.g.:
# pip install torch transformers diffusers
# snapshot the environment so it can be recreated later:
conda env export --name sd-env > sd-env.yaml
# recreate it from the snapshot on any machine:
mamba env create -f sd-env.yaml
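For reference, the exported sd-env.yaml will look roughly like this (the exact package list and versions depend on what you installed; this is only an illustrative sketch):

```yaml
name: sd-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - torch
      - transformers
      - diffusers
```

Anything installed with pip inside the env shows up under the nested pip: key, so the snapshot captures both conda and pip packages.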

⚡ Step 6: Optional — If You Have an NVIDIA GPU

# confirm the GPU is visible inside WSL
nvidia-smi

Install PyTorch for CUDA:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

Test with a simple script:

# torchtest.py
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))

Then run it:

python torchtest.py

🚀 Step 7: Launch Stable Diffusion

mkdir -p ~/projects
cd ~/projects
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
python launch.py --xformers --precision full
# or
python launch.py --precision autocast --no-half

🧩 Step 8: Export WSL Template

# run from Windows PowerShell, not inside the WSL shell
wsl --export Ubuntu-24.04 ubuntu-sd-setup.tar
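To restore the template later (also from Windows PowerShell), a minimal sketch; the distro name "Ubuntu-SD" and the install path are example values, not fixed names:

```shell
# Register the exported tarball as a new distro (name and path are placeholders)
wsl --import Ubuntu-SD C:\WSL\Ubuntu-SD ubuntu-sd-setup.tar
# Start the imported distro
wsl -d Ubuntu-SD
```

This is what makes the whole setup reusable: one tarball carries the OS, pyenv, mamba, and every environment you exported.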

More details: 👉 Back Up and Import WSL Ubuntu-24.04

🔁 Bonus: Reuse Windows SD Folder in Ubuntu

# Windows drives are mounted under /mnt in WSL, e.g. /mnt/c/Users/<you>/stable-diffusion-webui
cd ~/projects/stable-diffusion-webui
pip install -r requirements.txt
pip install xformers bitsandbytes
python launch.py

🧩 Note for RTX 4060 (CUDA 12.1) Users: How to Align torch and xformers

If you're using an NVIDIA RTX 4060 Laptop GPU and encounter issues with xformers compatibility in your Stable Diffusion environment, make sure your torch version is aligned with your CUDA runtime. Here's a quick fix:

# 1. Uninstall mismatched versions if any
pip uninstall xformers torch torchvision torchaudio

# 2. Reinstall torch with matching CUDA 12.1 support
pip install torch==2.2.2+cu121 torchvision==0.17.2+cu121 torchaudio==2.2.2+cu121 \
  --index-url https://download.pytorch.org/whl/cu121

# 3. Install compatible xformers
pip install xformers==0.0.25.post1

✅ After installation, confirm your environment is ready by running:

import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
print(torch.__version__)

If the output includes your GPU model (e.g., NVIDIA GeForce RTX 4060 Laptop GPU) and torch.__version__ is 2.2.2+cu121, you're good to go!


🧠 If You Want to Run Stable Diffusion on Native Ubuntu (Not WSL)

Some users may prefer running Stable Diffusion directly on native Ubuntu instead of WSL for full GPU access or better performance.
Here is a checklist and notes to help you succeed:

  1. Use Xorg instead of Wayland:
    At the login screen, click the gear icon and choose “Ubuntu on Xorg”. Wayland may interfere with NVIDIA GPU functionality.
  2. Disable Secure Boot in BIOS:
    Secure Boot can block NVIDIA kernel modules from loading. Disable it for smoother driver installation.
  3. Install the Latest NVIDIA Driver:
    Confirm that nvidia-smi runs successfully. For example:
    NVIDIA-SMI 570.144
    Driver Version: 570.144
    CUDA Version: 12.8
    This ensures CUDA compatibility for PyTorch.
  4. Install Developer Tools and Dependencies:
    Run this before installing the driver:
    sudo apt update && sudo apt install build-essential dkms linux-headers-$(uname -r)
  5. Install PyTorch with CUDA Support:
    Use wheels built for your target CUDA runtime (cu121 here; drivers reporting a newer CUDA version, such as 12.8, remain backward-compatible):
    
    pip install torch==2.2.2+cu121 torchvision==0.17.2+cu121 torchaudio==2.2.2+cu121 \
      --index-url https://download.pytorch.org/whl/cu121
  6. Verify GPU Access in Python:
    import torch
    print(torch.cuda.is_available())
    print(torch.cuda.get_device_name(0))
    print(torch.__version__)

    Expected output should confirm GPU availability and show your RTX model (e.g., RTX 4060).

  7. Launch Stable Diffusion with GPU:
    mkdir -p ~/projects
    cd ~/projects
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui
    python launch.py --xformers --precision full

    If a torch-cuda check fails, add:

    COMMANDLINE_ARGS="--skip-torch-cuda-test --xformers --precision full" python launch.py

Running Stable Diffusion on native Ubuntu can be rewarding for power users.
However, setup is more involved than WSL, so be prepared to troubleshoot driver and kernel module issues.
