Tutorial / 01 March 2025

Stable Diffusion for concept art. Tutorial


Introduction

Hey there, I’m Lisa. This is my laid-back, simple guide on how I use Stable Diffusion to speed up and elevate my concept art. I'll walk you through my personal pipeline using a recent train concept for my project STATION Velaya.
And just so you know — while some folks might throw shade at AI in creative work, I believe AI is just another tool in our creative toolbox. The real magic always comes from the artist’s mind.


Stable Diffusion Models

The heart of this process is choosing the right model. Models are essentially the engines behind Stable Diffusion — they dictate the style and quality of your output.

I typically use the standard VAE for fixing eyes and skip any LoRA setups. For more model options, check out Civitai's checkpoint overview.


ControlNet

This awesome add-on lets you feed Stable Diffusion a guiding image — be it a pose or lineart — to steer the output in the right direction. For my environment concepts, I usually need lineart, which brings us to the next step.


3D Graybox Scene and Lineart

For this particular concept, I started with a screenshot from my whitebox scene in UE5. But really, you can use any 3D software or even hand-drawn lineart. Here’s what I did:

  • Grabbed a Screenshot: Captured my 3D graybox scene.
  • Photoshop Magic: Opened the screenshot in Photoshop, converted it to grayscale, and adjusted the levels to get a balanced look.
  • Lineart Creation: Used the stylize filter from the filter gallery, then inverted the image to create a lineart base.
  • Manual Touch-Up: Refined the lineart by hand — fixing shapes, adding details — until I was happy with the quick sketch.

This refined lineart became the ControlNet input for Stable Diffusion.
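The Photoshop steps above can be roughly approximated in code. Here's a hedged Pillow sketch of the same idea — grayscale, a levels-style contrast pass, an edge filter standing in for the stylize filter, then an invert. The synthetic image is just a placeholder for a real graybox screenshot:

```python
from PIL import Image, ImageFilter, ImageOps

# Stand-in for a graybox screenshot; in practice you'd use
# Image.open("graybox_screenshot.png") on your own capture.
screenshot = Image.new("RGB", (768, 512))
screenshot.paste((200, 200, 200), (100, 100, 400, 300))  # a simple "box"

gray = screenshot.convert("L")               # 1. convert to grayscale
gray = ImageOps.autocontrast(gray)           # 2. rough stand-in for a levels adjustment
edges = gray.filter(ImageFilter.FIND_EDGES)  # 3. edge filter ~ Photoshop's stylize filter
lineart = ImageOps.invert(edges)             # 4. invert: dark lines on a light background

lineart.save("lineart_base.png")
```

You'd still clean this up by hand afterwards — the automated edges are only a starting point, just like the Photoshop filter.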


Stable Diffusion Pass

Now, let’s get to the core of the process:

  1. ControlNet Setup: Upload your lineart and set the filter to "Lineart Coarse" so Stable Diffusion treats it as a guideline.

  2. Image Size Matters: Start small — something like 768x512 is perfect. Larger images can slow things down or even crash your session.

  3. Sampler & Settings:

    • Sampler: I usually stick with Euler A or DPM2 A, though sometimes LMS gives me unexpected and cool results.
    • Inference Steps: I’m a fan of using 64 steps — it strikes the right balance, but feel free to experiment.
    • Guidance Scale: This controls how strictly Stable Diffusion follows your prompt. I find that a scale of 10-15 leaves just enough wiggle room for creative details.
  4. Crafting the Prompt:
    Begin with a few words describing the scene (e.g., “train carriage interior, wood, cold metal, red fabric”), then add a touch about the setting (e.g., “soviet union, dieselpunk, retrofuturism”). Include lighting and color notes (e.g., “warm lighting, neon ambiance, volumetric lighting, vivid colors”) and finish with a stylistic touch (e.g., “oil painting, acrylic palette, rendered in unreal engine, sharp focus”). Sometimes, throw in a couple of artist names like Greg Rutkowski or Simon Stålenhag for inspiration. You can even use AI (like ChatGPT) to help generate creative prompts.

  5. Iterate:
    After generating a few images, if none of them are perfect, you can upscale and import them into Photoshop for further refinement. I often combine a few images, then use that as an initial image for another pass in Stable Diffusion. The Draw tool is also handy for masking and tweaking specific areas.
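To make the knobs in the steps above concrete, here's a small Python sketch that assembles the layered prompt and the settings exactly as described. The dict keys are my own illustrative naming — in a real run these values go into whatever Stable Diffusion UI or library you use:

```python
# Prompt built in layers: scene -> setting -> lighting/color -> style.
prompt_layers = [
    "train carriage interior, wood, cold metal, red fabric",            # scene
    "soviet union, dieselpunk, retrofuturism",                          # setting
    "warm lighting, neon ambiance, volumetric lighting, vivid colors",  # light/color
    "oil painting, acrylic palette, rendered in unreal engine, sharp focus",  # style
]
prompt = ", ".join(prompt_layers)

# Generation settings from steps 1-3 (key names are illustrative).
settings = {
    "controlnet_filter": "Lineart Coarse",  # treat the lineart as a guideline
    "width": 768, "height": 512,            # start small
    "sampler": "Euler A",                   # or DPM2 A / LMS for surprises
    "steps": 64,
    "guidance_scale": 12,                   # somewhere in the 10-15 range
}
```

Keeping the prompt as a list of layers makes it easy to swap out one layer (say, the lighting) between iterations without rewriting the whole prompt.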

_______________________________

The pipeline is essentially:

  1. Create a lineart (from a 3D graybox or hand drawing).
  2. Upload it as a ControlNet image with the “Lineart Coarse” filter.
  3. Generate a few images with your prompt.
  4. Upscale and refine in Photoshop, then re-upload as an initial image for another pass.
  5. Use the Draw tool to mask and modify specific parts of the image.
  6. Repeat steps 4 and 5 as many times as needed until you nail the look.
  7. Finalize the piece by refining further in Photoshop.

Refining by Hand

  • Photobashing: Blend multiple images to create a composite that captures the best of each.
  • Draw Tool: Use masking in Stable Diffusion to adjust specific areas.
  • Final Touches: Once you have a solid composite, finish the artwork manually in Photoshop. This blend of AI and hand refinement keeps your art uniquely yours.
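Under the hood, the Draw tool is essentially painting an inpainting mask: white marks the area Stable Diffusion should regenerate, black is kept untouched. Here's a small Pillow sketch of building such a mask yourself — the filename and region are made up for illustration:

```python
from PIL import Image, ImageDraw

# Mask matching the image size: black (0) = keep, white (255) = regenerate.
mask = Image.new("L", (768, 512), 0)
draw = ImageDraw.Draw(mask)

# Made-up region: say we want to redo a window in the upper right.
draw.rectangle((500, 60, 700, 220), fill=255)

mask.save("inpaint_mask.png")
```

A soft-edged (slightly blurred) mask usually blends the regenerated area into the rest of the image more smoothly than a hard rectangle.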

_______________________________

And that’s it! This is my personal workflow for using Stable Diffusion to supercharge concept art. Remember, the key is to experiment — tweak your models, prompts, and settings until you find what works best for you. Feel free to adapt this tutorial as you see fit. Enjoy!