Stability training day


Home or office one-day installation and training course in Stable Diffusion a.i. image generation, using Google Drive, Google Colab & ComfyUI. 
Focusing on SDXL, ControlNets, IPAdapter & upscaling.



Short overview

You already know the transformative, mind-bending power of a.i. generative image making. That’s why you’re here. It has only been around since the late summer of 2022, and already things are very, very different. Very!

This training course is at your place, for eight hours. Over your shoulder, hand-holding. With Shropshire’s affable, famous witch photographer.

But only in the UK. And if you’re way down in Cornwall, or way up in Scotland, we’ll need to talk about extra travelling expenses. I’m central, in Telford, Shropshire, and only promoting within 100 miles of home. There or thereabouts.

Shropshire’s famous witch photographer says,
“repeat after me…
Press buttons with absolute care,
Dragons soar exactly where you dare.”

This workshop is designed for absolute beginners in Stable Diffusion. From absolute zero to newbie by lunch, and from newbie to intermediate by teatime.

You are an illustrator, artist, or designer, and already very comfortable with Photoshop, just like me. You need specificity and certainty in your art directions, just like me.

You won’t need a powerful computer; you and I will install everything on Google Drive and harness the power of Google Colab Pro. We will install locally as well; fast or slow graphics card, it’s handy and a comfort to have both. It doesn’t matter which you run day to day: we’ll use the same root files, so they’re both the same. Yes, you can run both at the same time. Indeed, you can run several instances of either at the same time, if you were so inclined.

No programming experience is necessary. We’ll stay far, far away from that. Our game is art-directed pretty pictures. While noodle-heavy workflows might seem daunting, they’re actually easy to wrap your head around.

Into the weeds

You’ll have a Google email address and therefore Google Drive, too. The default free Drive account is 15Gb, which is barely enough space for a large checkpoint and a few LoRAs, let alone the thousands of 14Mb images you’ll be knocking out. I went with the 2Tb plan at £7.99/month, and I’m using nearly 500Gb of that. Mind, I keep a tidy ship, downloading all my large images and deleting them off the drive (downloading but not deleting my aide-mémoire thumbnails). I also back up some other off-topic files, so I find the cost justified.

Google Colab used to be free for SD, but it was using way too many resources, they said, and booted all the freeloaders out. Now, you have to pay. I’m happy with Colab Pro+ at £45.90 per month; the Pro is £9.72 per month. Usually it runs at around 20 pence an hour, but sometimes I go faster and bigger for 50p or even £1.50 an hour — that’s using 40Gb of video RAM! Such an Nvidia A100 graphics card would cost £4,000, if you’re lucky, on eBay!
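A quick back-of-the-envelope using the prices quoted above (assumed figures; Colab and eBay prices change often):

```python
# Break-even point: renting an A100 on Colab vs buying one second-hand.
# The 4,000 GBP card price and 1.50 GBP/hour rate are the rough figures above.
card_price_gbp = 4000.00     # used Nvidia A100 on eBay, if you're lucky
rental_gbp_per_hour = 1.50   # top Colab rate, with 40Gb of VRAM

break_even_hours = card_price_gbp / rental_gbp_per_hour
print(round(break_even_hours))  # 2667 hours of generating before buying wins
```

Over 2,500 hours on the heaviest tier before the card pays for itself, and that ignores electricity and the rest of the PC.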

Of course, there are pros and cons. I sometimes work on a cheap, light Chromebook — that’s a win. (Disappointingly, I cannot work off my phone.) Most SD users run it locally, squeezed and slow on a small graphics card with 4Gb VRAM, or proud and fast on an RTX 4090 with 24Gb VRAM. There are other cloud providers, but all together we form a small, often overlooked cohort — that’s a loss.

This is for people who need ControlNets to make their generated images. You may have tried the ease of MidJourney and been dazzled by the beauty — I was. But needing to place your characters in specific poses, with consistency in faces, characters and products — that was so frustrating. You require fine, accurate and total control. ControlNets allow you to sketch your image first: placing characters in poses, maybe with arms folded, or star-jumping as viewed from above in the bottom-left corner, with a bird top right and another top left. Yes, you can. You have complete control (net).

You don’t need blocked words in your prompt. Don’t ask me how to do porn. I don’t know, and don’t want to know. You can figure that out after I’ve left. But other a.i. generative apps ban celebrities, and words that can be misunderstood as a double entendre. A simple example is: cock, as in male chicken. It doesn’t work in MidJourney. Type it too many times and MJ will ban you. Not so with open-source Stable Diffusion.


We’ve only got 8 hours, not 8 months. I’ve been using SD since last spring, and it is a devil of a job to keep up (I made the executive decision not to even look at video). I don’t want to expand this course; let’s stick to just a few nodes and techniques to get you proficient with them, just like me.
These are:

  • SDXL: the newest Stable Diffusion model — bigger and better, but not as well established as SD1.5, which makes images at 512 × 512 pixels. SDXL works at 1024 × 1024, which is one megapixel (and equivalent aspect ratios).
  • ControlNets: depth, canny, sketch and open pose. The art director’s first base. Give ComfyUI an image and it’ll knock these out for you. You may want to edit them further in Photoshop.
  • IPAdapter for SDXL: the art director’s second base. Instead of text prompts, use images to style, describe, influence your output. Without exaggeration, this is mind blowing.
  • Upscaling: I do it once at 4×, taking my images to 4096 × 4096 pixels. 16 megapixels is enough for me. Others do it again! All the while adding details.
  • And, very, very importantly, organised naming and image saving. With hundreds of images a day, things can get messy quickly.
    • Importing into Photoshop, it is useful to see some of the parameters within the file name.
    • We’ll also make thumbnails, for storing the workflow of each image.
    • And another reason that I can’t recall. Ah, yes! How to get out of trouble when things go wrong. This is cutting edge. Sometimes there’s blood.
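As a sketch of the sort of naming scheme I mean (the fields and their order here are just an example, not a fixed standard):

```python
from datetime import datetime

def image_filename(prompt_tag: str, seed: int, steps: int, cfg: float) -> str:
    """Build a sortable filename carrying the key generation parameters.

    prompt_tag is a short hand-picked label such as "red-dragon"; the
    timestamp keeps hundreds of images a day in chronological order.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return f"{stamp}_{prompt_tag}_seed{seed}_steps{steps}_cfg{cfg:g}.png"

# Produces something like 20240301-142255_red-dragon_seed123456_steps30_cfg7.5.png
print(image_filename("red-dragon", seed=123456, steps=30, cfg=7.5))
```

The parameters then travel with the image into Photoshop, and sorting by name is sorting by time.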

There are many large files: checkpoints and LoRAs. Some are 10Gb, some only a few Kb. These are the models, the data, for want of a better word. I’m only going to get you a few. Yes, I’ll show you how to get more.

Similarly with nodes, the panels on a worksheet that you connect with noodles. New, shiny distractions are added every damned day. I’m only going to get you a few custom nodes, too. But we’ll update every time you restart the system. Keep it simple, stupid.

I’ll have you create a few worksheets that are for specific processes. You’ll save them and reload them as needed. In your own time you can edit these or download more, there are many, many available. But, don’t get anything new for a few days. Learn just the base I give you. Limit yourself for a while. See what all these controls can do.

What will be the outcomes?

  • Installation of SDXL checkpoints, ControlNets, IPAdapter, image enlargement, and organised image saving.
  • You will understand where key files/folders are installed and how you can grow and organise your installation.
  • You will grasp the power and limitations of Google Colab Pro. Backups with Google Drive. GitHub, too.
  • You will be comfortable with text to image (t2i) and image to image (i2i) workflows. And ComfyUI as a system, the bugs, history, future.
  • You will be able to sketch your image, first, placing characters, or objects in specific poses or sectors of your generated image.
  • Enlarge images while adding details.

I’ll leave you with a few printed pages of notes, tips. The stuff I introduce you to. Of course, have a notebook and pen handy throughout the day.

Who am I?

I’m a designer, illustrator, and artist. I started last year with MidJourney, dazzled by the beauty and creativity, but soon grew frustrated with its limitations. Six months ago I started with Stable Diffusion and the A1111 interface. But A1111 was always playing catch-up to ComfyUI and lacked its flexibility and raw power. Many 3D renderers use nodes and noodles — I’d been there before. Yeah, noodles are coolio.

I have experience conducting training courses in various formats. For instance, I organised the Witch Photographer School, which spanned a weekend and took place at a charming BnB. Additionally, I conducted training in Reality Capture, a photogrammetry program, through Zoom.

I used to sell my pictures at markets and fairs throughout Shropshire. It’s why I’m famous throughout the county.

To understand me better, see some of my MidJourney Space Fairies and examine the case study below…

Posed red dragons

Stage 1: text 2 image
I wanted to create Welsh red dragons, but plain old text-to-image couldn’t capture the characteristic national flag pose for my dragon: tongue out, curled tail, lifted front paw. I was also presented with two heads, six legs, too many wings, too many tails. I just could not get the textbook dragon I needed.

Stable Diffusion doesn’t know what a “red Welsh dragon in a classical pose” is. If only. Though, fair play, they were mostly all roaring, wings up, tail(s) outstretched.

Red Welsh dragon atop a mountain
t2i: blown-out colours and high gain from a phenomenally high CFG and an absurd number of steps, but fewer deformities.
Red Welsh dragon roaring at a distant castle
t2i: a nice picture but the wrong pose. 90% had either two heads, two tails, wrong wings, six legs or other deformities.

Increasing the CFG scale to absurd levels helped with deformities but brought in high-gain colours and a comic-book illustrative effect. These were unsatisfactory workarounds.

Stage 2: image 2 image
I needed to explain to Stable Diffusion the position and the pose: the arrow-pointed tail, the curled loop tail, the raised front paw, the nose spike, the arrow-forked tongue. This wasn’t possible in text to image (t2i). Image to image (i2i) came closer, but it still proved hopelessly inaccurate.

i2i: details are screwy but the pose is close. Tail is curly, no nose horn, lifted paw deformed.
i2i: although an interesting picture, it is not the dragon I’m after.

Stage 3: ControlNets
As if by magic, I had a ZBrush model created many years before. I just needed depth maps, masks, and colour information from this model. Hand-drawn sketches from existing flag drawings would also have worked. I could have iterated up some depth maps using ControlNets until they were good enough. There are many ways to bodge the required depth maps, canny drawings and sketches.

Depth, sketch and canny ControlNets.
CN: while ControlNets give an accurate silhouette, internal and finer details are misshapen. Curled tail is correct; lifted paw is good but feathery.

I tried OpenPose too, for the pose, but it is not yet developed for animals in Stable Diffusion. Just humans.

Stage 4: bite sized
But still, the finished output was not accurate enough. Claws were missing, toes were twisted, tangled with the tongue. Eyes were in the wrong place. Wings were not developed.

It’s a matter of the resolution of the ControlNet images: at just one megapixel, it isn’t too clever.

I broke down the project into digestible sections. Four feet, in the same workflow. Two wings, in another workflow. Head, neck, tail, chest, separate, and so on. Now, small archetypal details were correct. I merely needed to photo bash the jigsaw together in Photoshop.

Stage 5: inpainting using Photoshop
Yet still smaller details were tangled and confused, so I zoomed into these in Photoshop and treated them separately as well: claws, nose spikes, teeth, ears, tongue. Conventional inpainting in SD is masking an area and generating the whole image, with the new patch attempting to match the rest. Of course, the whole image changes as well, subtly. There are ways around this, too.

For cleaning up, I merely crop into the problem area in Photoshop and img2img an entire colour image, with its depth, mask and canny. Then I paste it back into Photoshop, where I can bodge it better and return, again, to SD. I’ll iterate this a few times. And I’ll show you my technique.
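The crop-and-paste half of that round trip can be sketched in code with Pillow; the img2img pass in the middle is whatever SD workflow you run. The function names and paths here are hypothetical, not part of any tool:

```python
from PIL import Image

def crop_for_repair(path: str, box: tuple) -> Image.Image:
    """Crop the problem area (left, upper, right, lower) to send through img2img."""
    return Image.open(path).crop(box)

def paste_repair(path: str, patch: Image.Image, box: tuple, out_path: str) -> None:
    """Paste the regenerated patch back at the same coordinates and save."""
    img = Image.open(path)
    img.paste(patch, (box[0], box[1]))  # upper-left corner of the original crop
    img.save(out_path)
```

Cropping before generation keeps the rest of the image untouched, which is exactly the point: only the patch changes between iterations.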

Now I have the parts shaped correctly, I can remake them, blending text prompts with IPAdapter’s image prompts, to style my accurate dragon in any way I want: cyborg, nasty reptilian, fluffy cute toy and so on. And variations on those themes. There are mediums, too: charcoal-sketched dragons, pencil, oil, watercolour. Or follow the style of a painter: I wonder what a Picasso dragon would look like? So many artists!

Cyborg style red Welsh dragon.
Nasty reptilian style red Welsh dragon.
The whole image is 17,000 × 11,500 pixels. The detail is good for hyper-detailed printing on an Epson SureColor SC-P900 A2 photo printer. Of course, I could print even bigger, but I have to wait for their A1 printer, since Epson’s 5760 × 1440 photo resolution is my go-to standard.
Even more zoomed in, and I haven’t added the story details yet: green teeth stains, warts, dirt, scars, rope burns, jewelry, glasses, drooling spittle, witch marks, the burnt bones of eaten priests… A certain magical aura. Oh, and fire!
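A quick sanity check on those print numbers, taking A2 as 420 × 594 mm from the ISO 216 paper standard:

```python
# Effective pixels-per-inch when printing 17,000 x 11,500 px on A2 paper.
MM_PER_INCH = 25.4
long_edge_in = 594 / MM_PER_INCH    # A2 long edge, landscape
short_edge_in = 420 / MM_PER_INCH

ppi_long = 17000 / long_edge_in
ppi_short = 11500 / short_edge_in
print(round(ppi_long), round(ppi_short))  # 727 695
```

Roughly 700 ppi on both axes: well past what the eye resolves at viewing distance, and comfortable for a photo printer.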

Stage 6: Finishing up
My next, less prescriptive task, is the white and green background. The sky and the Welsh green landscape. After that, incidental characters, fairies, witches, knights. And add details to the dragons to fit into the stories.

This is much more work than a simple text prompt to generate an image. I imagine that you could prompt for “red Welsh dragon”, roll the dice, do this with an infinite number of monkeys, and come out with something like my laboured artworks without using Photoshop. In the near future this will certainly be more possible than not. But right now, today, my way is the way.

In your own time

This is a LoRA I made of my dog, Yannon. Yes, Google Colab, with A100 power, is ideally suited to making your own LoRAs. It is way out of scope for a single day’s training. But I will link you to a Colab script and plenty of how-tos; dive in, at just £1.50 an hour on an A100 with 40Gb VRAM.

Yannon when we went walkies around the skyscrapers of New York.
Yannon in New York, she said, “they have bars big as cars.”
Yannon when we went to the International Space Station.
Yannon when we went diving in the Red Sea.

Call to action

Book your training day, go back up to the calendar and fall headlong into this new world. And I’ll see you at your place, soonest.

Call me on 07948 961 538 if you need assistance with the purchase decision.

