A Very Detailed Experiment on Glaze, Using a Distinct Art Style, With Permission From the Artist

AI prompters and artists, in this post I will publish my findings on Glaze at its slowest setting, tested by training a LoRA without any bias on a 39-image dataset with a distinctive art style by a single artist. This experiment will uncover what works and what does not. Before we start, I would like to give my biggest thanks to bubbabacterial from X/Instagram, a very talented artist who draws cute artworks with a unique art style and who gave me permission to use her artworks for this experiment. She is an amazing person.

https://preview.redd.it/kjcsf3yhlm3e1.jpg?width=604&format=pjpg&auto=webp&s=93bab43654553bd45f5996b5a69fe918705d4b06

All the steps in this experiment can be independently verified (for the skeptics) using free online tools such as the CivitAI LoRA Trainer and DeepDanbooru, which can be learned in under 2 minutes and which practically anyone can use. Before we begin, here is the list of tools I used, all freely available:

- Stable Diffusion WebUI Forge https://github.com/lllyasviel/stable-diffusion-webui-forge

- PonyV6 (Model) https://civitai.com/models/257749/pony-diffusion-v6-xl

- CivitAI LoRA Trainer https://civitai.com/models/train

- DeepDanbooru http://dev.kanotype.net:8003/deepdanbooru/

- Glaze https://glaze.cs.uchicago.edu/index.html

Now we begin with the first step: Glazing all her artworks using the heaviest (slowest) setting, which according to the Glaze team is currently the best setting against generative AI. Here is proof of the settings I used and the output images:

Glaze Slowest Setting

Left: Normal artwork. | Right: Glazed artwork using the slowest setting.

As you can see in the before & after, the glazed image has a very strong texture compared to the unprotected artwork, making it a bit hard for human eyes to see the art clearly, especially up close. It took 3 hours in total to Glaze all 39 artworks on my NVIDIA RTX 4060 with 8GB of VRAM.
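As a side note, the visibility of the Glaze texture can be quantified by comparing pixel values between an original file and its glazed counterpart. Here is a minimal sketch of that idea (this is my own measurement helper, not something Glaze provides):

```python
from PIL import Image
import numpy as np

def mean_pixel_delta(original: Image.Image, glazed: Image.Image) -> float:
    """Mean absolute per-pixel difference (0-255 scale) between two
    same-size images: a rough proxy for how strong the Glaze texture is."""
    a = np.asarray(original.convert("RGB"), dtype=np.float32)
    b = np.asarray(glazed.convert("RGB"), dtype=np.float32)
    if a.shape != b.shape:
        raise ValueError("images must have the same dimensions")
    return float(np.abs(a - b).mean())
```

A higher value means a more visible perturbation; identical images score 0.0.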

Moving on to the second step: how does an AI see it? Would an AI mislabel the glazed image (for example, labeling a dog as a lizard)? For this part of the experiment we will use a tool called DeepDanbooru (http://dev.kanotype.net:8003/deepdanbooru/).

What is DeepDanbooru? According to u/Felbloodreaper, DeepDanbooru is "one of the two most popular captioning tools for creating training datasets for AI art, and helps to create models and LoRA that behave consistently with others, which were also trained using either Danbooru images, or other images tagged via DeepDanbooru. This helps prompt writers to easily move between models, so they can get the style of image they want without needing to learn a different prompting style."

Here is a comparison of the AI's labeling of the glazed and unglazed images, obtained by simply uploading each image to the DeepDanbooru website:

LEFT: NORMAL ARTWORK | RIGHT: GLAZED ARTWORK USING SLOWEST SETTING

The results have both pros and cons for the glazed image. Starting with the pros: as you can see, a few labels are missing, such as shirt, smile, etc. The con is that the AI somehow managed to recognize things that it had not recognized before; in this example it is food, which is actually a very important aspect of this image. This is not an isolated case, and I can show you another one where it recognized the mouth holding something:
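To make this kind of label comparison reproducible, the tag lists returned for the original and glazed images can be diffed programmatically. A minimal sketch (the tag names below are hypothetical examples, not the actual DeepDanbooru output for these artworks):

```python
def diff_tags(original_tags, glazed_tags):
    """Compare two tag lists and report which labels the glazed image
    lost, gained, or kept relative to the original."""
    o, g = set(original_tags), set(glazed_tags)
    return {
        "lost": sorted(o - g),    # labels Glaze removed
        "gained": sorted(g - o),  # labels Glaze introduced
        "kept": sorted(o & g),    # labels unaffected
    }
```

Running this over the whole dataset would give an exact count of how many labels Glaze actually removed versus introduced.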

LEFT: NORMAL ARTWORK | RIGHT: GLAZED ARTWORK USING SLOWEST SETTING

In some extreme cases, Glaze can even add labels that the AI had not recognized before.

We are now at the final step: can the AI mimic an art style from images Glazed at the slowest setting? We begin by using the CivitAI Trainer with its default settings, which is very beginner-friendly and can be learned in under 2 minutes, and upload the 39 Glazed images as the dataset. Here is the link to the CivitAI LoRA Trainer: https://civitai.com/models/train

Now the interesting part: at epochs 1 and 2, none of the outputs look like her art style at all; it's nightmare fuel. At this point I assumed Glaze was working as intended or promised. This is the proof:

Epoch 1 and Epoch 2: nightmare fuel.

None of it looked like her art style at all; it even produced odd-looking images. But things got interesting at epoch 3, when the AI started producing images that looked like her art style. Here is the proof:

https://preview.redd.it/69tv9gdlan3e1.jpg?width=483&format=pjpg&auto=webp&s=67544167286c48178e80d70c23f06a1fe927f387

After epochs 3 & 4, every subsequent epoch got better and better at capturing her art style. Finally we have epoch 10, the last epoch of the training, which will be used to generate images with Stable Diffusion ForgeUI. Here is the output of epoch 10 in SD ForgeUI:

AI slop based on bubbabacterial's art style

As you can see, the AI has no problem at all mimicking the art style, BUT! The Glaze textures are all there, disrupting the image as if the user had run Glaze on their own freshly generated AI images. The question remains: how do we generate a clean image without the Glaze texture? The simplest method is to use a different sampler combined with an upscaler. This is the result of my testing:

AI slop based on bubbabacterial's art style

Finally, you can see the Glaze noise is barely visible at all and the images are very clean, but this sampler unfortunately loses some of the style, such as the way she draws her outlines. Let's compare the artist's original image against the AI-generated image to see how well the AI mimicked her art style:

Comparison with an AI-generated image using the DPM++ 2M SDE sampler. Left: original image | Right: AI-generated image

Comparison with an upscaled image using a different sampler. Left: original image | Right: AI-generated image

The AI has managed to capture how she draws her characters, starting with the hair highlights, the eyes, the eyebrows, the objects around the characters, the blush on the cheeks, the head shapes, and the mouth and nose shapes.

This test is now finished, but there are a few questions that could still be explored:

  • Would the AI still be able to mimic the art style if all the images were Glazed + Nightshaded?
  • What happens if someone decides to "deglaze" or denoise the dataset before training on it?
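On the second question, a crude "deglaze" could be attempted with a simple denoising pass before training. Here is a minimal sketch using Pillow's median filter; this is purely my assumption of what such an attack might look like, not a proven way to defeat Glaze:

```python
from PIL import Image, ImageFilter

def naive_deglaze(img: Image.Image, passes: int = 2) -> Image.Image:
    """Crude 'deglaze' attempt: repeated 3x3 median filtering to smooth
    out high-frequency perturbations. Untested against Glaze itself."""
    out = img
    for _ in range(passes):
        out = out.filter(ImageFilter.MedianFilter(size=3))
    return out
```

Whether such a pass actually strips the Glaze perturbation, or merely blurs the artwork while leaving the protection intact, is exactly what a follow-up experiment would need to measure.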

And of course, a list of what works and what does not.

What works:

- The AI lost only a very small number of labels/tags on Glazed images

- The AI produces images with noise that hurts the eyes of the AI prompter

What does not work:

- The AI is able to mimic the art style just fine

- The AI recognizes tags/labels that were not recognized before

- The AI can still produce noise-free images if prompted by someone with advanced knowledge

That's the end of the experiment. Thank you all for reading. Skeptics are welcome to test it on their own; message me via DM and I will give you a complete tutorial if you don't know how.