I saw a post about people putting a Tom overlay on top of their WIP sketches to stop AI scrapers from stealing them. The method doesn't seem useful, though. When you need to show your sketch, you have to reduce the overlay's opacity so that the sketch lines are more prominent than the overlay. Stable Diffusion with ControlNet can then easily recognise the line art, and the sketch will be translated into whatever content the prompter intended.
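To see why a low-opacity overlay fails, here is a toy numpy sketch (not the actual overlay trick or any ControlNet code; the sketch pattern, overlay, opacity value, and threshold are all my assumptions): once the overlay is faded enough for humans to read the sketch, a trivial darkness threshold recovers the strokes almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grayscale "sketch": white canvas (1.0) with dark strokes (0.1).
sketch = np.ones((64, 64))
sketch[20, 5:60] = 0.1          # a horizontal stroke
sketch[5:60, 32] = 0.1          # a vertical stroke

# Busy "overlay" image (stand-in for the Tom picture): mid-gray noise.
overlay = rng.uniform(0.3, 0.7, size=(64, 64))

# To keep the sketch readable, the overlay must be shown at low opacity.
alpha = 0.25                    # assumed opacity after the artist dials it down
blended = (1 - alpha) * sketch + alpha * overlay

# A crude line extractor: anything clearly darker than the canvas is a line.
lines = blended < 0.5

# Check how many true stroke pixels survive the overlay.
true_lines = sketch < 0.5
recovered = np.logical_and(lines, true_lines).sum() / true_lines.sum()
print(f"fraction of stroke pixels recovered: {recovered:.2f}")
# → fraction of stroke pixels recovered: 1.00
```

A real line-art preprocessor is far more capable than this one-line threshold, so if even this recovers the strokes, the overlay offers little protection.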
There was a comment suggesting to "glaze" the images before uploading. This sparked my interest in Glaze, an anti-generative-AI tool that perturbs your image to prevent the training algorithm from learning the subject. I have always doubted whether Glaze can really protect an artist's work, so I tested the software.
The general idea is that images are fed into Glaze's algorithm, which, depending on the settings, "messes up" the input at the cost of image quality: the output carries a layer of noise and distortion. Glaze claims this noise can disrupt the learning algorithm. That said, most existing generative AI models were trained on non-glazed images, and I don't know how many new images online have actually been run through this software.
In this context, I am more interested in whether Glaze can protect a sketch from Stable Diffusion's ControlNet. I ran a simple test on one of my sketches.
Glaze did add noise to the image, but even at the highest intensity setting with the longest render time, the noise was insufficient to stop ControlNet from reading the sketch. ControlNet was still able to extract the major shapes and generate an image from them, though the Glaze noise did noticeably affect the generation.
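The intuition behind this result can be sketched with a toy example. Below, a small-amplitude random perturbation stands in for Glaze's cloak (this is not Glaze's actual algorithm, and the noise amplitude, image, and threshold are my assumptions), and a gradient-magnitude threshold stands in for a ControlNet line-art preprocessor. The edge map barely changes, because the perturbation is tiny compared with the sketch's own line-to-canvas contrast.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch: white canvas with a dark rectangle outline.
img = np.ones((64, 64))
img[16:48, 16] = 0.0
img[16:48, 47] = 0.0
img[16, 16:48] = 0.0
img[47, 16:48] = 0.0

def edge_map(a, thresh=0.3):
    # Crude stand-in for a line-art preprocessor:
    # gradient magnitude followed by a threshold.
    gy, gx = np.gradient(a)
    return np.hypot(gx, gy) > thresh

clean_edges = edge_map(img)

# Small-amplitude perturbation as a stand-in for Glaze's cloak.
noise = rng.normal(0.0, 0.03, size=img.shape)
noisy_edges = edge_map(np.clip(img + noise, 0.0, 1.0))

# Intersection-over-union of the two edge maps: a value near 1 means
# the perturbation barely changed what the extractor sees.
iou = (np.logical_and(clean_edges, noisy_edges).sum()
       / np.logical_or(clean_edges, noisy_edges).sum())
print(f"edge-map IoU with noise: {iou:.2f}")
```

For the noise to erase an edge, its local gradient would have to rival the line's contrast itself, at which point the image would be visibly ruined for human viewers too.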
Glaze may protect an artist's work from being used in AI training, but against ControlNet the protection is much weaker.