It doesn’t selectively prevent learning; it degrades overall recognition, to some extent even to human eyes. The examples I know are old, but artists once tried to gauge models’ i2i capability on sketches (there were a few instances of people taking artists’ WIPs, feeding them to genAI, and claiming the result as “the finished piece”). Watermarking, or constant tiling all over the image, degraded genAI’s recognition more effectively than a regular noise/dither-type filter.
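Just to make concrete what I mean by “constant tiling” as opposed to a noise filter, here’s a rough sketch using PIL: repeat a semi-transparent mark on a grid so it overlaps every region of the sketch. The filenames, text, opacity, and spacing are all made up for illustration, not from any specific tool.

```python
from PIL import Image, ImageDraw, ImageFont

def tile_watermark(src_path, out_path, text="WIP - do not feed",
                   opacity=96, step=160):
    """Overlay a repeating, semi-transparent text mark across the whole image."""
    base = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # Tile the mark on a grid so no region of the image is left untouched
    for y in range(0, base.height, step):
        for x in range(0, base.width, step):
            draw.text((x, y), text, fill=(255, 255, 255, opacity), font=font)
    Image.alpha_composite(base, overlay).convert("RGB").save(out_path)

# Hypothetical filenames, just for the example
tile_watermark("wip_sketch.png", "wip_sketch_marked.png")
```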
well ahktually the CGI at the time was people hand-painting the liquid metal’s reflections frame by frame in Photoshop (which wasn’t commercially available yet), so its smooth surface has the warmth of human craft!