• pemptago@lemmy.ml
      26 days ago

      It shouldn’t be. Unfortunately, afaik, no lawsuits have been settled yet. Andersen v. Stability AI seems to be the one to watch with regard to OP.

    • inconel@lemmy.ca
      26 days ago

      AFAIK these are very model-specific attacks and won’t work against other models. Their tool’s ability to keep the art looking the same to the human eye is a great offering, and there’s always rigorous watermarking (esp. with strong contrast) as a universally effective option.

      • Mothra@mander.xyz
        1 day ago

        Watermarking with strong contrast prevents AI from using the images in training? Since when?

        • inconel@lemmy.ca
          1 day ago

          It doesn’t selectively prevent learning; instead it hinders overall recognition, even to human eyes to some extent. The examples I know are old, but artists once tried to gauge models’ i2i capability from sketches (there are a few instances of people taking an artist’s WIP, feeding it to genAI, and “claiming” the finished piece). Watermarking, or constant tiling all over the image, worked better at worsening genAI’s recognition than regular noise/dither-type filters.
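          The “constant tiling” idea above can be sketched in plain Python. This is a hypothetical helper (the `tile_watermark` name and the grayscale 2D-list image representation are illustrative assumptions, not any actual tool’s API): a small high-contrast tile is alpha-blended over every region of the image, so the interference covers all features rather than one corner.

```python
def tile_watermark(image, tile, alpha=0.5):
    """Blend a small high-contrast tile repeatedly across a whole image.

    image: 2D list of grayscale pixels (ints 0-255)
    tile:  2D list of grayscale pixels, tiled over the image
    alpha: blend strength; higher = more visible, more disruptive

    Hypothetical sketch of 'constant tiling' - not a real tool's API.
    """
    height, width = len(image), len(image[0])
    tile_h, tile_w = len(tile), len(tile[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # The tile wraps around, covering every pixel of the image.
            wm = tile[y % tile_h][x % tile_w]
            blended = round((1 - alpha) * image[y][x] + alpha * wm)
            row.append(max(0, min(255, blended)))  # clamp to valid range
        out.append(row)
    return out


# Usage: a flat gray image overlaid with a 2x2 checkerboard tile.
img = [[128] * 8 for _ in range(8)]
checker = [[0, 255], [255, 0]]
marked = tile_watermark(img, checker, alpha=0.5)
```

          Because the tile repeats everywhere at strong contrast, there is no clean region left for an i2i model to latch onto, which matches the observation that tiling outperformed subtle noise/dither filters.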