• Admiral Patrick@dubvee.org · 3 months ago

    My takeaway from this is:

    1. Generate a pile of AI slop and save it as individual .htm files on my web server.
    2. When my bot user-agent filter triggers in Nginx, serve a random slop .htm (instead of the real content) rather than returning 444 and closing the connection; see the sketch after this list.
    3. Laugh as the LLMs eat their own shit
    4. ???
    5. Profit
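
    A rough Nginx sketch of step 2, assuming a hypothetical /var/www/slop directory of pre-generated files slop-0.htm through slop-9.htm; the bot regex below is only an example, so substitute your own filter:

        # Flag AI crawlers by user agent (example regex; use your own list).
        map $http_user_agent $is_bot {
            default 0;
            ~*(GPTBot|CCBot|ClaudeBot|Bytespider) 1;
        }

        # Pick a pseudo-random slop page per request from the last digit of $msec.
        map $msec $slop_page {
            ~(\d)$ /slop/slop-$1.htm;
        }

        server {
            listen 80;
            root /var/www/site;

            location / {
                # Instead of "return 444;", feed flagged bots a random junk page.
                if ($is_bot) {
                    rewrite ^ $slop_page last;
                }
            }

            location /slop/ {
                root /var/www;  # serves /var/www/slop/slop-N.htm
            }
        }
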
    • mesamune@lemmy.world · 3 months ago

      I might just do this. It would be fun to write a quick Python script to automate it so that it keeps going forever: each junk page links to another freshly generated junk page, forever (something like the sketch below).
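
      A minimal stdlib-only sketch of the idea (the word list, the port, and the /junk/ URL scheme are all made up): every response ends with a link to a fresh random junk URL, so a crawler that follows links never reaches the end.

          import random
          from http.server import BaseHTTPRequestHandler, HTTPServer

          WORDS = ["synergy", "blockchain", "quantum", "paradigm", "artisanal",
                   "disrupt", "holistic", "leverage", "frictionless", "bespoke"]

          def junk_paragraph(n_words=80):
              # Nonsense prose sampled from a tiny vocabulary.
              return " ".join(random.choices(WORDS, k=n_words)).capitalize() + "."

          class JunkHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  # Any path gets junk, plus a link to a brand-new junk URL.
                  next_link = f"/junk/{random.randrange(10**9)}.htm"
                  body = ("<html><body>"
                          + "".join(f"<p>{junk_paragraph()}</p>" for _ in range(5))
                          + f'<a href="{next_link}">Read more</a></body></html>').encode()
                  self.send_response(200)
                  self.send_header("Content-Type", "text/html")
                  self.send_header("Content-Length", str(len(body)))
                  self.end_headers()
                  self.wfile.write(body)

          if __name__ == "__main__":
              HTTPServer(("", 8080), JunkHandler).serve_forever()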

  • gravitas_deficiency@sh.itjust.works · 3 months ago

    Imo this is not a bad thing.

    All the big LLM players are staunchly against regulation; this is one of the outcomes of that. So, by all means, please continue building an ouroboros of nonsense. It’ll only make the regulations that eventually get applied to ML stricter and more incisive.

  • floofloof@lemmy.ca · 2 months ago

    Maybe this will become a major driver for the improvement of AI watermarking and detection techniques. If AI companies want to continue sucking up the whole internet to train their models on, they’ll have to be able to filter out the AI-generated content.

    • silence7@slrpnk.net (OP) · 2 months ago

      “Filtering out” is an arms race, and watermarking has very real limitations when it comes to textual content.

      • floofloof@lemmy.ca · 2 months ago

        I’m interested in this but not very familiar with it. Are the limitations to do with brittleness (not surviving minor edits) and with text needing to be long enough for statistical effects to become visible?

        • silence7@slrpnk.net (OP) · 2 months ago

          Yes. Also, non-native speakers of a language tend to follow word-choice patterns similar to LLMs’, which creates a whole class of false positives in detection. The toy numbers below illustrate the length issue.
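
          A toy sketch of the length problem, modeled on a Kirchenbauer-style “green list” watermark (all numbers made up): even a strong 60% green-token bias is statistically invisible in a 20-token snippet but unmistakable at 1000 tokens.

              import math

              def z_score(green_count, total_tokens, p=0.5):
                  # Null hypothesis: unwatermarked text lands on the secret
                  # "green" half of the vocabulary at rate p, so green_count
                  # is roughly Binomial(total_tokens, p).
                  expected = p * total_tokens
                  stddev = math.sqrt(total_tokens * p * (1 - p))
                  return (green_count - expected) / stddev

              # A watermark that biases ~60% of tokens onto the green list:
              for n in (20, 100, 1000):
                  print(f"{n:5d} tokens -> z = {z_score(int(0.6 * n), n):.1f}")
              # 20 tokens -> z = 0.9 (noise); 100 -> 2.0; 1000 -> 6.3 (clear signal)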

  • Ekky@sopuli.xyz · 3 months ago

    So now LLM makers actually have to sanitize their datasets? The horror.

  • leftzero@lemmynsfw.com · 3 months ago

    Anyone old enough to have played with a photocopier as a kid could have told you this was going to happen.