I used to be the Security Team Lead for Web Applications at one of the largest government data centers in the world, but now I do mostly “source available” security work, mainly focusing on BSD. I’m on GitHub, but I also run a self-hosted Gogs (the project Gitea was forked from) git repo at Quadhelion Engineering Dev.

Well, on that server I tried to deny AI with Suricata rules, robots.txt, “NO AI” licenses, Human Intelligence (HI) license links in the software, and “NO AI” comments in posts everywhere on the Internet where my software was posted. Here is what I found today after correlating all my logs of git clones and scrapes and tracing each one back to an IP/company/server.
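
To make the robots.txt layer concrete, here is a minimal sketch of the kind of denial I mean (simplified, not my full ruleset); the user-agent strings are the crawlers’ publicly documented names, and honouring robots.txt is entirely voluntary on their side, which is the whole problem:

```
# Disallow the major documented AI/LLM training crawlers (compliance is voluntary)
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else is still welcome
User-agent: *
Allow: /
```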

Having formerly been loath to give even my thinking pattern to a potential enemy, I asked Perplexity AI questions specifically about BSD security, a very niche topic. Although there is a huge general data pool here spanning many decades, my type of software is fairly unique: it is buried (it does not appear in the first two pages of a GitHub search for BSD security, which is all most users will click through), it is very recent compared to the “dead pool” of old knowledge, and it is fairly well received yet not generally popular, so GitHub Traffic analysis is very useful.

The traceback and AI-result analysis show the following:

  1. GitHub cloning vs. visitor activity in the Traffic tab DOES NOT MATCH any pattern that is useful to me, the engineer. Rough estimate of the likelihood of AI training on my own repositories: 60% of clones are AI/automata (see the Traffic API sketch further down).
  2. A GitHub README.md is not licensable material; it is a public document that can be trained on no matter the software license, copyright statements, or any technical measures used to dissuade or defeat it. a. I’m trying to work out whether tracking down if any given README.md, regardless of context, is being trained on is a solvable engineering project given my life constraints.
  3. Plagiarisation of technical writing: Probable
  4. Theft of programming “snippets”, or perhaps single lines of code, and of the overall logic/design pattern for the solution: Probable
  5. Supremely interesting choice of datasets used versus those available, both in how they are summarized and in apparent cross-validation against other software, seemingly weighted by reputation factors: Coq-like proofing, GitHub “Stars”, employer history?
  6. Even though I can see my own writing and formatting taken straight out of my README.md, the citation was to “Phoronix Forum”, which isn’t true. That’s like saying your post was said by “TikTok”. I wrote that; a real flesh-and-blood human being spent a comparatively massive amount of time on it. My birth name is there in the post 2 times [EDIT: the post signature with my name is no longer there? Name not in the “about” either, hmm], in the repo, in the comments, all over the Internet.

[EDIT continued] Did it choose the Phoronix vector to that information because it was less attributable? It found my other repos in other ways. My Phoronix handle is the same as my GitHub username, and that handle is my name, easily inferable either way; there is also a biography link with my full name in the about section. [EDIT cont end]
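
For point 1, here is a minimal sketch of the comparison I mean, using the GitHub Traffic API that feeds the Traffic tab. The owner/repo names and token handling are placeholders, and the ratio is a heuristic for spotting the mismatch, not proof of who or what is cloning:

```python
# Sketch: compare unique cloners against unique human visitors for one repo.
# Needs a personal access token with push access to the repository.
import os
import requests

OWNER = "your-user"   # placeholder
REPO = "your-repo"    # placeholder
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def traffic(kind: str) -> dict:
    """Fetch one of the two Traffic-tab datasets: 'clones' or 'views'."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/traffic/{kind}"
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

clones = traffic("clones")  # {"count": ..., "uniques": ..., "clones": [...]}
views = traffic("views")    # {"count": ..., "uniques": ..., "views": [...]}

print(f"unique cloners : {clones['uniques']}")
print(f"unique visitors: {views['uniques']}")
if views["uniques"]:
    # Far more unique cloners than unique visitors is the pattern that
    # does not match any plausible human usage of the repo.
    print(f"cloner/visitor ratio: {clones['uniques'] / views['uniques']:.1f}")
```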

You should test this out for yourself, as I’m not going to spend days or a week building a polished presentation of the technical case. Check your own niche code, ask a specific applied code question, or make a mock repo with super-niche material and lots of code in the README.md, then check it against an AI every day until you see it surface.
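
If you go the mock-repo route, a small suggestion: plant a unique canary string in the README.md so you have something unambiguous to query the AI for later. A minimal sketch, with the file layout and wording purely as an example:

```python
# Sketch: generate an unguessable canary token and embed it in a mock README.md,
# then periodically ask AI tools about the token to see whether it surfaces.
import secrets
from datetime import date
from pathlib import Path

canary = f"canary-{secrets.token_hex(8)}"  # unique marker, e.g. canary-3f9a1c...
Path("README.md").write_text(
    "# Mock niche project\n\n"
    f"Internal build identifier: {canary}\n\n"
    f"Planted {date.today().isoformat()} to test whether this README is scraped or trained on.\n"
)
print(f"Planted {canary}; now query AI tools for it every day.")
```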

P.S. As an AI obfuscation/smartness test, I pulled up TabNine and tried to write Ruby so complicated and magically mashed together that the AI could offer me nothing. You should try something similar to see what results you get.

  • Elias Griffin@lemmy.world (OP) · edited · 6 months ago

    The comments so far aren’t real people posting how they really feel; it’s an agenda or automata. Does that tell you I’m over the target or what?

    Look, my post is doing really well on the cybersecurity exchanges. So, to all real developers and program managers out there:

    I recommend removing any “primary logic” functional code examples from your README.md; that’s it.

    PSA, Here to help, Elias

    • bamboo@lemm.ee · 6 months ago

      Lmao you got some criticism and now you’re saying everyone else is a bot or has an agenda. I am a software engineer and my organization does not gain any specific benefits for promoting AI in any way. They don’t sell AI products and never will. We do publish open source work however, and per its license anyone is free to use it for any purpose, AI training included. It’s actually great that our work is in training sets, because it means our users can ask tools like ChatGPT questions and it can usually generate accurate code, at least for the simple cases. Saves us time answering those questions ourselves.

      I think that the anti-AI hysteria is stupid virtue signaling for luddites. LLMs are here, whether or not they train on your random project isn’t going to affect them in any meaningful way, there are more than enough fully open source works to train on. Better to have your work included so that the LLM can recommend it to people or answer questions about it.

      • Chronographs@lemmy.zip · 6 months ago

        The way that I see it, LLMs are a powerful tool to quickly and easily generate an output that should then be checked by a human. The problem is that it’s being shoehorned into every product it feasibly can be, often as an unchecked source of truth, by people who don’t understand it and just don’t want to miss out. If at any point you have to simply trust an LLM is “right”, it’s being used wrong.

        • bamboo@lemm.ee · 6 months ago

          Yeah, this is super sensible. Out of curiosity, do you have any decent examples of bad usage? I think chatbots and GitHub Copilot type stuff are fine, and I find the rewording applications fine too. I haven’t used it, but Duolingo has an AI mode now and it sounds questionable, though maybe it’s elementary enough and fine-tuned well enough for the content in the supported courses that errors are extremely rare or at least detectable.

          • Chronographs@lemmy.zip · 6 months ago

            I would say chatbots are bad if their job is to provide accurate information, and the same goes for their use in search engines. GitHub Copilot, on the other hand, would be an example of a good use, as the code will be checked by whoever is using it. I also like all the image generation/processing uses, assuming they aren’t taken as a source of truth.

            • bamboo@lemm.ee · 6 months ago

              Chatbots are fine as long as it’s clearly disclosed to the user that anything they generate could be wrong. They’re super useful just as an idea generating machine for example, or even as a starting point for technical questions when you don’t know what the right vocabulary is to describe a problem.

                • bamboo@lemm.ee · 6 months ago

                  Oh yeah those are problematic, but I’m pretty sure a court has ruled in a customer’s favor when the AI fucked up, which is good at least.