• Grimy@lemmy.world
    21 hours ago

    Yes, that is the one.

    It seems like a nonsense reply from an LLM, copy-pasted into a comment section; it really has nothing to do with the conversation. I figure a human would have re-read it.

    That being said, I took a look at his other comments and they do seem a bit too elaborate for a bot, even though they are mostly on the short and simple side. You can never really know, though; it is definitely possible to build hyper-intelligent agentic commenting bots.

    I’m a bit paranoid about it, and I also thought the user himself had removed the comment (my app just says “removed”; it doesn’t say by whom). That’s why I don’t name-drop or throw out accusations: it’s very, very hard to prove even when it is true. I was going to send it to you in a DM, which is when I saw the edit.

      • Grimy@lemmy.world
        18 hours ago

        Not what I mean.

        I’m saying it is easy to create something that would appear to be an actual human. The context window can be hundreds of thousands of words, so it’s easy to give it the whole article linked in the post, as well as the whole thread, plus other threads pulled from Reddit and other forums, just to help it make a reply that could easily fool someone. It’s also easy to instruct it and train it to speak like a normal social media user.

        You can also have it re-verify what it says, and include other bots whose sole job is to catch GPT-isms, nonsensical replies, refusals, etc. It would basically be a system of multiple bots (maybe dozens), each with a specific job, iterating on the reply.

        You can make hundreds of candidate replies and have another bot rank them, so you only put out a few of the most believable ones every day.
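
        The loop described above (draft many replies, filter out obvious tells, rank, keep the best few) could be sketched roughly like this. All the function names here are hypothetical stand-ins: a real system would replace the stubs with actual LLM API calls.

        ```python
        import random

        # Hypothetical stand-in: a real system would call a model API here,
        # conditioning on the linked article plus the whole thread.
        def generate_candidates(context: str, n: int = 200) -> list[str]:
            return [f"candidate reply {i} to: {context}" for i in range(n)]

        # Critic bot whose only job is catching GPT-isms, refusals, nonsense.
        def passes_critics(reply: str) -> bool:
            tells = ("as an ai", "i cannot", "i'm sorry, but")
            return not any(t in reply.lower() for t in tells)

        # Ranking bot: score how believable the reply sounds (random stub here).
        def believability(reply: str) -> float:
            return random.random()

        def daily_batch(context: str, keep: int = 3) -> list[str]:
            candidates = generate_candidates(context)
            survivors = [r for r in candidates if passes_critics(r)]
            # Only put out the few most believable replies each day.
            return sorted(survivors, key=believability, reverse=True)[:keep]

        replies = daily_batch("article linked in the post")
        print(len(replies))
        ```

        The point is that none of this needs special infrastructure; it is a plain generate-filter-rank loop around off-the-shelf model calls.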

        In this day and age, you kind of need to assume you might be being manipulated. There is no technical barrier to this anymore.