I’ve seen a bot or two; I don’t think we’re in the clear.
The latest one I saw was using a vision model, so it’s able to comment on photos and memes. I should have taken a screenshot of that one; it had clearly misunderstood the meme and gave a definition of what a balloon is because of it (the meme was about cops always shooting the black balloons).
Interesting. I’d like to know more about this if you can provide any further info (e.g., what community or instance the post was on, and how recent the bot comment was).
I’ve definitely seen a number of sus accounts, but they’re usually few and far between, and they oftentimes get addressed immediately. There is a problem, though, with how easily bad actors can join many different instances or create instances of their own to cause trouble.
EDIT: I think I’ve found it. It doesn’t seem to be a full-on bot account, but the comment is AI slop that a human probably chose to post for whatever reason.
Yes, that is the one.
It reads like a nonsense LLM reply that someone copy-pasted into the comment section; it really has nothing to do with the conversation. I’d figure a human would have re-read it first.
That being said, I took a look at his other comments, and they seem a bit too elaborate for a bot, even though they are mostly on the short and simple side. You can never really know, though; it is definitely possible to build hyper-intelligent agentic commenting bots.
I’m a bit paranoid about it, and I also thought the user himself had removed the comment (my app just says “removed”; it doesn’t say by a mod). That’s why I don’t name-drop or throw out accusations: it’s very, very hard to prove, even when it is true. I was going to send it to you in a DM, which is when I saw the edit.
“intelligent” bots? Nope, not even once.
Not what I mean.
I’m saying it is easy to create something that would appear to be an actual human. The context window can be hundreds of thousands of words, so it’s easy to give the model the whole article linked in the post, the whole thread, and other threads pulled from Reddit and other forums, just to help it write a reply that could easily fool someone. It’s also easy to instruct it, or fine-tune it, to speak like a normal social media user.
You can also have it re-verify what it says, and include other bots whose sole job is to catch GPT-isms, nonsensical replies, refusals, etc. It would basically be a system of multiple bots (maybe dozens), each with a specific job, iterating on the output.
You can make hundreds of candidate replies and have another bot rank them, so you only put out a few of the most believable ones every day.
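To be concrete, the generate/check/rank pipeline I’m describing could be sketched in a few lines. Everything here is hypothetical: `generate_candidates`, `passes_checks`, and `believability_score` are stand-ins for real model calls, not code from any actual bot.

```python
import random

def generate_candidates(thread_context, n=100):
    # A "writer" bot drafts many candidate replies from the full thread context.
    # Stand-in: a real pipeline would call an LLM with the article + thread stuffed
    # into the context window.
    return [f"candidate reply {i} about: {thread_context}" for i in range(n)]

def passes_checks(reply):
    # A "checker" bot whose only job is to reject GPT-isms, refusals, and
    # other obvious tells. Stand-in: a simple phrase blocklist.
    tells = ("as an ai", "i cannot", "i'm sorry, but")
    return not any(phrase in reply.lower() for phrase in tells)

def believability_score(reply):
    # A "ranker" bot scores each surviving candidate. Stand-in: a random score
    # where a real system would ask another model "does this read as human?".
    return random.random()

def pick_replies(thread_context, per_day=3):
    # Full pipeline: generate many, filter out the tells, post only the
    # few most believable replies per day.
    candidates = generate_candidates(thread_context)
    survivors = [r for r in candidates if passes_checks(r)]
    ranked = sorted(survivors, key=believability_score, reverse=True)
    return ranked[:per_day]

print(pick_replies("linked article + whole thread"))
```

The point of the sketch is that none of the pieces require anything exotic; each role is just another prompt to another model.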
In this day and age, you kind of need to assume you might be being manipulated. There is no technical barrier to this anymore.