“intelligent” bots? Nope, not even once.
Not what I mean.
I’m saying it is easy to create something that would appear to be an actual human. The context window can be hundreds of thousands of words, so it’s easy to give it the whole article linked in the post, plus the whole thread and other threads pulled from reddit and other forums, just to help it write a reply that could easily fool someone. It’s also easy to instruct or fine-tune it to write like a normal social media user.
You can also have it basically re-verify what it says, and include other bots whose only job is to catch GPT-isms, nonsensical replies, refusals, etc. The whole thing would basically be a system of multiple bots (maybe dozens), each with a specific job, iterating on the output.
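Just to make the idea concrete, here’s a minimal sketch of that “generator plus critic bots” setup. `call_llm` is a stand-in for whatever chat-model API you’d actually use, and the prompts are made up for illustration; none of this is a real library.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to whatever LLM API you actually use."""
    raise NotImplementedError("plug in a real model call here")

def draft_reply(article: str, thread: str, related_threads: list[str]) -> str:
    # Stuff the linked article, the current thread, and related discussions
    # into one big prompt so the reply has real context to draw on.
    context = "\n\n".join([article, thread, *related_threads])
    prompt = (
        "You are a casual social media user. Read the material below and "
        "write a short, natural-sounding reply to the thread.\n\n" + context
    )
    return call_llm(prompt)

# Each critic bot has one narrow job, e.g. catching GPT-isms or refusals.
CRITIC_PROMPTS = [
    "Does this reply contain obvious GPT-isms (e.g. 'As an AI...', overly "
    "balanced hedging)? Answer PASS or FAIL with a reason:\n\n",
    "Is this reply nonsensical, off-topic, or a refusal? Answer PASS or FAIL:\n\n",
]

def passes_critics(reply: str) -> bool:
    # Run every critic; any FAIL means the reply gets thrown out or redrafted.
    return all(call_llm(p + reply).strip().startswith("PASS")
               for p in CRITIC_PROMPTS)
```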
You can generate hundreds of candidate replies and have another bot rank them, so you only put out a few of the most believable ones every day.
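And the ranking step, continuing the sketch above (reusing the placeholder `call_llm` and `draft_reply`). The scoring prompt and the 0-10 scale are just assumptions for illustration:

```python
def rank_replies(candidates: list[str], top_k: int = 3) -> list[str]:
    scored = []
    for reply in candidates:
        # A separate "ranker" bot scores each candidate for believability.
        score_text = call_llm(
            "Rate how much this reads like a real person on a forum, "
            "0 (obviously a bot) to 10 (indistinguishable). "
            "Reply with just the number:\n\n" + reply
        )
        try:
            score = float(score_text.strip())
        except ValueError:
            score = 0.0  # an unparseable score counts against the candidate
        scored.append((score, reply))
    scored.sort(reverse=True)
    return [reply for _, reply in scored[:top_k]]

# Rough usage: draft a few hundred candidates, filter with the critics,
# then post only the handful that rank highest each day.
```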
In this day and age, you kind of need to assume you might be being manipulated. There is no technical barrier to this anymore.