Online harassers are generating images and sounds that simulate their victims in violent situations.
Man, it’s starting to seem like the people developing AI aren’t being very responsible with it.
What do you propose they do? How do you detect and stop something like this? I don’t think it’s possible.
How can I contribute to this?