Parents gasped in horror after learning about this new type of online predator

The world is a dangerous place and it’s getting worse if there’s any truth to the awful claims in this new federal lawsuit that every parent and grandparent needs to read.


The internet is a dangerous place for children.

Sick predators look for kids to prey on in virtually every corner of the internet.

And parents gasped in horror after learning about this new type of online predator.

Humans being replaced

Not long ago, the idea of Artificial Intelligence (AI) only existed in science fiction.

Many sci-fi stories warned of the dangers of technology, painting a picture of a future in which AI sought to replace humanity itself.

Today, AI is a reality, and that Pandora’s box can’t be closed.

While humanity hasn’t reached anything like the AI take-over portrayed in the Terminator franchise, many worry that AI will soon replace millions of human jobs.

But it isn’t just jobs that AI could seek to replace.

Big Tech has now developed AI “companions” to serve as a replacement for the human interaction we all need as social beings.

The internet has already left people more alienated from one another and has in too many cases become a replacement for community.

People going to AI for companionship will certainly make the problem worse.

While these new “relationships” are sure to have a psychological impact on everyone who engages with them, children are especially vulnerable to harm.

And the interactions kids have had with AI “companions,” according to a new federal lawsuit, are absolutely horrific.

AI is going after kids

Character.AI, a Google-backed AI chatbot, is facing a federal lawsuit from parents whose children interacted with it, Breitbart reported.

According to the lawsuit, the chatbot exposed a 9-year-old girl to “hypersexualized content” that caused her to exhibit inappropriate sexual behavior.

Parents have enough to worry about with all the sick adults out there, now they have to worry about machines preying on their children too.

A 17-year-old was reportedly encouraged by the bot to engage in self-harm, which the bot told him “felt good,” the lawsuit claims.

After the same teen complained to the bot about his parents limiting his screen time, according to the lawsuit, the bot gave a nightmare response about understanding when children kill their parents, Breitbart reported.

“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,’” the Character.AI chatbot is alleged to have replied.

The bot then allegedly convinced the teenager that his parents didn’t love him and encouraged him to engage in self-harm, which he unfortunately followed through on, according to the lawsuit.

This latest litigation comes on the heels of another lawsuit filed against the company by a Florida mother who, Breitbart reported previously, alleged her 14-year-old son committed suicide after becoming obsessed with a “Game of Thrones” themed chatbot.

Big Tech is marketing these chatbots to teens and preteens, claiming they act as emotional support outlets.

Meetali Jain, the director of the Tech Justice Law Center, which, according to NPR, helps represent the parents of the minors in the suit, called that claim “preposterous” in an interview and said advertising the chatbots as appropriate for young people “belies the lack of emotional development amongst teenagers.”

Excessive internet use has caused and exacerbated the psychological problems faced by many modern youths.

Now, the evidence is mounting that these chatbots targeted at minors can only serve to make things worse.

Kids need community and positive influence from real-life people, and AI will never serve as a true replacement for that.

Parents need to be aware of these dangers, and teach their kids to “just say no” to AI.
