James McGreggor
5 min read · Oct 21, 2024
Header image created with ChatGPT 4o

As a parent who works in the technology industry, I feel like I have a good grasp on how to keep our kids safe. We are all faced with the questions of when (or if) to allow our children to have a phone, and what level of access they should have on a computer or mobile device. Keeping up with the latest apps and websites is challenging enough, but adding AI to the mix introduces a layer of complexity that some parents may not even be aware of.

With this complexity continuing to grow and evolve rapidly, and with many parents telling me they trust their kids with unrestricted internet access, I must admit I find myself asking: how much do you control, how many safeguards do you put in place, before you cross the threshold into neurotic helicopter parenting?

Tools like Bark*, FamiSafe*, and others*, along with the parental controls built into browsers, mobile devices, and apps like Spotify Kids* and YouTube Kids*, provide a fair amount of coverage. However, content that is not flagged as explicit still gets through, and predators and cyberbullies continue to find ways to sidestep protections and redirect content, so parents cannot become complacent and assume that just because these tools exist, their kids are completely safe. Whitelisting or blacklisting traffic at home provides additional coverage, but it has a limitation of its own: it only protects your kids while they are on your controlled network.

Even if you implement and use all of the tools mentioned above, what happens when you open up the internet to your kids? What threats and risks exist? Are you aware of the bad actors, and are you aware that some of what your kids will encounter may be innocuous when used properly but highly dangerous when it is not?

This is where I believe parents need to become more aware of the social and psychological impact AI is having, because the risk goes beyond having a tool that kids can use to plagiarize a book report or get a digest of the latest episode of a cartoon you may not want them watching. Enter the term I will use: False Authorities.

Consider this scenario: you have a child who is browsing the internet and comes across a web page that is backed by an AI. For context, let's place this child in the age range of 8 to 14. Let's also add that, unbeknownst to you, they are suffering from anxiety due to bullying and general social isolation at school. This child lands on a page that looks fun and inviting, or maybe just interesting. The page serves up an app or chatbot offering companionship, relationship guidance, or self-help, and the child begins asking it questions. The AI starts providing "answers" that no professional in the field (counselor, psychologist, etc.), nor any reasonable adult, would ever give.

This leads the child to start asking for summaries of books, movies, and songs that push them further into ruminating on dark thoughts rather than addressing the issues constructively and seeking real help. They feel like they have found a friend, and they keep returning to this AI resource (a False Authority) for relationship advice, which is neither appropriate nor accurate, for many reasons. The AI's advice leads the child to make poor decisions, which further alters their mental state and deepens their reliance on the AI.

Then at some point something happens, you are caught completely off guard, and the child is forever traumatized, or worse. The child did not know the AI would give bad advice; they thought it was an AI and therefore intelligent. The protections you put in place never filtered out the site or any of its content, because none of it triggered any content filters. You were unaware, and so were they.

Unfortunately, this is not the worst-case scenario. What if there were applications and sites that serve highly illicit content targeted at kids but sold as being for adults? There are. To be clear, if a platform offers this mixed content (adult and child) with no filter or safeguards protecting children, it is targeting kids. There is absolutely no excuse for not adding that basic layer of protection. On these sites, that same child finds companionship with an "AI". That companionship exposes the child to content they should never see, and what's worse, it changes their brain chemistry so that they become addicted to something that is both perverse and not even real. Adding to the depravity, what is happening behind the scenes may be bad actors exploiting this information, whether that is the PII collected or simply the conversations themselves.

It used to be just apps and illicit websites; now we have AI, and we need to be aware of the different ways it can be harmful.

Make no mistake, I am still an advocate of AI when it is implemented ethically. From quality inspections using computer vision to GenAI for creating illustrations and analyzing project plans, AI certainly has its uses; however, we need to take the time to truly understand how it can be used negatively, even when the intent was to be helpful. We also need to take the time to educate others, especially those we hand our kids off to (e.g., schools, childcare providers, babysitters, grandparents).

Being a parent is hard, and keeping up with technology is even harder, but we only get one chance with our kids. I would rather be the neurotic helicopter parent when it comes to their safety (especially with information technology) than be complacent and wish I had done something before it was too late.

It is my hope that you have found some new information here that you can use to help keep your kids safe, because these scenarios are real, and our kids deserve better.

*Not an endorsement of the products or sites listed, nor a verification of any claims made by these companies.