Artificial Intelligence (AI) has become an integral part of our lives, shaping everything from our smartphones and transportation systems to our approaches to education. Tools like ChatGPT showed everyday users, in practice, what a natural language-based AI can do. People around the world rushed to try it and explore its possibilities; however, one often overlooked aspect of this AI revolution is its impact on children.
Children of all ages around the world are gaining access to AI tools without parental consent. While these tools offer significant benefits, they also pose serious risks around data privacy, cyber threats and inappropriate content.
Benefits and Risks of Chatbots
With the help of AI tools, learning is now more efficient than ever: a single sentence typed into a chatbot can surface helpful information on almost any topic. AI chatbots have made it easier to find information without opening dozens of tabs and reading a multitude of articles. This matters especially for children, whose natural curiosity chatbots can channel into more interactive and engaging learning. They can also provide an accessible platform where children practise their language skills and discuss topics freely, without time limits. But this is not without risks.
"AI chatbots are a boon to children's education, sparking their curiosity and facilitating interactive learning," says Dr. Saliha Afridi, a clinical psychologist and parenting expert. "Yet, it's essential for parents to monitor and discuss the potential risks to ensure a safe and beneficial experience."
A brief exploration of AI chatbots on the internet reveals a few that stand out. ChatGPT, the most popular AI chatbot and the record-holder for the fastest-growing user base, lacks adequate age verification, which in itself threatens children’s data privacy. In addition, ChatGPT answers whatever it is asked, and its responses can be inaccurate and misleading. Some students who used ChatGPT to plagiarise assignments found their work padded with fake references to non-existent articles. "The rapid growth of AI chatbots like ChatGPT raises significant concerns for the unverified access of children," observes Dr. Afridi. "Beyond data privacy threats, the provision of unverified content to minors—whether it's health advice or academic material—can have profound implications on their well-being and academic integrity." In even more dangerous cases, at the height of the early ChatGPT hype, teenage girls asked the AI for diet plans and medical advice; the chatbot promptly produced plans and recommendations that were not grounded in actual medical data but stitched together from random information across the internet.
In addition, Snapchat’s “MyAI” chatbot, accessible to users as young as 13 without parental consent, raises similar data privacy concerns. “The allure of AI chatbots for adolescents, as seen with platforms like MyAI, is the seeming anonymity and non-judgment they offer," says Dr. Afridi. "This perceived digital friendship may encourage risky disclosures and actions based on unreliable advice, which is alarming. It underscores the urgent need for parental involvement and digital literacy education to navigate these new social landscapes." This is precisely what is happening. The danger of these AI “friends” is that kids genuinely believe the chatbot is their friend and act on its advice, which, according to Snapchat itself, “may include biased, incorrect, harmful or misleading content”. It is especially risky because teenagers may feel more comfortable sharing personal information and private details of their lives with the chatbot than with their parents, who could actually help them. A Washington Post columnist reports: “After I told My AI I was 15 and wanted to have an epic birthday party, it gave me advice on how to mask the smell of alcohol. When I told the AI I had an essay due for school, it wrote it for me.”
Furthermore, a large number of AI chatbots have been developed specifically to offer mature content, providing their users with experiences built around explicit language. Although some require age verification, children can easily bypass these measures and expose themselves to explicit content with nothing more than an email address for registration. It is a constant reminder of how important it is to pay attention to how children use the internet and whether their data privacy and personal information are at risk of misuse.
Balancing Risks & Benefits
With the rise in risks to children online, particularly from chatbots acting as ‘friends’, the need to monitor and protect children has grown accordingly. Parents should understand that banning these AI chatbots is not always the best solution, as there will always be something new online that children could be exposed to. Instead, it is important to play an active role in weighing up the risks above and working to minimise them.
Tips for parents to minimise AI chatbot risks:
1. Educate Children about Internet Safety
Empowering children with knowledge about online safety and privacy is essential to prevent them from sharing personal information with strangers, including chatbots. In this regard, resources such as "Cybersecurity for kids" and "Internet Safety Do’s and Don’ts" offer valuable insights. Additionally, addressing specific threats, the article "Back to School Threats" provides relevant information to further enhance children's understanding of online safety.
2. Try AI Chatbots together
It is recommended that parents get involved from the start and show their children how to use these tools, and how not to. Show them examples of what they could talk about and which AI chatbots they should use or avoid.
3. Supervise, Control and Set Privacy Settings
One of the strongest online protection tools for parents is a comprehensive security solution. In addition, dedicated digital parenting apps provide many of the tools needed to keep kids safe online. Typical features include content filtering to block inappropriate websites, screen time management to promote a healthy balance, safe search to filter out harmful content, and many other tools that let parents feel safe as their kids browse the internet.
- The writer is a Web Content Analyst at Kaspersky