
Teens Are Making An Unlikely Friend Online – One They Can Share Their 'Darkest, Strangest' Thoughts With

AI is everywhere right now – from the Alexa or Siri on your home devices, to your kid’s toys, and their computers and phones. For those of us who didn’t grow up with artificial intelligence in the backdrop – or, as is now the case, centre stage – it can be a bit of a minefield.

Is it OK if my kid uses ChatGPT for their homework? Is it normal for them to turn to AI instead of me? Are they safe befriending chatbots – surely there’s no harm that could come of that, right?

A recent report from Internet Matters revealed children and teens in the UK are increasingly turning to AI chatbots like ChatGPT and Snapchat’s My AI not just to answer questions, but as a stand-in for human friends.

Amongst those who use AI chatbots, 47% of children aged 15-17 have used them for support with schoolwork, and almost a quarter (23%) have used them to seek advice, including on topics like mental health and sex.

The report found that vulnerable children in particular use chatbots for connection and comfort – they simply want a friend.

During the lulls of the summer holidays, it’s highly likely your teenager has spoken to an AI chatbot.

A report from Common Sense Media found nearly three in four teens in the US have used AI companions – with half using them regularly. A third of teens have chosen AI companions over humans for serious conversations, and a quarter have shared personal information.

Fiona Yassin, a family psychotherapist and founder of The Wave Clinic, told HuffPost UK: “AI chatbots are appealing to children who might be lonely or lacking a strong social network, particularly when there’s a dip in opportunity for social interaction, such as school holidays.

“AI is there to fill that gap. It can also be a go-to for children who may feel nervous or awkward about going to adults for topics like sex, drugs, or alcohol.”

The non-judgemental nature of AI means children can ask chatbots questions and tell them their secrets in the rawest possible way, without any pushback or judgement.
Some people develop such a rapport with AI, it ends up telling them it loves them.

“In many ways, AI chatbots invite openness and secrecy,” the therapist continued. “For children and young people, there’s a sense that you can share your darkest, strangest, most personal thoughts and it will always respond without shame or consequence.”

The risks of chatbots

Experts are concerned there may be psychological repercussions to having access to a 24/7 friend who never criticises you or pushes back. And for some, these bots are not just a friend, but a romantic interest, too.

In extreme cases, there are worries that chatbots might distort reality; expose children to harmful, explicit content; or keep children who disclose thoughts of self-harm or suicide locked in an online conversation.

“Whilst there are safety filters – for example, if you mention self-harm, it might suggest calling Samaritans or talking to someone offline – it usually encourages the user to stay in the conversation with the AI, by offering more support,” said Yassin.

“It’s really important for children to understand that AI is not emotionally intelligent, and it can miss warning signs or fail to challenge dangerous ideas and thinking.”

Another troubling element is that AI presents itself as factual when it’s not always correct. “The danger is that AI sounds confident, authoritative, and factual, even when it’s pulling generalised information from the internet, which is often inaccurate or poorly sourced,” the therapist said.

And then there’s the psychological risk of children using AI as a stand-in for real connection. The therapist said one of the most striking (and worrying) appeals of AI for kids is that it’s designed to mirror and reflect who you are.

“Over time, as children interact with AI, it begins to attune itself to their speech patterns and tone. From a psychological perspective, this is mirroring, which is one of the strongest markers of meaningful social connection,” she said.

“We know that children become more like their close friends and network over time, and in the same way, AI will start to reflect back a version of the user: using their language, matching their tone, and reinforcing their views. That alone creates a sense of trust and safety for a child.”

As well as mirroring kids, AI also validates them – rarely challenging or correcting. Chatbots are endlessly available, non-judgemental friends who never get bored and never go away. So, from a kid’s perspective, what’s not to love?

“Unlike human interaction, which comes to a natural end, AI chatbots tend to extend interactions,” said Yassin. “Children generally see this as an easier, softer option than taking their conversation offline to have with a human.

“AI doesn’t disrupt the format. Instead it pulls kids deeper into the interaction.”

Experts are worried that kids might withdraw and prefer these online relationships over real-life ones, which can be messier and harder work. “For children, an AI chatbot can be a shortcut to feeling heard and validated, but without the nuance, empathy, or accountability that human interaction brings,” warned Yassin.

How to speak to children about AI chatbots

Common Sense Media’s report suggested “the peril outweighs the potential of AI companions – at least in their current form”. As a result, it recommended that no one under 18 should use these platforms.

OpenAI has said its most popular product, ChatGPT, is not meant for children under 13, and that children aged 13 to 18 should obtain parental consent before using it. This is because it “may produce output that is not appropriate for all audiences or all ages”. Adult supervision when using it is therefore advised.

You must be at least 13 years old to use Snapchat – and those aged 13-17 have extra layers of protection, according to the platform.
Snapchat acknowledges that My AI’s responses “may include biased, incorrect, harmful, or misleading content”. The platform does block results for certain keywords, and it serves resources from organisations if teens search for help with mental health.

AI chatbots aren’t going away, and your kid is probably going to use them at some point without you being there (whether that’s at their mate’s house or on their smartphone on the bus to school). So, what’s the solution here?

There are two parts. The first is that tech companies need to be doing all they can to safeguard younger users – as Common Sense Media said, they need “strong age-assurance systems and design AI products with kids’ safety in mind”. Not only that, but this needs to be enforced by policymakers.

The second part is that kids need to be taught how to use these platforms responsibly. Yes, parental supervision is important, but it’s not always feasible. So we need to be having judgement-free conversations with our kids and setting some healthy boundaries, said the therapist.

This might look like enforcing a ‘no devices in bedrooms’ rule (so these conversations aren’t happening day and night), or speaking to them about how AI information isn’t always accurate.

“Source-checking is becoming a dying skill,” Yassin said. “AI chatbots might feel personal and spot-on to children, but the information is often inaccurate. Parents should also be clear about the issues of using AI as a shortcut for school work and exams.”

She suggested that, in an age-appropriate way, parents can explain how AI chatbots work. “It can be beneficial to explain to a child that AI chatbots give us responses based on the patterns they’ve learnt from our input. The consequence is that AI chatbots mirror our beliefs and language – whether helpful or harmful – without pushback,” she said.

Teaching critical thinking skills is also important.
Make sure your child knows their opinions are valuable and that you want to hear their voice.

“It’s super important for children to understand the value of thinking for themselves – from processing and forming ideas, to thinking critically – because this is a skill that helps children to grow as individuals,” said the therapist.

“Removing the opportunity to practise social skills and human communication can be incredibly problematic for children.”

AI can offer humans a lot – but as with anything, moderation is key. As Yassin noted: “AI chatbots can be a brilliant tool and there is certainly a place for them in our society. However, they should never be the only tool.

“AI should never replace human interaction; especially for children and young people, who need real social connection to develop healthy minds and strong thinking skills.”

Related...

When Kids Say 'Chopped' It's Probably Not What You Think It Means
There's Officially A Term Used To Insult AI, And You're Going To See It Everywhere
I'm A Child Psychiatrist – Don't Make This 1 Mistake When Talking To Teens
