The latest Ofcom report highlighting the numbers of underage TikTok users – or ‘TikTots’ – serves as further evidence of a growing issue: the duty of care when it comes to vulnerable internet users.
More concerning for parents are Ofcom’s further findings on the numbers of such users actively lying about their ages to join platforms such as Instagram and TikTok with fake accounts, as well as deleting their search histories (19%), using incognito mode to access sites (21%) and, in one in 20 cases, circumventing existing parental controls.
These companies could solve this problem by implementing robust age verification services, thereby creating safer spaces for children and young people, but as yet they have failed to do so. Arguably, they are knowingly putting children and young people at risk. Children and young people typically want to interact with people in the same age band and with friends they know in the real world.
Safer Internet Day
These latest stats follow a Safer Internet Day report, published by UK Safer Internet Centre, which found that 38% of young people who play online games say they have experienced offensive or mean comments from other players once per week or more.
Some 60% of young people want to learn more about how to avoid strangers sending them requests in online games. Plus, the majority (59%) of young people want to learn more about how to safely play games online.
38% of young people who play online games say they have experienced offensive or mean comments from other players
Clearly, existing safeguards to protect children and young people are not enough, and there are growing calls for more to be done, and quickly – take Duncan McCann, who is pursuing a class action against YouTube and separate action against TikTok.
Parents, youth organisations and young people themselves are mobilising to call on companies to create safer spaces for children and young people
Digital educational development
This issue is closely interlinked with digital educational development. We recently spoke with Nic Wetton, the headteacher of a primary school in Chester, about Safer Internet Day and her experiences with children’s online safety. She spoke powerfully about the ways in which children’s digital and offline lives intersect and how education is impacted by the online world.
For instance, in relationships education, an emerging challenge has been how best to teach children what a friend is, without exposing them to the dangers of what might happen if they misidentify a stranger they’ve met online as a friend. There’s a balancing act, she says, between what students need to know and what they are old enough to know.
92% of girls and 74% of boys said sexist name-calling happens a lot or sometimes to them or their peers
This balancing act is also shaped by the critical role of curiosity and risk-taking in learning and development, both of which are complicated by the severity of the consequences of risky online behaviour and by the new social norms taking shape in digital contexts.
Ofsted’s Review of Sexual Abuse in Schools and Colleges highlighted that “sexual harassment occurs so frequently that it has become ‘commonplace’. For example, 92% of girls and 74% of boys said sexist name-calling happens a lot or sometimes to them or their peers. The frequency of these harmful sexual behaviours means that some children and young people consider them normal”.
This normalisation has also been identified as leading to the commonly held sentiment that there is no point in reporting and/or challenging such behaviour.
Taking a bottom-up approach
Wetton highlighted the power of online influences as being disproportionately powerful among children and young people when compared to those of teachers, school staff and even parents. This has, in part, led to the situation that Ofsted identified in its report whereby “in some schools, the threat of being caught and punished is a much weaker influence on behaviour than an underlying culture where sexual harassment and online sexual abuse can thrive”.
It is vital, then, to try to tackle the harms and safeguarding challenges arising from internet use with a bottom-up approach, beginning with the platforms that children are using, rather than a caretaker oversight approach. Prevention is always better than cure, so we must go to the source of the issues to ensure they are effectively mitigated.
Gaming, social media and other online platforms must be held accountable, with stronger harm prevention strategies for children and young people. Vulnerability is fluid and ever-changing for young people, and it is not always possible to know which characteristics will make somebody vulnerable. It is, therefore, essential that the industry takes a broad approach to harm prevention.
One such approach is age verification, which would enable those same platforms to render their services age appropriate.
With stronger age verification measures in place, it will be possible to cut lines of communication between children and young people and adults with a sexual interest in children, or those who target children with malicious intent. With opportunities to ensure connections remain age appropriate, upholding children’s rights and stated needs in digital environments becomes far more achievable.
Another critical issue is that of data-driven harms, whereby children’s data – both that associated with their physical identity and that generated by their online behaviour – is tracked and sold, often to the detriment of their wellbeing. For example, in one case a student looked up a smoothie recipe, which triggered an algorithm to surface health-related content that then devolved into extreme dieting content. This kind of snowballing is remarkably commonplace.
The processes underpinning content recommendation algorithms also power contact recommendation systems. Adults with a sexual interest in children no longer need to search extensively for children online to groom. They simply watch a few videos produced by children in a specific age band, gender and ethnicity that match their preferences, and the recommendation systems will surface children matching those preferences.
One major way to prevent the proliferation of online harms to children is to take a curriculum-based approach to tackle a culture where reporting is perceived as ‘snitching’
Prevention is core to effectively tackling online harms. One major way to prevent the proliferation of online harms to children is to take a curriculum-based approach to tackle a culture where reporting is perceived as ‘snitching’. Part of that is to demonstrate a capacity to listen and allow children and young people to lead the conversation.
The recent announcement by the UK’s Digital Minister Chris Philp that age verification will be required on sites that publish pornography is a confident step forward for age-appropriate technical blocks. However, it does not tackle the new social norms that have proliferated in digital environments, where children and young people are overwhelmingly influenced by adults whose activities would be age-gated in offline contexts.
There needs to be a holistic approach to tackling online harms by regulators, online platforms and educators – one which accounts for the change in social norms, is led by children and young people’s experiences and needs, and implements prevention in age-appropriate ways as children grow and develop. What works for a six-year-old will not be the best solution for a 14-year-old.