Caroline Keen

Children's digital privacy rights just got more grit!

Updated: Nov 9, 2021

A significant portion of General Comment No. 25 addresses children's data privacy, with particular emphasis on the use of AI both in consumer products such as the Internet of Toys (IoT) and in the growing industry of AI in education technologies - EdTech.

Some time in the making, its architects and advocates are jubilant that this is a 'game changer': all governments duty-bound to the UN Convention on the Rights of the Child will now have clear instructions on why and how they need to protect children's rights in the digital context. This means they will need to ensure that businesses, organizations, and institutions meet these new responsibilities.





Several of the key highlights in General Comment No. 25 that speak to issues of children's privacy will be important for governments, schools and EdTech companies going forward.

Section VI, E. 'Right to Privacy' outlines a strong commitment to children's data privacy, acknowledging that 'Children’s personal data are processed to offer educational, health and other benefits to them' but that such activities pose new threats: 'Threats to children’s privacy may arise from data collection and processing by public institutions, businesses and other organizations, as well as from such criminal activities as identity theft' (CRC/C/GC/25, p. 11).

The document is forward-thinking, clearly identifying the evolving nature and scope of new forms of data being generated about children as they use digital technologies.

Although most parents and teenagers remain unaware of the extent of information being collected, General Comment No. 25 is very specific in raising and addressing the concerns of privacy advocates and researchers:

'Data may include information about, inter alia, children’s identities, activities, location, communication, emotions, health and relationships. Certain combinations of personal data, including biometric data, can uniquely identify a child. Digital practices, such as automated data processing, profiling, behavioural targeting, mandatory identity verification, information filtering and mass surveillance are becoming routine. Such practices may lead to arbitrary or unlawful interference with children’s right to privacy; they may have adverse consequences on children, which can continue to affect them at later stages of their lives.' (CRC/C/GC/25, pp. 11-12).

The key point here is that interference with a child's privacy is now only permissible if provided for by law, and must be 'intended to serve a legitimate purpose, uphold the principle of data minimization, be proportionate and designed to observe the best interests of the child', all of which raises questions around the use of AI in EdTech.

Educational technology practices are undoubtedly accelerating due to Covid-19, with increased online learning, recording through cameras, monitoring of students online, session recording, and technology that tracks attention and engagement. But the use of AI to monitor mental health and wellbeing, while promising to improve individual learning, may also have downsides. We need to consider a range of issues: the efficacy of the underlying science, the reductionist approach to measuring human emotion, its problems capturing diversity, its impact on children's understandings of emotional life and development, the role of educators and private companies in children's wellbeing should data indicate vulnerability and risk, reliance on digital measurement over human engagement, and questions around 'choice' and 'consent' to the use of AI in the education sector.



Consent by parents and by children has been flagged as an issue, and it is one the education sector will need to consider given the exponential growth in EdTech and a Covid crisis that saw investment in EdTech start-ups more than treble in 2020.

How schools and educational institutions deal with the student data generated through educational technologies will need to be carefully thought through, particularly where they use tracking technologies, biometric data and AI that promise to deliver personalized learning but may not rest on sound scientific foundations appropriate for the developing child.

One concern I have is the way in which technology is appropriated and repurposed for consumer markets. AI is frequently developed in military and medical sciences only to be co-opted for other uses. One such example is the adoption of facial recognition to assess emotion. Originally developed to help identify the emotional state of adult dementia patients, this technology has rapidly been applied to assessing the emotional states of children through facial recognition while they do their schoolwork online. However, such uses run up against emerging regulatory frameworks such as CRC/C/GC/25 and the GDPR, not to mention various Acts, Bills and industry codes across the US, UK and Europe that must now recognise and limit biometric and AI applications that harvest children's data.

EdTech using emotional AI is in tension with the UN CRC's stance. On one side, one could argue that applications of AI in the classroom will help a child reach their 'fullest potential', which is one provision of General Comment No. 25. It is tempting for educational institutions to adopt this normative defense and treat the use of AI as essential for individuals to reach their full potential, but we now need to consider children's rights more fully in the digital context, which will mean evaluating EdTech against the UN's General Comment No. 25 on children's rights in relation to the digital environment.


I will be talking more about children's rights to data privacy in the education sector in Auckland next week at the Cyber Security for Schools Conference.


Dr Caroline Keen
