Privacy Perspective: Adopting Safe and Secure AI

How do schools and teachers promote safe and secure use of Artificial Intelligence (AI) technology?

Privacy Awareness Week 2025 is an opportunity to reflect on emerging issues schools are facing with respect to AI use and its integration in the school environment. AI is increasingly relied on in schools, and its use for student learning and assessment is expected to become commonplace.

Generative Artificial Intelligence (the AI in vogue at present) refers to computer-based learning models, including large language models and multimodal models.

Since the inception of ChatGPT in 2022, governments have made several pivots to address, and in some cases welcome, the use of generative AI in different contexts. Schools deploying AI tools to drive efficiency and enhance learning are already turning their minds to the risks that come with adopting new technologies. This article explores some of the data privacy and security implications of using generative AI tools. To read about managing the overall risk of AI in schools, please refer to our previous article here.

Privacy and Security Concerns

Generative AI has the capacity to learn from the information that is input into it; that information can become part of the training model, particularly with open-source AI tools. Some of the key risks schools face when using or seeking to deploy AI-powered tools include:

  • Unintentional disclosure of personal information;
  • Inability to track who or what has access to the information;
  • Confusion around whether consent has been provided to use the AI software for the intended purpose;
  • Incompatibility with retention and destruction obligations; and
  • Loss of trust or reputational damage.

Combatting Privacy and Security Concerns

There is no argument that the AI landscape is changing rapidly, and as it develops further, the accessibility and ease of its use will only grow.

Whilst AI can help schools streamline their administrative processes and support student learning outcomes, when considering generative AI tools, schools must deploy and use these technologies in a manner consistent with existing privacy laws.

Developing an AI framework can also help set ground rules for how schools approach the implementation of AI-powered applications. The framework should ensure visibility of what AI is being used or is intended to be used, the intended purpose of use, the data that will be fed into each tool, the capabilities of each tool, and the terms and conditions of the service providers. Once this is mapped, schools can make informed decisions about what safeguards are required to integrate those tools into regular practice and school operations with confidence.

How do schools keep privacy front of mind?

In previous articles we have emphasised the importance of privacy-by-design in managing systems changes and reducing the risk of data breaches or non-compliance with privacy laws. When considering generative AI, the principles of privacy by design should be applied.

Key recommendations

  • Develop an AI framework: it is recommended schools develop and adopt a strategic approach which governs the use of AI in their school environment. This will set schools up to ensure implementation aligns with objectives, risk appetite and data privacy obligations.
  • Data Privacy Impact Assessments: Conducting data privacy impact assessments to evaluate the potential risks associated with data collection and feeding practices in AI-driven education, when seeking to implement new technologies, is key to making privacy an automatic consideration. This will help identify potential privacy risks and inform appropriate mitigation strategies from the outset, rather than trying to retrofit privacy requirements.
  • Promote a Culture of Privacy Awareness: Encourage a culture of privacy awareness within the school community, emphasising the importance of safeguarding students’ personal information and fostering a sense of responsibility and accountability among educators. Bring students into the conversation and consider a child friendly privacy policy or collection statement.
  • Implement Data Collection Guidelines: Establish clear guidelines for data collection, specifying the types of data that can be fed into AI models and ensuring that only relevant and necessary information is used.

Looking to the future and how we can help

Our Education team is in demand for up-to-date, informative and practical staff Professional Development on privacy matters, including AI. Our team can assist with reviewing and updating your privacy and AI policies to ensure your organisation continues to mitigate the privacy and data security risks posed by new technologies. We can also provide tailored advice and support on your commercial arrangements with technology service providers and on data privacy impact assessments.

What our clients say about our Professional Development sessions:

I want to thank you for participating in our seminar. We are extremely appreciative of your significant contribution. Overall, the feedback received from the seminar has been excellent and we are pleased with the outcome.

Thanks very much for your presentation this morning. Directors commented very favourably afterwards and your advice was very useful also. Great discussion, as well. Will see you at the next breakfast session.

Thanks so much Cecelia, the presentation and discussion today was fantastic.

Contact us

Please contact us for more detailed and tailored help.

Learn more about our Professional Development sessions.

Subscribe to our email updates and receive our articles directly in your inbox.

Disclaimer: This article provides general information only and is not intended to constitute legal advice. You should seek legal advice regarding the application of the law to you or your organisation.