When AI Becomes a Weapon: How Schools Can Prevent and Respond to Abusive Deepfakes

We are firmly living in the age of artificial intelligence (AI). Generative AI technologies are evolving at extraordinary speed, and their use is becoming increasingly embedded in everyday life. Children and young people, in particular, are at the forefront of this shift, using AI companions that simulate personal relationships and tools that can manipulate video, audio, or images to make it appear as though someone is saying or doing something they never did.

While these technologies can be creative, engaging, and exciting, they also present an entirely new category of risk for children and young people. Schools are increasingly encountering one of the most concerning manifestations of this risk: the creation and circulation of malicious and abusive deepfakes.

Across our work with school clients, we have seen leaders and Boards grappling with how to prevent these incidents and how to respond decisively when they occur. This is not a localised issue.

In December 2025, the Hong Kong Privacy Commissioner released a practical toolkit to support schools in preventing and responding to abusive deepfake incidents, highlighting the global scale and urgency of the problem.1

Drawing on that guidance alongside our direct experience advising schools, this article sets out:

  • Actionable steps for Boards and school leaders;
  • Practical steps schools can take to prevent the creation of abusive deepfakes; and
  • Strategies for responding effectively when incidents occur.

Actionable steps for boards and school leaders in preventing and responding to deepfakes

Boards and school leaders have a responsibility to take reasonable, proactive steps to protect students from reasonably foreseeable online harms, including the creation of abusive and malicious deepfakes. By adopting a proactive approach, schools can minimise these risks and ensure they are prepared to respond quickly when an incident occurs.

1. Education and Awareness Campaigns

    Education is the first and most critical line of defence. School leaders should invest in educating employees, volunteers, students, and parents about the risks associated with deepfakes and the importance of online privacy. Awareness programs can help students recognise the dangers of manipulated media and create space for them to engage in open conversations about their online activities. Student learning about online behaviour should explicitly address AI‑generated content, noting that many children and young people are unable to reliably identify manipulated or synthetic media, which significantly increases their vulnerability.

    Schools should also communicate clearly with students about:

    • Their responsibilities when using digital tools;
    • The legal and disciplinary consequences of creating or sharing harmful content; and
    • The importance of reporting abusive or malicious deepfakes early.

    Equally important is empowering students to protect themselves online, including the use of privacy settings, blocking and reporting concerning behaviour, and seeking support from trusted adults.

2. Collaborate with Experts and Legal Advisors

    Given the legal and technical complexity of deepfake incidents, schools should not attempt to navigate this landscape alone.

    It is important for schools to understand when the creation and distribution of abusive deepfakes may amount to criminal behaviour. For example, under sections 53S and 53T of the Crimes Act 1958 (Vic) (Victorian Crimes Act), it is a criminal offence to intentionally distribute an intimate image of another person, or to threaten to do so, if the distribution is contrary to community standards of acceptable conduct.2 Consent is irrelevant if the victim is under 18 years old. The Victorian Crimes Act defines “image” to include images digitally created by generating the image or altering or manipulating another image3, which could include malicious and abusive deepfakes. The penalty is up to 3 years’ imprisonment.

    There are also relevant federal offences. For example, under section 75 of the Online Safety Act 2021 (Cth), a person must not post, or threaten to post, an intimate image of another person online without their consent. Under that Act, “intimate image” includes material that has been digitally altered, such as deepfakes. Under section 474.17 of the Criminal Code 1995 (Cth), it is an offence to use a carriage service in a way that reasonable persons would regard as being menacing, harassing or offensive. Under section 474.17A, there is also an offence which applies to those who use technologies to artificially generate or alter sexually explicit material (such as deepfakes) for the purposes of non-consensual sharing online. These offences carry serious criminal penalties of up to six years’ imprisonment.

    Beyond legal advice, schools should also establish relationships with cybersecurity and technology experts who can assist with detection, evidence preservation, and risk mitigation.

3. De-risk Staff and Students’ Use of AI by Establishing Clear Policies and Student Codes of Conduct

    Schools should develop (or update) and implement comprehensive policies that set clear boundaries regarding acceptable use of technology (including AI and deepfakes), expectations for online conduct, and the consequences of misuse.

    Schools should also review their student code of conduct and acceptable use policy to ensure that specific AI image abuse matters are explicitly referenced, including consequences for distributing and sharing material created by others or of unknown origin.

    These policies and codes of conduct should address the importance of safeguarding children’s privacy and set clear boundaries for the use and sharing of images, videos, and personal data as well as the creation of deepfakes using AI tools.

Tips to Prevent the Creation of Abusive Deepfakes

    While schools cannot entirely prevent the creation of deepfakes, they can take steps to limit the risk of their students being targeted. The key lies in proactive privacy protection and data management strategies.

    • Limit the Use of Personal Data and Digital Footprints
      Encourage employees, volunteers, students, and parents to limit the amount of personal information shared online. This includes photos, videos, and other identifying data that could be misused in creating deepfakes. Schools should work with parents to support their children to adjust privacy settings on social media platforms and ensure that images and videos of students are only shared by the school with explicit consent.

    • Adopt Strong Data Protection Practices
      Schools should implement robust data protection measures to ensure that sensitive student information is secure. This includes encrypting data, using secure file-sharing platforms, and conducting regular audits to ensure compliance with data protection regulations. Using secure methods for online teaching, such as password-protected video conferencing, will help protect students from being filmed or recorded without consent.

    • Encourage Use of Watermarking and Digital Signatures on Images and Videos Shared by the School
      Watermarking photos and videos with identifying information, such as the school’s name or logos, can make it more difficult for individuals to use these images in deepfakes. Similarly, digital signatures can help verify the authenticity of images or videos, reducing the likelihood of misuse.

    • Limit the Use of AI-Generated Content
      Schools should be cautious when using AI-generated content for teaching purposes or school events. If students or staff create digital content, it should be made clear that they must not manipulate, edit, or alter others’ images or likenesses or use AI tools to create deepfakes, whether they are intended to be abusive or harmless.

How Schools Should Handle Abusive Deepfake Incidents

    Despite proactive measures, it is possible for malicious and abusive deepfakes to be created and shared online. Schools must be prepared to respond quickly and decisively to protect their students and ensure accountability.

    • Respond Immediately
      Schools must act quickly upon discovering a deepfake involving a student. This includes identifying the source of the abusive content, removing the content from all platforms to the extent this can be done, and notifying, or encouraging the subject of the deepfake to contact, appropriate authorities such as the eSafety Commissioner.

      Care should be taken when gathering information and evidence of probative value, as evidence may be deleted or may be illegal to possess. Whilst it may be necessary and appropriate in some circumstances to take a screenshot of the abusive deepfake image, schools must very carefully consider how any images could be securely stored to minimise the risk of inadvertently breaching child abuse image laws, and ensure images cannot be accessed by anybody who should not have access to them. Consider whether and when the eSafety Commissioner, with its greater powers, may be needed to intervene.

    • Engage with Legal and Regulators, As Required
      Schools should consider whether there is a requirement to notify relevant authorities, including police, when an abusive deepfake is identified. Schools must understand that the creation and distribution of abusive deepfakes could be unlawful or even criminal behaviour, depending on the circumstances. Schools should consult legal experts to navigate and reduce the risk of potential legal claims.

    • Provide Emotional and Psychological Support for Victims
      The psychological impact of being the victim of a deepfake can be significant. Schools should provide access to appropriate services and supports and ensure the child feels supported and protected from further harm. Open communication with the student’s family is important, and schools should work with families to ensure that the student’s privacy is respected, to the greatest extent possible.

    • Public Communication and Rebuilding Trust
      If the deepfake becomes publicly known, schools may need to communicate with a broader audience, such as the school community. Great care should be taken when doing this. Transparency and clear messaging may help rebuild trust among students, employees, volunteers, and parents. A public statement should reassure the community that the school is taking all necessary steps to address the situation, including implementing further safeguards.

    • Long-Term Prevention and Follow-Up
      After an incident, schools should review and revise their policies to ensure they are equipped to prevent future occurrences and learn from any gaps identified. This may include updating data protection policies, enhancing awareness campaigns, and providing additional training for employees and volunteers on identifying deepfake content. Long-term prevention also involves working with parents to educate them about online safety and deepfake risks, ensuring that both school and home environments are aligned in protecting children from digital harm.

      To ensure the proper discharge of their duty of care, schools should train those conducting investigations into student conduct on how to obtain evidence of probative value when evidence may be deleted or may be illegal to possess – and on when the eSafety Commissioner, with its greater powers, may be needed to intervene.

    Schools have a duty of care which requires them to take proactive steps to protect their students’ privacy and personal data, prevent the creation of abusive deepfakes, and respond quickly and appropriately to any deepfake incidents that may occur.

    By educating the school community, collaborating with legal and cybersecurity professionals, and adopting strong data protection policies, schools can significantly reduce the risks associated with malicious and abusive deepfakes. If an incident occurs, a clear and speedy response, combined with the provision of useful supports, will ensure that the wellbeing of affected children and young people is protected.

How we can help

    Our Child Safety, Safeguarding and Discrimination team are skilled in supporting schools to keep children safe. Our team can provide peace of mind in developing and implementing prevention strategies and navigating incidents if they occur.

Contact us

    If you would like to discuss how we can support your organisation, our team is here to help. Please contact Skye Rose or Tal Shmerling if you would like further support.



    Disclaimer: This article provides general information only and is not intended to constitute legal advice. You should seek legal advice regarding the application of the law to you or your organisation.

    1. Office of the Privacy Commissioner for Personal Data, Hong Kong, “Abuse of AI Deepfakes: Toolkit for Schools and Parents”, December 2025. Can be accessed here: https://www.pcpd.org.hk/english/resources_centre/publications/files/ai_deepfake.pdf.
    2. In summary, under section 53S of the Crimes Act 1958 (Vic), if a person (A):
      – intentionally distributes an intimate image of another person (B) to another person (C); and
      – the distribution is contrary to community standards of acceptable conduct; and
      – person (B) did not consent to the distribution of the image or the manner in which the image was distributed;
      that person (A) is guilty of an offence.
      In summary, under section 53T of the Crimes Act 1958 (Vic), if a person (A):
      – threatens another person (B) that they will distribute an intimate image of (B) or a third person (C); and
      – the distribution would be contrary to community standards of acceptable conduct; and
      – (A) intends that (B) will believe they will carry out that threat;
      that person (A) is guilty of an offence.
    3. Section 53O of the Crimes Act 1958 (Vic).