Is Your School at Risk of a Deepfake Attack?

25 November 2021

In simple terms, deepfakes are images or videos that have been manipulated to superimpose the attributes of one person onto another. For example, deepfake technology can be used to create a video in which a person appears to look and sound like someone else, often someone well known in the media such as a politician or an actor.

The word ‘deep’ refers to the use of machine learning, specifically deep-learning technology, to alter an original piece of media so that it appears to be something else. In essence, Artificial Intelligence (AI) software maps a large data set of facial features, unique mannerisms and even patterns of speech from original videos and images, and then transposes these onto another image or video. The result is a near-seamless integration of one person’s characteristics onto another’s: it can look so ‘genuine’ that it is very difficult to distinguish from reality.


How Are Deepfakes Made?

Deepfake technology has existed since the 1990s, although it evolved into its current, more sophisticated form in 2017. Deepfake-style content can be created with a variety of software programs and even on smartphones. Many free apps are available for download that boast the ability to create fun videos by inserting someone’s face into a pre-existing library of videos.
However, much like all technology platforms, the deepfake technology itself is not the problem. It is the application of this media that poses a potential threat to students and schools.

How Are Deepfakes Being Used?

Deepfakes take many forms and are used for an array of different reasons. At the more innocuous end, deepfakes are commonly used as a form of entertainment, usually involving superimposing a person’s face onto that of a celebrity or TV character. Members of the general public commonly post these to social media as a bit of lighthearted fun. In contrast, new trends in cyber-criminal activity have seen deepfakes used in social engineering attacks. Deepfake audio impersonations have been used to trick employees into handing over sensitive company data in the belief that they were speaking with their company director, resulting in multi-million dollar losses.

Deepfakes have also been used as weapons against everyday individuals. An unfortunate trend in deepfake creation has involved the use of innocent people’s faces in pornographic videos. These videos depict non-consensual participants and are an extremely harmful byproduct of this type of media manipulation. A well-known instance of pornographic deepfake abuse was that of 18-year-old Australian law student Noelle Martin. Ms Martin was insidiously targeted in an image-based abuse attack in which her face was superimposed onto a pornographic video that was then widely distributed online, even finding its way onto popular adult video websites. Ms Martin’s account of her experience captures the real and present threat that deepfakes pose to the general public.


What Is the Impact on Our Schools?

Deepfakes may seem like esoteric technology; however, we are seeing this type of media affect our school communities. Students and teachers alike are being subjected to victimisation in, and via, deepfake videos.

Deepfakes have been a devastating vehicle for cyberbullying, image-based abuse and blackmail. Global examples have surfaced of competitive parents using deepfake videos to attack rival children. In ySafe’s dealings with schools, we have assisted with several incidents of secondary school students being victimised online by pornographic deepfakes. The risk posed by this type of media is so significant that the Office of the eSafety Commissioner has formally published a position statement on the topic, in an attempt to educate the general public about this potentially harmful media content.

In the absence of widespread deepfake detection technologies, schools and communities play an important role in combatting the risk posed by the deceitful or non-consensual use of deepfake media.


What Can Schools Do to Seek to Address This Risk?

Although a technology-related issue such as this can feel daunting and difficult to manage, we recommend managing this potential threat in the same way that schools manage all new challenges: through an approach that addresses policy, education and incident management.


The raft of cyber safety issues that exists in schools and that is likely to impact students (and staff) includes:

  • cyber bullying of staff
  • sexting
  • predatory behaviours
  • posting of offensive or illegal content
  • social engineering
  • image-based abuse (such as the use of deepfakes).


Given the complexity of cyber safety risks that can arise, it is critical for schools to implement ‘whole of school’ cyber safety strategies to minimise risks.

Some possible initiatives include:

  • the establishment of a ‘Cyber Safety Team’
  • a structured curriculum and peer group support system that provides age-appropriate information and skills for students relating to cyber safety
  • education and training of students, parents and staff in cyber safety strategies (see the next section)
  • undertaking regular risk assessments in relation to cyber safety within the school by surveying students to identify cyber safety issues
  • maintaining and analysing complaints and other records of reported cyber safety incidents, in order to identify systemic issues and to implement targeted prevention strategies where appropriate
  • including cyber safety strategies in students’ school diaries.


In addition to the steps above for mitigating cyber safety risks, it is critical that schools have in place robust child safety programs. These should include:

  • a Child Protection Program: a holistic Child Protection Program should relate to all aspects of protecting children from abuse and include establishing appropriate and robust work systems, practices, policies and procedures
  • Child Safe Codes of Conduct: Codes of Conduct should list behaviours that are acceptable and those that are unacceptable and may include:
    • a Code of Conduct for all adults that provides high-level statements and expectations of professional boundaries, ethical behaviour and acceptable and unacceptable relationships
    • a Code of Conduct for students that provides clear guidelines for students about their own responsibilities to help ensure the safety and wellbeing of themselves and their peers. Both should set out expectations about conduct when online
  • Student and Staff Social Media Usage Policies.



The fundamental protective mechanism for minimising risk around deepfakes is education: building students’ capacity for critical analysis of digital media.

If students are taught to demonstrate a healthy scepticism about media content, the damaging effects of deepfakes may be minimised for victims. We suggest adopting a mix of formal education and informal conversations with students to promote critical thinking skills.

Strategies to ‘spot a deepfake’ and promote critical thinking skills online include:

  • looking for differences in the resolution in the video/image
  • identifying any ghosting around the hairline or ears
  • considering the emotional response that the media elicits. If something makes a person feel very strongly, either happy, angry or shocked, the media may have been manipulated or presented in a way to evoke this reaction
  • considering whether the video or image seems ‘out of character’ for the person in focus. If so, students need to know to question if and why they would post that type of media content.

Furthermore, we suggest disseminating information that instructs students and parents on what they can do if they see damaging media content. These steps may include reporting the content, avoiding sharing it (even out of concern) and avoiding storing it (in cases of image-based abuse).

Incident Management

Schools are encouraged to follow their existing complaints programs and their incident management and/or behaviour management processes to respond to problematic incidents relating to digital media.

The dissemination of deepfake media may have legal implications, depending on the content itself and the contextual factors surrounding the dissemination. Altered or modified media content may fall under image-based abuse laws if created and/or shared non-consensually.

Deepfake content designed to bully or attack someone may also fall under the Telecommunications Act 1997 (Cth), in respect of the harassment of a person using a carriage service. These points may be taken into consideration during incident management steps.



In the current online landscape in which students operate, the media presented to them is not only an echo chamber of their own values and activity; with the increased distribution of deepfakes, it may also be a misrepresentation of reality.

The prevalence of deepfakes heightens the need for students to apply critical thinking skills, as it is becoming increasingly difficult to differentiate between objective media and manipulated content.





  Jordan Foster

  Clinical Psychologist and Managing Director, ySafe

Jordan is one of Australia’s foremost cyber safety experts. She is a Clinical Psychologist and the founder and CEO of ySafe, now a household name in the field of cyber safety across Australia. Jordan has extensive clinical expertise in working with children and adolescents managing problematic technology use, including cyber bullying, image-based abuse and compulsive gaming.





CompliSpace is an Australian company that helps over 600 non-government schools across Australia with their governance, risk, compliance and policy management. What makes us different is that we monitor over 200 sources of legal and regulatory change to ensure our clients have the updated policies and tools they need to meet new requirements. We share that knowledge with the broader Education community via School Governance.