Deepfake detection & identity fraud protection

Throughout history, deception has played a role in fraud and disguise. Today, artificial intelligence has amplified this threat, with deepfake technology creating new risks for identity verification and fraud prevention.

The rise of deepfakes in the Southern Hemisphere

Deepfake technology, which uses AI to create hyper-realistic yet entirely fabricated audio, video, and images, is rapidly evolving. In Australia and New Zealand, this advancement poses significant risks to identity verification processes, particularly in sectors like financial services, government services, and online retail.

Regulatory landscape: Australia versus New Zealand

Australia has taken proactive steps to combat deepfake threats:

  • Online Safety Act 2021: empowers the eSafety Commissioner to investigate and take down harmful online content, including deepfakes.
  • Criminal Code Amendment (Deepfake Sexual Material) Bill 2024: specifically targets the creation and distribution of non-consensual deepfake sexual content.
  • Australian Consumer Law (ACL): prohibits misleading or deceptive conduct in trade or commerce, giving consumers misled by deepfake content a path to remedies.

New Zealand, on the other hand, faces legislative gaps:

  • The Harmful Digital Communications Act 2015: addresses harmful digital communications but may not encompass AI-generated content like deepfakes.
  • Experts call for a combination of public education, social media takedowns, and technological authentication methods to combat deepfakes effectively.

Real-world impacts

Deepfakes have been used in various fraudulent activities:

  • Face swaps: fraudsters replace their face with someone else's in a video or image, often matching a stolen ID photo to fool facial recognition systems.
  • Impersonation: AI-generated voices and videos are used to impersonate executives, leading to unauthorised transactions. A shocking example involved a CEO authorising a £20 million transaction based on a deepfake message from what appeared to be a trusted employee.

Deepfake detection challenges and solutions

As deepfakes become more advanced, traditional detection methods struggle to keep up. However, emerging AI-driven technologies are being developed to address these challenges:

  • Deepfake detection frameworks: evaluating tools based on deepfake type, detection method, data preparation, model training, and validation to create more reliable solutions.
  • AI and machine learning: leveraging advanced algorithms to detect subtle inconsistencies in facial movements and voice patterns (a minimal illustration follows this list).
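
To make the machine-learning point more concrete, here is a deliberately minimal sketch of what frame-level deepfake screening can look like: sampled video frames are passed through a binary classifier that outputs a probability that the frame is synthetic. The model file, its output contract, and the 0.7 threshold are illustrative assumptions, not a reference to any specific vendor tool.

```python
# Minimal sketch: scoring video frames with a hypothetical fine-tuned deepfake
# classifier. Assumes a binary TorchScript model "deepfake_detector.pt" whose
# single logit, after a sigmoid, approximates P(frame is synthetic).
import cv2
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def frame_scores(video_path: str, model: torch.nn.Module, every_n: int = 15):
    """Yield (frame_index, synthetic_probability) for every n-th frame."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                score = torch.sigmoid(model(batch)).item()  # assumed single-logit output
            yield index, score
        index += 1
    capture.release()

if __name__ == "__main__":
    model = torch.jit.load("deepfake_detector.pt")  # hypothetical checkpoint
    model.eval()
    scores = [s for _, s in frame_scores("onboarding_selfie_video.mp4", model)]
    flagged = sum(s > 0.7 for s in scores)  # 0.7 is an arbitrary example threshold
    print(f"{flagged}/{len(scores)} sampled frames look synthetic")
```

In practice, production systems combine frame-level scores with temporal and audio cues rather than relying on a single threshold.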

Proactive measures for businesses

To protect against deepfake-induced identity fraud, businesses in Australia and New Zealand should:

  • Implement robust verification processes: utilise multi-factor authentication and biometric verification to enhance identity checks (a simplified sketch of layering these signals follows this list).
  • Stay informed about legal obligations: ensure compliance with relevant laws and regulations pertaining to digital content and consumer protection.
  • Adopt advanced detection technologies: invest in AI-driven tools capable of identifying deepfakes and other synthetic media.
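
As an illustration only, the sketch below shows one way these layers might be combined into a single onboarding decision: an application is approved only when the possession factor, the document-to-selfie face match, the liveness check, and a synthetic-media screen all pass. The signal names and thresholds are hypothetical placeholders for whatever your identity and detection providers actually return.

```python
# Simplified sketch of a layered identity check. Field names and thresholds
# are illustrative assumptions, not a specific provider's API.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    otp_verified: bool        # one-time passcode confirmed (possession factor)
    face_match_score: float   # selfie vs ID document similarity, 0..1
    liveness_passed: bool     # presentation-attack / liveness check result
    deepfake_score: float     # P(media is synthetic) from a detection tool, 0..1

def verification_decision(signals: VerificationSignals) -> str:
    """Return 'approve', 'review' or 'reject' based on layered checks."""
    if not signals.otp_verified or signals.deepfake_score > 0.8:
        return "reject"
    if not signals.liveness_passed or signals.face_match_score < 0.85:
        return "review"  # escalate to manual review rather than auto-approve
    if signals.deepfake_score > 0.4:
        return "review"  # borderline synthetic-media score
    return "approve"

# Example usage with made-up values
print(verification_decision(VerificationSignals(
    otp_verified=True, face_match_score=0.92,
    liveness_passed=True, deepfake_score=0.12,
)))  # -> approve
```

The key design choice is that no single signal auto-approves an application; borderline results route to manual review.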

Conclusion

Deepfake technology is evolving at an alarming rate. Businesses, regulators and fraud prevention experts must remain vigilant. By understanding the legal landscape, recognising the risks, and adopting proactive measures, businesses in Australia and New Zealand can safeguard their operations and maintain trust with their customers.

If you’ve enjoyed this article, you may also like our on-demand webinar: watch it now.
