Another Facial Recognition Technology Privacy Breach – AFP in the Spotlight

The Australian Information Commissioner and Privacy Commissioner, Angelene Falk, has recently determined that the Australian Federal Police (AFP) failed to comply with its privacy obligations in using the Clearview AI facial recognition tool (FRT). This is not the first time Clearview AI has been involved in a breach of the Privacy Act by collecting data without consent; we have previously written about this here.

Clearview AI’s FRT allows a user to upload an image of an individual’s face and match it against facial images of that individual compiled from across the internet. The tool then links each matched image to the location where it appeared.

Between 2 November 2019 and 22 January 2020, Clearview AI provided free trials of this FRT to the AFP-led Australian Centre to Counter Child Exploitation (ACCCE). The ACCCE uploaded facial images of Australians to trial the tool’s functionality and, in some cases, used it to try to identify persons of interest or victims in active investigations. However, the AFP failed to assess the risks of disclosing this personal information to third parties located overseas, to evaluate the tool’s security practices and accuracy, or to put safeguards in place.

The Commissioner noted that, when used correctly and with appropriate safeguards, FRT can serve the public benefit. In this case, however, there were a number of red flags regarding the third-party offering that should have prompted a careful privacy assessment.

The Commissioner went further to say that ‘by uploading information about persons of interest and victims, the ACCCE were handling personal information in a way that could have serious repercussions for individuals whose information was collected’.

The Commissioner determined that the AFP had breached numerous privacy obligations, specifically:

  • Before the AFP began using the Clearview AI technology, it failed to complete a privacy impact assessment (PIA), breaching clause 12 of the Australian Government Agencies Privacy Code, which requires a PIA for all high privacy risk projects.

  • When using the technology, the AFP breached Australian Privacy Principle 1.2 by failing to take reasonable steps to implement practices, procedures and systems relating to its use of Clearview AI that would ensure compliance with clause 12 of the Code.

  • The Commissioner also determined that the AFP did not have an appropriate system in place to identify, trace and accurately record its trial of the investigative technology involving the collection of personal information.

The Commissioner highlighted significant gaps in the AFP’s systems for understanding its personal information collection practices, and the absence of a coordinated approach to identifying high privacy risk projects. Further, the AFP’s mandatory privacy training was deficient, including inadequate guidance on conducting PIAs.

The AFP was directed to:

  • appoint an independent assessor to examine and report to the OAIC on outstanding deficiencies in the AFP’s practices, procedures, systems and training in relation to privacy assessments, and on any changes needed to address them; and

  • ensure that the applicable AFP personnel have completed an updated privacy training program.

Businesses engaging in or developing facial recognition technology must ensure that the technology complies with Australian privacy laws in order to avoid legal repercussions. If you are unsure whether your FRT meets its privacy obligations, or would like further information, please contact our team.

Paul Gray
Principal
T: 03 5225 5231 | M: 0414 195 886
E: pgray@ha.legal

Hugo Le Clerc
Lawyer
T: 03 5225 5213
E: hleclerc@ha.legal

Ryan Popovski
Lawyer
T: 03 5226 8572
E: rpopovski@ha.legal
