Although digital manipulation of images and video has been around for some time, the combination of software that improves the realism of fakes and the speed at which they can be generated and disseminated to targeted audiences has made headlines in the political sphere over the last year or so. The manipulation of video using artificial intelligence has been dubbed 'deepfake', a portmanteau of 'deep learning' and 'fake'.

The evolution of deepfake technology from individually targeted 'revenge porn' towards use in the political arena is of particular concern for organisations involved in the political process, from governments to political parties and interest groups. Concern is growing that deepfakes could catastrophically undermine trust between different parts of society as information is skewed and beliefs are manipulated.

A specific legislative response to political deepfakes is yet to be forthcoming (and Australia is not alone in this). Further, some argue that existing laws adequately capture the behaviour itself, and that organisations ought to focus on prevention and response. However, the coverage of existing laws is patchy, and their application will vary depending on the use to which a political deepfake is put.

For instance, copyright laws (including takedown protocols with internet service providers) may apply, but ownership of the original footage would need to be established, and there is likely to be a fair dealing argument against enforcement where there is an element of satire or parody. Privacy laws may apply where personal information is involved; however, the Office of the Australian Information Commissioner has not yet provided official guidance on their applicability to deepfakes. Defamation could also assist individuals, but there would need to be the requisite impact on the individual's reputation, and the individual would need the appetite for taking court action.

Lastly, federal telecommunications offences relating to the use of a carriage service may apply, provided the threshold of 'menacing, harassing or offensive' behaviour is crossed.

Few of these laws are practical to implement, or effective in enabling organisations to limit the distribution of images, which is where most of the harm is done. We are following with interest the ongoing debate about the role that technology platforms (Facebook, Twitter, Instagram, etc.) should play in identifying and removing certain content. Immediate regulatory developments are likely to focus on this area.

Part 2 of this article will look at developments in the EU, and at what Australian organisations can do to prepare for and respond to a deepfake issue.

For further information or advice please contact:

Paul Gray
Principal Lawyer
T: 03 5225 5231
E: pgray@ha.legal

Alexander Gulli
Lawyer
T: 03 5226 8573
E: agulli@ha.legal
