Although digital manipulation of images and video has been around for some time, the combination of software that improves the realism of the fake and the speed at which fakes can be generated and disseminated to targeted audiences has made headlines in the political sphere over the last year or so. The manipulation of video images using artificial intelligence has been called ‘deepfake’, a portmanteau of ‘deep learning’ and ‘fake’.
In our last article, we reviewed the problem and the current laws that might apply. The picture is a rather bleak one for political organisations, so here we discuss the prevention and preparation steps organisations can take. Fortunately, in our judgement these should fit into familiar compliance frameworks, and could include:
a. reviewing policies on the creation and dissemination of video content;
b. performing a risk assessment on video content alongside other types of information (including personal information);
c. implementing appropriate organisational controls;
d. ensuring technology security controls are in place; and
e. promoting education among stakeholders.
In terms of response, speed is critical. Organisations that have existing data breach or emergency response plans will already be prepared to respond quickly to incidents; if deepfakes are considered a high risk for your organisation, they should form part of that planning.
For further information or advice please contact: