Over the past two decades, society has seen incredible advances in digital technology, resulting in the wide availability of cheap and easy-to-use software for creating highly sophisticated fake visual content. This democratisation of content creation, paired with the ease of sharing via social media, means that ill-intended fake images and videos pose a significant threat to society. To minimise this threat, it is necessary to be able to distinguish between real and fake content; to date, however, human perceptual research indicates that people have an extremely limited ability to do so. Generally, computational techniques fare better in these tasks, yet remain imperfect. What's more, this challenge is best considered as an arms race – as scientists improve detection techniques, fraudsters find novel ways to deceive. We believe that it is crucial to continue to raise awareness of the visual forgeries afforded by new technology and to examine both human and computational ability to sort the real from the fake. In this article, we outline three considerations for how society deals with future technological developments, aiming to help secure the benefits of that technology while minimising its possible threats. We hope these considerations will encourage interdisciplinary discussion and collaboration that ultimately goes some way to limit the proliferation of harmful content and helps to restore trust online.