At the ‘theAfricoder’ news desk, we’ve been taking a deeper look into the technology behind deepfakes, a form of synthetic media. This artificial production, manipulation, or modification of images, videos, and audio is accomplished using artificial intelligence (AI) and is increasingly being used to create fake personas with potentially malicious intent. The original report on this matter was compiled by Eva Sgroi and can be accessed here.

Deepfakes leverage deep learning, a type of machine learning built on artificial neural networks with representation learning; the “depth” refers to the multiple layers in the network. This technology enables a neural network to “learn” a person’s facial or vocal characteristics from hundreds or even thousands of examples. Once those features are learned, the AI applies the transformations to produce an almost exact replica.
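To make the idea of stacked layers and learned facial representations concrete, here is a minimal sketch, assuming PyTorch, of the kind of convolutional autoencoder that underlies classic face-swap deepfakes. The architecture, layer sizes, and training snippet are illustrative assumptions on our part, not the pipeline described in the original report.

```python
# Minimal sketch (not a production deepfake pipeline): a small convolutional
# autoencoder of the kind used to learn a compact representation of one
# person's face. All layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Encoder: several stacked layers -- the "depth" in deep learning.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),                     # compact face representation
        )
        # Decoder: reconstructs the face from the learned representation.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# One illustrative training step on a batch of 64x64 RGB face crops.
model = FaceAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
faces = torch.rand(8, 3, 64, 64)  # stand-in for real, aligned face crops
optimizer.zero_grad()
reconstruction = model(faces)
loss = nn.functional.mse_loss(reconstruction, faces)
loss.backward()
optimizer.step()
```

In the classic face-swap setup, a shared encoder is trained with one decoder per identity; decoding person A’s latent representation with person B’s decoder is what produces the swapped face.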

While deepfake technology is often associated with misuse, such as spreading political misinformation or fabricating celebrity footage, it also has beneficial applications. For instance, it can add visual effects to Hollywood movies, create lifelike human avatars for customer service, or generate entertaining social media clips.

Deepfake creation employs a mix of innovative AI-based manipulation tools, traditional Computer-Generated Imagery (CGI), and special effects. The technologies behind synthetic media have been evolving since the late 1990s: in 1997, a program named Video Rewrite synthesized new lip movements from a separate audio track, marking the first significant use of AI to convincingly manipulate video.

Today, software for the synthetic generation of video and audio is easily accessible. Some YouTube channels, like Shamook and Ctrl Shift Face, are dedicated solely to deepfakes and have garnered large followings. Moreover, user-friendly deepfake apps have emerged, and they are often used to create completely synthetic celebrity profiles on Instagram.

Detecting deepfake content is crucial to combating its misuse. According to research published by Oxford Academic, three sources of information can help identify deepfakes accurately: context, audiovisual imperfections (i.e., technological glitches), and content. Unnatural lip movements, discrepancies between audio and lip movement, and traces of digital rendering are among the technical anomalies that can give manipulation away. As the technology continues to evolve, awareness and understanding are crucial if the public is to distinguish genuine from manipulated content.
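As a rough, hedged illustration of the “technological glitch” cue, the sketch below (assuming OpenCV and NumPy are installed) compares the sharpness of a face region against the whole frame; blended, machine-generated faces sometimes differ statistically from their surroundings. The file name, bounding box, and threshold are invented placeholders, and this is not the detection method of the cited Oxford Academic study.

```python
# A crude illustration of one "technological glitch" cue: a face that has been
# blended into a frame can have different sharpness/noise statistics than the
# rest of the image. The coordinates and threshold below are made-up values.
import cv2
import numpy as np

def sharpness(region: np.ndarray) -> float:
    """Variance of the Laplacian: a standard, simple sharpness measure."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def face_background_mismatch(frame: np.ndarray,
                             face_box: tuple[int, int, int, int]) -> float:
    """Ratio of face-region sharpness to whole-frame sharpness.

    Values far from 1.0 *may* indicate a blended (manipulated) face, but many
    legitimate videos also trigger this, so it is only a hint, not a verdict.
    """
    x, y, w, h = face_box
    face = frame[y:y + h, x:x + w]
    return sharpness(face) / max(sharpness(frame), 1e-6)

if __name__ == "__main__":
    frame = cv2.imread("frame.png")      # hypothetical video frame on disk
    if frame is None:
        raise SystemExit("frame.png not found")
    box = (120, 80, 160, 160)            # hypothetical face bounding box (x, y, w, h)
    ratio = face_background_mismatch(frame, box)
    print(f"face/background sharpness ratio: {ratio:.2f}")
    if ratio < 0.5 or ratio > 2.0:       # arbitrary illustrative threshold
        print("Sharpness mismatch -- worth a closer manual look.")
```

A single heuristic like this is easy to fool; in practice it would only flag frames for the kind of human review of context and content that the research emphasizes.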
