As deep learning technology matures and the cost of computing power continues to drop, the ability for anyone to generate realistic audio, video, and text of politicians and military leaders will likely grow. These "deepfakes" pose a significant threat to online information environments during international armed conflict, and they are a key challenge that security, intelligence, and public safety officials must address to protect their citizens and communities.
What Is a Deepfake? Understanding the Technology Behind AI-Generated Videos
Creating a high-quality deepfake requires impeccable data for both the source and target media. For example, producing a deepfake as uncannily authentic as Chris Umé's Tom Cruise TikTok videos meant training the model on footage of Tom Cruise meticulously captured from many angles and lighting conditions. This stands in stark contrast to "shallowfakes," which rely on more conventional and accessible manipulation techniques, such as slowing or speeding up image, audio, or video, or simply renaming files to mislead.
One way to make a deepfake is to use an algorithm called a generative adversarial network, or GAN. A GAN pits two neural networks against each other: a generator, which is fed random noise and turns it into an image, and a discriminator, which is trained on photos of real people and judges whether an image looks genuine. The two networks are trained together, with the generator learning to fool the discriminator, until the generator can produce a very realistic image of someone who does not exist.
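The adversarial loop described above can be sketched in a few lines of NumPy. As a toy illustration only, the "images" here are single numbers drawn from a target distribution, the generator is a linear map, and the discriminator is a logistic classifier; real deepfake systems use deep convolutional networks, but the generator-versus-discriminator training dynamic is the same:

```python
# Minimal 1-D GAN sketch (NumPy only). Real data comes from N(4, 1);
# the generator g(z) = a*z + b starts near N(0, 1) and is pushed toward
# the real distribution by alternating adversarial updates.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters d(x) = sigmoid(w*x + c)
lr, batch = 0.03, 64

for step in range(3000):
    # --- Discriminator update: learn to tell real samples from fakes ---
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of -log d(real) - log(1 - d(fake)) w.r.t. (w, c)
    gw = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    gc = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Generator update: learn to fool the discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of the non-saturating loss -log d(fake) w.r.t. (a, b)
    ga = np.mean((d_fake - 1.0) * w * z)
    gb = np.mean((d_fake - 1.0) * w)
    a -= lr * ga
    b -= lr * gb

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(float(samples.mean()))  # the generated mean typically drifts toward 4
```

Whenever the fakes sit below the real data, the discriminator learns a positive weight and the generator is pulled upward (and vice versa), so the generated distribution settles around the real one, which is the equilibrium a GAN is trained toward.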
The same technology can be used to impersonate anyone in almost any setting. For example, con artists posing as executives on a video conference call can trick employees into handing over millions of dollars. In 2024, a finance employee at a multinational corporation was duped in exactly this way and transferred $25 million to fraudsters.