Although most deepfakes target politicians and celebrities, the technology has already been used in employer scams.
Blame Forrest Gump. The 1994 movie used new technology to edit Gump’s character into scenes to make it seem like he talked with John F. Kennedy or sat next to John Lennon — an editing magician’s trick that won the film accolades.
That technology has evolved into what is now referred to as “deepfake” technology: a mix of AI and machine learning that allows users to alter video, audio, and photos in powerful ways.
One deepfake example: A widely shared video of Speaker of the House Nancy Pelosi was doctored to slow down her speech, creating the impression Pelosi was impaired. Deepfakes can make it appear that someone said or did something they never did — and that can create a new kind of security woe for employers of all types.
Just because deepfakes haven’t shown up at your company doesn’t mean they’ll stay away forever, Randy Barr, chief information security officer for Topia, a global mobility management software company, told HR Dive. “We’re going to start to see a lot more of this as soon as technology is readily available for people to use and try.”
What can HR do now to ensure employees are safe?
It’s all fun and photoshop until someone gets hurt
Deepfake technology can have positive purposes, such as creating digital voices for those who have lost the ability to speak, or the David Beckham video explaining how people can protect themselves from malaria, which used deepfake tech to make him appear to speak in nine different languages.
But unlike the altered content from Forrest Gump and Instagram filters, the audience isn’t supposed to know that the deepfakes are manipulated pieces.
On top of that, the technology is often used explicitly to create trouble, Niraj Swami, CEO of SCAD AI, an AI consultancy, told HR Dive. “It stems from leveraging controversial material…offensive content or offensive perspectives,” he said. When this material pops up in social media, it creates media confusion, he said, and many viewers react emotionally to the false information.
Some deepfake videos can be identified relatively easily, Barr said. “One of the simple ways of detecting it is if you look at the video, see how often that individual blinks, because [with] the current AI technology and deepfake, it’s hard to impose the face over a body if the eyes are closed,” he said. Other tips are to look for mismatches in skin tone and in the placement of the eyebrows and chin, he added.
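Barr’s blink heuristic can be automated. A minimal sketch of the idea, assuming a face-landmark detector (such as dlib or MediaPipe, not shown here) already supplies per-frame eye landmarks: the common eye-aspect-ratio (EAR) measure drops when the eyes close, so counting dips below a threshold counts blinks — and a suspiciously low blink count over a clip is a red flag.

```python
import math

def eye_aspect_ratio(eye):
    """EAR for six eye landmarks ordered [left corner, two top
    points, right corner, two bottom points]. Open eyes give a
    higher ratio; closed eyes approach zero."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks as runs of at least min_frames consecutive
    frames whose EAR falls below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Hypothetical per-frame EAR trace: eyes open (~0.3) with two
# brief closures (~0.1) — i.e., two blinks.
trace = [0.31, 0.30, 0.10, 0.09, 0.30, 0.32, 0.08, 0.07, 0.29]
print(count_blinks(trace))  # → 2
```

The threshold and minimum-frame values here are illustrative; production detectors tune them per camera frame rate, and newer deepfake generators have already learned to fake blinking, which is why this check is one signal among several rather than a verdict.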
Just as deepfake technology is becoming more sophisticated, so is the technology used to identify altered media, with improvements on both sides expected to continue.
How deepfakes can harm employers
Although most deepfakes thus far have targeted politicians and celebrities, the technology has been seen in the work environment — and it may be used with increasing frequency, experts said.
Imagine a CEO placing an urgent call to a senior financial officer requesting an emergency money transfer — except the CEO’s voice was deepfaked by criminals, as Axios reported has already happened to a number of companies. Deepfakes could be used to attack a company, Barr said. “[It] could be the evolution of how ransomware takes place.”