
While Artificial Intelligence (AI) offers countless benefits, in the wrong hands, the technology can be used maliciously to spread disinformation or scam victims. This is especially true of its ability to generate digitally altered images, videos, and audio tracks known as deepfakes.
Deepfakes are fabricated likenesses and voices of real politicians, celebrities, or even people you may know. Often, they are digitally manipulated so the subject appears to be saying something inflammatory or defamatory. Such altered files are designed to defraud, harass, intimidate, and undermine people, sowing confusion among viewers.
If you encounter a video clip of a public figure saying something out of character, stop. Do not share it. Instead, do a little research; it takes only a few seconds to confirm or debunk a claim, and you could help stop disinformation from taking on a life of its own and causing real harm.
Five ways to tell a deepfake from the real thing:
- Facial and body movements: As of today, AI struggles to replicate natural human movement. If a subject’s movement doesn’t feel right, the video may be a deepfake.
- Mismatched lip syncing: When altered audio is paired with a deepfake video, there is often an obvious mismatch between the spoken words and the movement of the mouth.
- Inconsistent shadows or reflections: If you look closely at a deepfake, you’ll notice that shadows and reflections on nearby surfaces don’t fall as they would from a consistent light source.
- Unnatural appearance: For now, AI has difficulty rendering human skin and lifelike hair. If a person in a video appears plastic, stiff, or wooden, the video may be a deepfake.
- Unusual audio noise: To mask inconsistent changes in the audio, deepfake algorithms often add artificial noise. If you hear white noise in the background, the recording may be a deepfake.
Alerting you to the dangers of audio and video deepfakes is another way Credit Union ONE is helping to protect you, your data, and your finances.