Saving ourselves from false information and scams caused by deepfakes.
That’s right. As the presidential election comes up, here I am with a blog post about deepfakes to warn my fellow U.S. citizens.
Firstly, let’s address the scam that hurt the large British engineering and design company, Arup.
To give some context about this company: Arup has over 18,500 employees across offices worldwide. It engineered the Bird’s Nest stadium for the 2008 Beijing Olympics as well as the Sydney Opera House. Quite an impact.
Arup’s Real-Time Deepfake Scam
But recently, one of Arup’s Hong Kong employees received an email from the company’s Chief Financial Officer (CFO), who is based in the United Kingdom.
In this email, the CFO mentioned a secret transaction that had to be carried out over a video call on Zoom. (See my previous article for an overview of phishing detection with machine learning.)
But this “CFO” was actually a scammer posing as a corporate leader. Reportedly, the Zoom call also included deepfakes of other staff members alongside the CFO.
Based on the Hong Kong police’s assessment, the scammers likely found public YouTube videos of a real person and created deepfakes to emulate that person’s voice, persuading the worker to follow their directions.
Initially believing the CFO on the Zoom call was real, the Hong Kong worker transferred $25.4 million across 15 separate transactions. After realizing the call hadn’t included any real company staff, the worker reported the scam, but by then the money was gone.
In this case, the scammer used what’s called a real-time deepfake: technology that replicates a person’s voice and movements live, as a call is happening.
Even before this incident, people made deepfakes that showed public figures, like politicians and celebrities, making false statements or doing questionable things. More recently, though, people have spread countless deepfakes of Joe Biden, Kamala Harris, and Donald Trump.
Protecting Ourselves from Deepfakes
Of course, there are certain ways to identify deepfakes which I’ll cover in future blog posts. But for the sake of this article, I’ll leave some practical tips that we can use to protect ourselves and those around us from a deepfake’s harm.
Clarifying all requests. For starters, let’s double- and triple-check before making any financial transactions. In Arup’s incident, the mere mention of a secret transaction over email was already suspicious.
Of course, I don’t know all of this incident’s details since I wasn’t involved, but it’s always worth checking with our colleagues and corporate leaders before agreeing to any company transactions. Aside from the corporate world, it’s worth checking with family members before responding to random requests for money — many of which are scams.
Social media. Secondly, let’s be careful when sharing videos of ourselves online; presidential candidates aren’t the only ones who can be deepfaked.
Nowadays, social media helps lots of people expand their businesses and sell their personal brands. But if you don’t trust certain people to respect your privacy, it’s better not to let them see your content.
There’s nothing wrong with blocking a person who acts disrespectfully online. And if someone intentionally lies by creating a deepfake, it’s best to report them before more people are harmed. Not everyone is offended when a deepfake is made of them, but tolerating it encourages the creator to defame more people.
Logical thinking. Thirdly, many of us check our favorite news and social media platforms each morning to stay informed. But especially on social media, it never hurts to look for signs of deepfakes, such as implausible-sounding news stories, distorted faces and figures, and footage that doesn’t make sense historically.
Example: Visualize an image of Joe Biden and F. Scott Fitzgerald talking to each other in front of the White House. This doesn’t make historical sense. (Fitzgerald passed away two years before Biden was born.)
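This kind of “historical sense” check can even be sketched in code: two people could only have been photographed together if their lifespans overlap. Here’s a minimal Python sketch of that idea (the birth and death years below are real; the function name is my own invention):

```python
from typing import Optional

def lifespans_overlap(birth_a: int, death_a: Optional[int],
                      birth_b: int, death_b: Optional[int]) -> bool:
    """Return True if two people's lifespans overlap.

    A death year of None means the person is still alive,
    so their lifespan extends to the present.
    """
    end_a = death_a if death_a is not None else 9999
    end_b = death_b if death_b is not None else 9999
    # Two intervals [birth, end] overlap when each starts before the other ends.
    return birth_a <= end_b and birth_b <= end_a

# F. Scott Fitzgerald: 1896-1940. Joe Biden: born 1942, still alive.
print(lifespans_overlap(1896, 1940, 1942, None))  # False: they never coexisted
```

Of course, real deepfake detection is far harder than checking dates, but the principle is the same: ask whether the scene you’re looking at could logically exist.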
Sharing carefully. When a person isn’t aware of how misinformation and disinformation spread, it’s tempting to immediately forward any interesting story to group chats or family members.
To make sure that these “fascinating” stories are accurate, always check images and videos for signs of deepfakes. And while you’re at it, check multiple news sources to verify these stories.
Humor. Like most other technology, not all deepfakes are dangerous; many are created purely for humor. But it’s safer not to share funny deepfakes in large group chats where some members tend to forward everything to other groups immediately. If a group member forwards your deepfake without catching the joke, they might spread false information to their other contacts.
If you do share a funny deepfake, add context or verbally explain the joke as you show it to people. This way, others understand the humor and don’t end up defaming those depicted in the deepfake.
Conclusion. As we approach the elections — no matter what countries we’re from — deepfakes can reach any of us. Let’s be wary of the content we consume and encourage others to do the same.
Further Reading:
[1] British engineering giant Arup revealed as $25 million deepfake scam victim (2024). Available at: https://rb.gy/22126t
[2] Deepfakes in the 2024 US Presidential Election (2024). Available at: https://farid.berkeley.edu/deepfakes2024election/