Reality Defender CEO Ben Colman On Rethinking Deepfakes And Weaponized GenAI

Published on: September 21, 2024
Roberto Popolizio

Information is the new gold. Whether you’re a business or an individual, cybercriminals are always finding new ways to breach your security defenses and steal your sensitive data.

It takes only one data breach to compromise your finances and personal information.

So how can you protect your online privacy and security beyond the mainstream advice found all over the internet, advice that hackers already know as well as you do?

In this new interview series by Safety Detectives, we bring you exclusive insights from top executives and leading cybersecurity professionals. Join us as they share expert tips, real-world experiences, and untold truths about protecting and securing your valuable information.

Our guest today is Ben Colman, CEO & Co-Founder of Reality Defender, the award-winning deepfake detection platform trusted by Visa, Microsoft, NBC, and other enterprises and governments to identify harmful deepfake content (audio, video, image, and text) and stop it before it becomes a problem.

What you will learn:

  • The most dangerous type of deepfake and the latest innovations against it
  • Why most companies and people act too late against weaponized generative AI
  • How Reality Defender safeguards enterprises and governments against AI-generated threats and disinformation

To start, can you please introduce yourself and share the story of what inspired you to pursue your career path, along with your achievements?

I always found the act of taking machines apart and putting them back together fascinating, which led me, much later in life, at or near the start of my career, to work extensively at the intersection of cybersecurity and data science.

At first, I did this in technology, then on the technology side of finance, and finally, in 2021 (after it had been a non-profit for some time), at Reality Defender, which is the most fully realized version of my focus and my work. It sits not only at the intersection of cybersecurity, data science, and what we now know as generative AI, but is also the work that I feel has the biggest impact, a positive net benefit, if you will, on the world as a whole.

What are all the pain points you solve and for whom? Explain it in simple terms.

Deepfakes can be used for good and for bad. We are here to help others safeguard the good uses and protect against abuses, threats, and other ill-intended uses of AI-generated content. Our work is for enterprises and governments, which in turn serves their customers, clients, users, and citizens. As deepfakes can have an impact at both the macro and the micro level, we are here to defend against both.

What are the most common or overlooked cybersecurity and online privacy threats that you see affecting end users in your industry? Why are these threats particularly concerning?

They are too numerous to name, but the ones we fear most at this given moment are voice deepfakes used to defraud and defame.

For instance, voice biometrics are used to verify bank customers over the phone. Such technology, which has seen billions in investment from large financial institutions, can now be bypassed with a freely made deepfake built from 20 seconds (or less) of any speaker’s voice.

Such fraud is already having a tangible impact on companies and their bottom lines, as well as on individual account holders, which is why we are working with banks to retrofit their existing voice biometric systems in call centers and help desks with real-time voice deepfake detection. This will, in effect, turn the clock back to a time when such a security risk didn’t have the impact it does today.
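To make the idea of a retrofit concrete, here is a minimal illustrative sketch in Python of how a call center might screen incoming audio with a deepfake detector before handing it to its existing voice-biometric check. The function names, score threshold, and flow are assumptions for illustration only; they are not Reality Defender’s API or any bank’s actual implementation.

```python
# Illustrative sketch only. The detector, biometric check, and threshold below
# are hypothetical stand-ins, not Reality Defender's API or a real bank system.

DEEPFAKE_SCORE_THRESHOLD = 0.8  # assumed cutoff: higher score = more likely synthetic


def detect_voice_deepfake(audio: bytes) -> float:
    """Stand-in for a real-time voice deepfake detector; returns a score in [0, 1]."""
    return 0.05  # dummy value so the sketch runs end to end


def verify_voice_biometrics(audio: bytes, customer_id: str) -> bool:
    """Stand-in for the bank's existing voice-biometric verification."""
    return True  # dummy value so the sketch runs end to end


def authenticate_caller(audio: bytes, customer_id: str) -> bool:
    """Screen for synthetic audio first, then fall through to the legacy biometric check."""
    if detect_voice_deepfake(audio) >= DEEPFAKE_SCORE_THRESHOLD:
        # Likely-synthetic audio: fail the call and escalate to a human agent
        # or step-up verification instead of trusting the voiceprint match.
        return False
    return verify_voice_biometrics(audio, customer_id)


if __name__ == "__main__":
    sample_audio = b"\x00" * 16000  # placeholder for a caller's audio frame
    print("Caller authenticated:", authenticate_caller(sample_audio, "customer-123"))
```

The design point is simply that detection sits in front of the existing biometric system rather than replacing it, which is what retrofitting means in this context.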

What are some things that people should START doing today that they’re currently not doing to protect their information?

Employing some semblance of protection against weaponized generative AI. As AI has been democratized, even lower-skilled attackers can carry out highly sophisticated attacks and trigger damaging incidents using off-the-shelf tools. To go unprotected against such attacks, whether through a lack of deepfake detection or a lack of widespread education about deepfakes and weaponized AI, is to inevitably court disaster.

What has been your most memorable experience dealing with a cybersecurity threat in your career?

We were eating dinner outside Randazzo’s Clam Bar in Sheepshead Bay, Brooklyn, when we got a Google Alert from a leading Taiwanese news source reporting, via a disclosure under Taiwan’s version of FOIA, that Reality Defender had been used to identify a deepfake.

We had no idea, because we don’t see what our clients upload or have any access to such data, so that was how we learned our technology had been used in this specific and serious instance. That right there proved the gravity of the problem and the efficacy of our product on a world stage.

🔎 What happened in Taiwan
Taiwan’s Ministry of Justice Investigation Bureau used Reality Defender’s deepfake detection software to investigate an audio clip that sounded like Ko Wen-je, the Taiwan People’s Party (TPP) presidential nominee and former mayor of Taipei. In the clip, he is talking about his opponent, Lai Ching-te, the current vice president and nominee of the Democratic Progressive Party (DPP). The Investigation Bureau found that the recording was indeed likely fake.

In your opinion, have tools and technologies improved enough to help end users secure their online privacy effectively? What improvements can be made in this area?

End users don’t really care about online privacy and security until after an attack. For instance, a friend of mine recently had their email hacked and suddenly started caring about two-factor authentication after the fact.

Though companies should and often do have cybersecurity education as part of their onboarding and auditing/training processes, they should place even greater emphasis on training employees regularly, as scams — including those using deepfakes — become more sophisticated and harder to spot.

For end-users and everyday citizens, education early and often is the best approach. After all, if you’re going to use the internet for hours out of the day — some nearly the entire day — there should be greater efforts to help individuals do so in a safe and slightly more cautious manner.

How can our readers follow your work?

Website:  https://realitydefender.com

LinkedIn:  https://www.linkedin.com/company/reality-defender/

X: @detectdeepfakes

About the Author

Roberto Popolizio

Over a decade spent helping affiliate blogs and cybersecurity companies increase revenue through conversion-focused content marketing and Digital PR linkbuilding.
