How AI And ML Impact Data Security: Q/A with MoogleLabs CEO Ganesh Verma

Updated on: July 1, 2024
Roberto Popolizio

Information is the new gold. Whether you’re a business or an individual, cybercriminals are always finding new ways to breach your security defenses and steal your sensitive data.

It takes one data breach to compromise your financials and personal information.

So, how can you secure your online privacy and security beyond the mainstream advice found all over the internet, which hackers know just as well as you do?

In this new interview series by Safety Detectives, we invite top executives and leading cybersecurity professionals to share their tips, stories, and untold truths about protecting and securing your valuable information from their hands-on experience.

Our guest today is Ganesh Verma, the CEO of MoogleLabs, an IT company specializing in AI/ML, Blockchain, DevOps, and Data Science solutions.

Here are the topics we discussed:

  • How AI brings new cyber threats and solutions at the same time
  • Real-world examples of AI-driven cyberattacks
  • How MoogleLabs leverages AI and ML to improve data security without compromising on user experience

To start, can you introduce yourself and share the story of what inspired MoogleLabs?

Okay, let’s start with an introduction. I am Ganesh Verma, the CEO of MoogleLabs, a tech enthusiast, a father, a husband, and an entrepreneur. I have loved the world of tech since early in my life, and my education reflects that.

With a Master of Computer Science completed back in 1999, I entered the workforce as a software engineer. Then, I climbed the corporate ladder, and it was during this period that I saw a disconnect between what customers required and what IT companies had to offer.

MoogleLabs was my solution – an organization dedicated to working on the latest and most revolutionary technologies in the market. I have always been fascinated by emerging technologies, so I created an organization filled with curious individuals who are experts in their respective fields and who need a dynamic workplace that encourages them to learn something new every day.

What are the most common, overlooked, or sophisticated cyber threats that you have noticed this year related to the widespread adoption of AI and ML?

Technology is never good or bad. However, in the hands of the wrong people, it can be disastrous.

AI is a great example. It can learn from enormous datasets and, therefore, can launch more complex and convincing attacks. Phishing attacks are becoming easier to mount, including smishing (SMS phishing), vishing (voice phishing), and several others. Hackers are also using AI for Distributed Denial of Service (DDoS), zero-day, and brute force attacks. Now, we also have IoT-based attacks.

So, the adoption of AI and ML does come with such issues, but it is also an antidote to the problem, as it can be used to prevent, detect, and combat all types of cybercrime. Apart from detecting and stopping phishing attacks and other threats, it also enables rapid incident analysis with machine learning to identify potential security threats.

Can you share any real-world examples of when such risks materialized? What damage did they cause?

Well, there have been several instances of cyberattacks leveraging AI to cause havoc. In 2023, Google had to fend off the internet’s largest DDoS attack, which used AI and peaked at 398 million requests per second. It was seven and a half times bigger than the previous largest attack Google had faced. This attack used a technique called “HTTP/2 Rapid Reset”.

What is the HTTP/2 Rapid Reset vulnerability?
HTTP/2 Rapid Reset exploits a flaw in the HTTP/2 protocol to crash servers and disrupt websites: the attacker rapidly sends requests and immediately cancels them, repeating the cycle thousands of times per second on each connection. Given the sheer number of websites using the HTTP/2 protocol, this attack can impact millions of users.
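To make the request-then-cancel pattern concrete, here is a minimal, hypothetical sketch of how a server-side mitigation might flag it. This is not Google's actual defense; the class, thresholds, and event hooks are all assumptions for illustration: a per-connection counter that flags clients canceling almost every stream they open within a short window.

```python
from collections import deque
import time

# Hypothetical mitigation sketch (not a real server API): flag HTTP/2
# connections that cancel (RST_STREAM) nearly every stream they open
# within a sliding time window -- the Rapid Reset signature.
class RapidResetDetector:
    def __init__(self, window_seconds=1.0, max_resets=100, min_reset_ratio=0.9):
        self.window = window_seconds
        self.max_resets = max_resets      # absolute resets tolerated per window
        self.min_ratio = min_reset_ratio  # resets/opens ratio that looks abusive
        self.opens = deque()              # timestamps of newly opened streams
        self.resets = deque()             # timestamps of client-sent RST_STREAM frames

    def _trim(self, events, now):
        # Drop events that fell out of the sliding window.
        while events and now - events[0] > self.window:
            events.popleft()

    def on_stream_open(self, now=None):
        now = time.monotonic() if now is None else now
        self.opens.append(now)
        self._trim(self.opens, now)

    def on_stream_reset(self, now=None):
        now = time.monotonic() if now is None else now
        self.resets.append(now)
        self._trim(self.resets, now)

    def is_abusive(self, now=None):
        now = time.monotonic() if now is None else now
        self._trim(self.opens, now)
        self._trim(self.resets, now)
        opens, resets = len(self.opens), len(self.resets)
        return resets > self.max_resets and opens > 0 and resets / opens >= self.min_ratio
```

A connection flagged this way could then be throttled or closed; real mitigations (such as those described in the HTTP/2 specification's guidance on stream abuse) operate at the protocol stack level rather than in application code.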

Those emails that appear legitimate and ask users to click on links and share sensitive information are also on the rise, and artificial intelligence and machine learning are making them easier to automate. Such incidents cost companies significant revenue. Additionally, businesses that suffer data compromise due to such attacks must also shell out hefty fines for the loss of privacy, as several big names have had to do in the recent past.

To be fair, while this was the largest DDoS attack on record, Google’s robust DDoS mitigation capabilities and global infrastructure allowed it to successfully defend against the attack with minimal impact to its services and customers. Nevertheless, such an attack highlighted the need for continued vigilance and investment in DDoS protection as attacks continue to grow in size and sophistication.

How does MoogleLabs leverage AI and ML to secure devices from these threats?

AI and machine learning play a huge role in detecting, preventing, and responding to cyber threats. They can help with threat detection and prevention through anomaly detection, predictive analytics, and behavior analysis.

AI can automate the task of flagging suspicious activities and detect social engineering attempts with malicious intent. Additionally, it can improve the overall response time to certain cyberattacks, including isolating infected devices and blocking malicious IP addresses.

Machine learning-based authentication can help analyze user login patterns, typing cadence, and device characteristics to identify legitimate users. This can prevent unauthorized access attempts even if hackers steal passwords.
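The behavioral idea can be sketched with a much simpler stand-in for a trained ML model: keep a per-user baseline of numeric login features (typing cadence, hour of day, and so on) and flag logins that deviate too far from it. The class and feature names below are hypothetical, chosen only to illustrate the approach.

```python
import math

# Simplified sketch of behavior-based login scoring (a stand-in for a
# trained ML model): maintain per-user running means/variances of numeric
# login features and flag logins whose z-scores are too large.
class LoginProfile:
    def __init__(self, features):
        self.features = features              # feature names, e.g. "typing_ms"
        self.n = 0
        self.mean = {f: 0.0 for f in features}
        self.m2 = {f: 0.0 for f in features}  # running sum of squared deviations

    def update(self, sample):
        # Welford's online algorithm: update mean/variance in one pass.
        self.n += 1
        for f in self.features:
            delta = sample[f] - self.mean[f]
            self.mean[f] += delta / self.n
            self.m2[f] += delta * (sample[f] - self.mean[f])

    def anomaly_score(self, sample):
        # Largest absolute z-score across features.
        if self.n < 2:
            return 0.0
        score = 0.0
        for f in self.features:
            std = math.sqrt(self.m2[f] / (self.n - 1)) or 1e-9
            score = max(score, abs(sample[f] - self.mean[f]) / std)
        return score

    def is_suspicious(self, sample, threshold=3.0):
        return self.anomaly_score(sample) > threshold
```

In practice, a production system would learn from many correlated signals at once rather than thresholding each feature independently, but the principle is the same: legitimate behavior defines the baseline, and stolen credentials alone cannot reproduce it.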

Let’s try to find the balance here. In what ways can AI-driven solutions improve user experience without compromising cybersecurity?

AI-driven cybersecurity does improve user experience without compromising security. For one, it offers smoother access through risk-based authentication, where AI assesses the risk of each login. For devices you use daily, it can offer easy access such as fingerprint authentication, whereas for unusual locations and new devices, stronger authentication methods like multi-factor verification and behavioral biometrics can provide higher security. These adaptive security measures also help keep the user experience streamlined.

While MFA and behavioral biometrics can provide additional layers of security beyond passwords, they are not a complete solution. Attackers are constantly evolving their techniques to bypass these measures, such as through social engineering, man-in-the-middle attacks, and advanced malware.

For example, according to the IBM X-Force Threat Intelligence Index 2024, there was a 71% increase year-over-year in the volume of attacks using valid credentials. This and other data studies highlight the ongoing risks posed by poor password management and the need for even stronger authentication methods.

Passwords remain a vulnerable attack vector, and organizations must adopt a comprehensive security strategy that includes strong password policies and leverages AI to enhance MFA and monitoring for suspicious activity.

Last but not least, AI makes proactive threat detection and user education possible by using context-aware threat analysis* and personalized security training.

* What is Context-aware threat analysis
Context-aware threat analysis is a security method that evaluates threats by considering information like user behavior, location, and device type. By weighing this context, it can significantly reduce false positives and help identify real threats more accurately.

What are, in your vision, the ethical considerations in using AI and ML for consumer cybersecurity, and how does MoogleLabs address these challenges?

Consumer cybersecurity is personal. My children use devices, and while I do my best to educate them on how to safely navigate the internet, I would not compromise their safety in the name of convenience. That’s why, when it comes to AI and machine learning (ML) in cybersecurity, we focus on building a future-proof shield.

However, there are some challenges that we had to overcome in order to achieve that:

  • Bias in the Machine: Data is the fuel for AI, so biased data can lead to biased AI. We prioritize squeaky-clean, diverse data sets to train our ML models. Besides that, we build models that can explain their reasoning – like showing you why a website raised a red flag. This fosters trust and lets you understand the thought process behind the protection.
  • Privacy, Not a Mystery: We understand some AI solutions can be like black boxes, shrouded in secrecy, so we strive for transparency. You deserve to know how your data is used to keep you safe. We explain things in plain English, no technobabble.
  • Privacy-Preserving Techniques: Utilizing techniques like differential privacy helps us keep your data anonymized while still allowing the AI to learn and adapt.
  • User, Not a Cog in the Machine: AI shouldn’t replace your human judgment. We design our AI to be a guardian angel, whispering warnings and suggesting actions, but you’re always in control. We empower you to make informed decisions, not lock you into an automated system. This way our AI acts as a powerful sidekick, but the final call is always yours. We believe the best defense combines cutting-edge technology with human expertise.

AI is a powerful tool, but ethics are the foundation. At MoogleLabs, we’re committed to building a future where AI empowers a safer, smarter online experience for everyone, from grandma to tech gurus.

Do you think that AI and ML can actually help in educating consumers and companies about cybersecurity best practices? If so, how?

AI and ML can play a huge role in educating consumers and companies about cybersecurity best practices. For one, they can be used to create interactive learning experiences, both for people in companies and for consumers who want to learn more about the field. Interactive learning experiences and microlearning on the go are already changing the way people look at the topic.

Scalable awareness campaigns and continuous improvement of learning material are also possible with AI technology, making it easier for companies and consumers to stay on top of cybersecurity best practices.

AI has already proven to be an excellent tool in a variety of industries. Cybersecurity and education both can benefit from it.

How can our readers follow your work?

Website:  https://www.mooglelabs.com/

LinkedIn: https://www.linkedin.com/in/ganesh-verma/

X:  https://x.com/ganesh_verma1

About the Author

Over a decade spent helping affiliate blogs and cybersecurity companies increase revenue through conversion-focused content marketing and Digital PR linkbuilding.
