Why AI Necessitates a New Approach to Proactive Security

Shauli Zacks, Content Editor
Updated on: October 25, 2024

As artificial intelligence (AI) evolves, it introduces new complexities to cybersecurity, shifting the dynamics of both defensive and offensive strategies. Organizations must now think beyond traditional defense mechanisms and adopt a more proactive approach to mitigate risks posed by AI-driven threats. To explore these new challenges and solutions, SafetyDetectives sat down with Jason Mar-Tang, Field CISO at Pentera, to discuss how AI reshapes security paradigms and how proactive testing plays a vital role in maintaining robust cyber defenses.

In this insightful conversation, Mar-Tang shares his journey from working with blue-team technologies to embracing the attacker’s perspective at Pentera. He explains how automated security validation offers organizations a more effective way to simulate real-world attacks, including emerging AI-based threats. Additionally, he highlights the importance of continuous testing across hybrid environments to stay ahead of adversaries who are leveraging automation and AI to outmaneuver existing defenses.

Can you start by telling us a bit about your role as Field CISO at Pentera and your journey in the cybersecurity industry?

My name is Jason Mar-Tang and I am an AVP and the Field CISO at Pentera, the market leader for Automated Security Validation. I have been working in cybersecurity for the past 15 years at companies such as RSA and CyberArk, engineering solutions such as Multi-Factor Authentication (MFA), Data Loss Prevention (DLP), Security Information and Event Management (SIEM), Network Detection and Response (NDR), Endpoint Detection and Response (EDR), and Privileged Account Management (PAM) for clients across all verticals.

After spending numerous years working with blue-team defensive technologies, I wanted a different challenge, and that brought me to Pentera. Pentera allowed me to change my mindset: Instead of looking at security solely from the defender’s perspective, I was able to start seeing it through the attacker’s lens. This perspective enabled me to assess the effectiveness of many of the environments I had previously worked in and to re-strategize them based on exploitation and attacker-based TTPs.

In my role as Field CISO I am fortunate to travel to every market around the world and speak with CISOs about their security testing strategies. These strategies vary greatly between organizations and regions. Some companies contract external third parties to pentest their environments, while others have specialized teams in-house (or a combination of both)! No matter the current strategy, there is a common denominator: There is a growing desire for more testing and a greater appreciation for the attacker’s perspective. Whether they are implementing testing strategies for the first time or scaling existing ones, organizations are realizing that continuous testing against real-world attack techniques helps them prioritize risk and understand whether their security is effective.

How has Pentera’s approach to cybersecurity evolved in response to the growing complexity of threats like those driven by GenAI?

So I think there is a slight misconception about the current maturity of AI-based cyber attacks and where AI is on the adoption curve for the majority of threat actors. It’s true that threat actors can already utilize AI for specific use cases, such as video and audio deepfakes, combing through documents and databases for key data at a faster pace, and improving their social engineering attacks in both quality and scale. However, as this year’s DBIR report pointed out, we haven’t really started to see the real impact or advanced use cases of AI-based cyber threats in the wild.

With that in mind, Pentera is constantly innovating to ensure that we (and therefore our customers) are ready to meet the most pressing challenges. Threat actors are able to use AI to augment their capabilities, improving their password cracking and scanning vast amounts of data for valuable information such as credentials. We are applying the same principles to our platform.
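To make the data-scanning angle concrete, here is a minimal, hypothetical sketch of the kind of automated credential discovery described above: a script that sweeps a file tree for credential-like strings. It is purely illustrative and is not Pentera’s implementation; the patterns and the "./shared-drive-export" path are assumptions for the example.

```python
import re
from pathlib import Path

# Hypothetical patterns for credential-like strings; real tooling uses far
# richer detectors (entropy checks, cloud key formats, ML classifiers).
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_credentials(root: str):
    """Walk a directory tree and flag files containing credential-like strings."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings

if __name__ == "__main__":
    # "./shared-drive-export" is a placeholder path for this sketch
    for file_path, label in scan_for_credentials("./shared-drive-export"):
        print(f"[!] {label} found in {file_path}")
```

The point of the sketch is scale: a loop like this, driven by automation or AI, can cover far more data far faster than a human reviewer, which is exactly the capability gap the interview describes.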

How can organizations adapt their proactive security testing to keep up with the rapid advancements in AI-driven attack techniques?

What we’ve observed is that attackers are leveraging AI-based techniques to support social engineering attacks that target identities and credentials. These could be very specific privileged identities, or identities that can lead to lateral movement. What would happen if Bob from marketing’s username and password were compromised by threat actors? Most organizations don’t know, and this is where proactive testing comes in.

Organizations should take these accounts into consideration and understand exactly what the “blast radius” would be by proactively testing against these scenarios. Naturally, in environments that scale very wide, as well as environments with many identities, this can be an arduous task without the use of automation and AI. With automation, security teams can take a programmatic, scheduled approach, focusing on different identities in the environment as well as critical areas of the network. By proactively testing, organizations can effectively assess whether their controls can contain compromised identities, pinpoint gaps and areas where defenses are inadequate, and determine the overall impact of such an attack.
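As a rough illustration of the “blast radius” idea, the following sketch walks a hypothetical access graph to estimate what an attacker could reach from one compromised identity. The identities, assets, and ACCESS_MAP structure are invented for the example; a real validation platform would build this picture from live attack emulation rather than a hand-written map.

```python
from collections import deque

# Hypothetical access map: identity/host -> assets reachable if it is compromised.
# In practice this data would come from directory services, session data, and
# discovered credentials rather than a static dictionary.
ACCESS_MAP = {
    "bob.marketing": ["fileshare-01", "crm-app"],
    "fileshare-01": ["svc-backup"],          # cached service credential
    "svc-backup": ["db-finance", "dc-01"],   # over-privileged service account
    "crm-app": [],
}

def blast_radius(start_identity: str) -> set[str]:
    """Breadth-first search over the access map to estimate what an attacker
    could reach starting from a single compromised identity."""
    reached, queue = set(), deque([start_identity])
    while queue:
        node = queue.popleft()
        for target in ACCESS_MAP.get(node, []):
            if target not in reached:
                reached.add(target)
                queue.append(target)
    return reached

if __name__ == "__main__":
    # e.g. run on a schedule, rotating through different identities each cycle
    for identity in ("bob.marketing",):
        print(f"{identity} -> blast radius: {sorted(blast_radius(identity))}")
```

Running a check like this on a schedule, identity by identity, is one way to turn the “what if Bob were compromised?” question into a repeatable, programmatic test.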

In your view, what’s the most significant shortcoming of current cybersecurity tools when faced with AI-driven threats, and how can proactive testing help fill these gaps?

I’m not sure that there is an inherent problem with any category of solutions in particular. Of course our solutions will need to adapt to the unique challenges posed by new threats, but this is just the standard evolution of cybersecurity: New threat emerges – cybersecurity solutions must adapt to address it. For example, in the future, anomaly detection will need to learn to account for the specific behavior and footprint of AI-driven attacks. But this was the same evolution that took place when machine learning and automation were introduced. Same evolution, new problem.

What tends to be the biggest problem is what it has always been: The human element. Humans are imperfect; we make mistakes. We unintentionally misconfigure security tools all the time, whether it’s leaving a port open that shouldn’t be, or accidentally setting our EDR to “monitor” on a specific endpoint (instead of “prevent”). With AI and automation’s capacity to scale attacks, these “seemingly small” misconfigurations could end up being what provides attackers with the initial access to propagate a larger attack. With AI, threat actors will be able to move faster and cover more of our environments, and a small mistake that would have gone unnoticed in the past will be identified and exploited.
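As a simple example of catching that kind of drift before an attacker does, here is a minimal sketch that checks a host for TCP ports that are open but not on an expected allowlist. The host address and allowlist are placeholders, and real validation goes far beyond port checks, but it shows the basic “find the misconfiguration first” idea.

```python
import socket

# Hypothetical allowlist: the only ports that *should* be reachable on this host.
EXPECTED_OPEN = {22, 443}
HOST = "10.0.0.12"   # placeholder address for an internal server

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for port in range(1, 1025):
        if is_open(HOST, port) and port not in EXPECTED_OPEN:
            print(f"[!] Unexpected open port {port} on {HOST} -- possible misconfiguration")
```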

Proactive testing is crucial because it allows organizations to assume the attacker’s perspective and see their environment the way real threat actors do. This testing enables security teams to identify exploitable security gaps and remediate these issues before attackers ever get a chance. In many organizations this is achieved in the form of pentesting and red-team exercises, and many more mature organizations have threat-hunting teams for this exact purpose.

How do you see proactive security testing evolving in the next few years, and what innovations is Pentera focusing on to stay ahead of the curve?

Pentera’s automation is revolutionizing security testing by shifting organizations from periodic, manual pentests and red-team exercises to continuous, automated real-world attack emulation across all attack surfaces. This is where the world of security testing is heading. To counter the speed and adaptability of threat actors, we need to ensure that our complete IT environments are tested and validated against the tactics, techniques, and procedures that they are using in the wild. We need to get as close to the real hacker as possible.

This year we added a major component by introducing the first solution for automated pentesting in cloud environments: Pentera Cloud.

As organizations continue to embrace the cloud, they redistribute a greater proportion of their cyber risk to their vulnerable cloud environments. Threat actors are adapting and increasingly targeting lucrative cloud assets, to the point that research today shows that 82% of breaches involve data stored in the cloud. Pentera Cloud is moving the industry forward by introducing the first-ever automated, on-demand pentesting solution for the cloud. Pentera Cloud empowers security teams to proactively test their defenses against the cloud-native attack techniques that malicious hackers are actually using. Through Pentera’s safe-by-design attack emulation, security teams gain visibility into exactly how malicious hackers can exploit their current defenses, enabling them to take corrective action and reduce their exposure to cloud-native threats.

But it goes a step further. Attackers are not limited to one environment. Just because an attack starts in your cloud environment doesn’t mean it will end there. Hackers can utilize credentials harvested in your AWS or Azure environments to exploit your on-prem environment (and vice versa). You can’t test your cloud environments in a silo. As part of Pentera’s complete platform, Pentera Cloud applies the creativity of experienced threat actors, utilizing data discovered within your cloud ecosystem to move laterally and exploit on-premises environments. This ensures that your environment is tested against the way hackers really behave.

What are the most important steps an organization should take today to future-proof its security strategy against emerging AI-driven threats?

I am confident that the “assume breach” mindset, along with a layered defense strategy, will remain essential as AI-driven threats continue to evolve. However, no matter how many layers of defense are implemented, gaps will inevitably exist. With the speed and precision of future AI-powered attacks, adversaries will likely be able to effectively identify these gaps. To counter this, organizations must incorporate security validation as a core component of their strategy to find them before malicious attackers do.

We can no longer assume that our defenses, regardless of their complexity, will hold. Validation is essential to eliminate doubt. AI is not an unstoppable force; just like a human threat actor, it relies on finding openings to exploit. The speed at which AI can operate is rendered ineffective when attackers encounter dead ends with no viable paths to progress. Security validation ensures that all potential routes are tested and remediated, preventing adversaries from achieving their objectives.


About the Author

Shauli Zacks is a content editor at SafetyDetectives.

He has worked in the tech industry for over a decade as a writer and journalist. Shauli has interviewed executives from more than 350 companies to hear their stories, advice, and insights on industry trends. As a writer, he has conducted in-depth reviews and comparisons of VPNs, antivirus software, and parental control apps, offering advice both online and offline on which apps are best based on users' needs.

Shauli began his career as a journalist for his college newspaper, breaking stories about sports and campus news. After a brief stint in the online gaming industry, he joined a high-tech company and discovered his passion for online security. Leveraging his journalistic training, he researched not only his company’s software but also its competitors, gaining a unique perspective on what truly sets products apart.

He joined SafetyDetectives during the COVID years, finding that it allows him to combine his professional passions without being confined to focusing on a single product. This role provides him with the flexibility and freedom he craves, while helping others stay safe online.
