Published on: November 26, 2024
Doug Kersten, Chief Information Security Officer at Appfire, brings over two decades of experience in securing IT programs for global financial institutions and law firms. Since joining Appfire, a leading provider of software that enhances platforms like Atlassian, Microsoft, and Salesforce, Doug has spearheaded initiatives such as the award-winning Trust Center and the attainment of internationally recognized security certifications. In this SafetyDetectives Q&A, Doug shares his insights on Appfire’s approach to AI, the importance of human oversight in cybersecurity, and the future of human-AI collaboration in the SaaS and cybersecurity industries.
Can you introduce yourself and talk about your role at Appfire?
I have 20+ years of experience leading security and IT programs for some of the world’s top financial institutions and law firms. I joined Appfire, a leading global provider of next-generation software that enhances, augments, extends, and connects the world’s leading platforms such as Atlassian, Microsoft, monday.com, and Salesforce, in 2021 as Chief Information Security Officer (CISO). In this role, I’m responsible for maintaining effective information security and incident response programs and fostering a positive security culture. We have prioritized the security of our technology since Appfire’s inception, and continue to do so as technology and cyber threats become more advanced.
Appfire is committed to prioritizing the highest standards of data security and compliance. To support this, in 2022 my team launched Appfire’s award-winning Trust Center, which connects customers, partners, and prospects to the latest information on the security, privacy, and compliance of the company’s products and services. This commitment has earned Appfire a series of internationally recognized data security certifications, including the International Organization for Standardization (ISO) ISO 27001; ISO 27017; System and Organization Controls (SOC) SOC 2, Type I; and SOC 2, Type II.
Why do you believe that treating AI like a human is an effective approach, and what inspired this philosophy at Appfire?
Treating AI like a human is a perspective shift that encourages security teams to think of it as a collaborative partner that is prone to making human-like mistakes. By adopting this mindset, organizations can establish checks and balances that keep AI’s autonomy aligned with business goals, ethical standards, and security compliance, much as they would for their human counterparts.
In the event an AI system fails, teams with adequate oversight can identify where it fell short and address the gap before it recurs. By regularly interacting with and directing the AI their organization leverages, teams practice responsible AI use and can ultimately mitigate risks. While AI can provide valuable insights and automate critical routine functions, it should never operate in a vacuum; human oversight remains a critical factor in ensuring AI accountability.
At Appfire, the philosophy of treating AI like a human stems from our belief that AI should enhance, not replace, its human counterparts, and that it has failings much like theirs. This stance equips our organization to navigate the opportunities and challenges AI brings to the enterprise. By treating AI like a human, we can ensure it supports our mission while maintaining the highest security and compliance standards.
In what ways can approaching AI like a team member improve its development and performance?
Approaching AI as a team member transforms it into a collaborative and adaptive resource. By treating AI this way, organizations establish a feedback loop where users and leaders can evaluate its outputs, identify gaps, and fine-tune its performance. This ensures AI evolves alongside business goals while mitigating potential risks from inaccurate outputs.
Much like managing employees, this approach emphasizes setting expectations, monitoring progress, and maintaining accountability. For example, training teams to craft effective AI prompts, as sketched below, improves clarity, consistency, and results. Regular interaction with AI encourages responsible use and helps identify areas for improvement, which ultimately refines its performance.
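As an illustration of the kind of prompt structure such training might standardize, here is a minimal Python sketch; the template fields and constraint wording are hypothetical examples, not Appfire’s actual guidance:

```python
# Hypothetical structured prompt template; fields and constraint wording
# are invented for illustration, not Appfire's actual guidance.
PROMPT_TEMPLATE = """Role: {role}
Task: {task}
Context: {context}
Constraints: answer only from the supplied context; reply "unknown" if unsure.
Output format: {output_format}
"""

prompt = PROMPT_TEMPLATE.format(
    role="security analyst assistant",
    task="Summarize the three highest-risk findings in the report below.",
    context="<paste report excerpt here>",
    output_format="a numbered list, one sentence per finding",
)
print(prompt)
```

A shared template like this gives reviewers a consistent baseline for evaluating outputs, which is what makes the feedback loop described above workable.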
This mindset also helps integrate AI seamlessly into workflows, balancing its autonomy with human oversight. It turns AI into a dynamic partner that grows with the organization and delivers reliable outcomes.
Why is it essential to approach AI responses with a “trust, but verify” mentality, especially in high-stakes fields like cybersecurity?
The ‘trust, but verify’ principle acknowledges that AI, used responsibly, can be a valuable asset to any organization, while recognizing that it’s prone to missteps.
Ignoring the fact that AI can make mistakes and propagate biases is an oversight that can have dire consequences for any organization. The same verification discipline that underpins a healthy cybersecurity posture applies equally to AI.
Rather than relying on AI to generate accurate results every time, teams must remain vigilant, carefully consider AI-generated recommendations, and verify them against established knowledge and real-world conditions.
Additionally, in light of today’s rapidly evolving threat landscape, it’s worth noting that teams — not just security personnel — need to be aware of adversarial manipulation. AI systems can be targeted by attackers who exploit their algorithms to produce false positives or negatives, potentially masking real threats or creating phantom ones. Without proper oversight, teams may unknowingly rely on compromised outputs, leaving systems and data vulnerable. Put another way, you need to know when your AI is sick.
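To make ‘trust, but verify’ concrete, here is a minimal Python sketch of a triage gate that acts automatically only on AI findings corroborated by independent evidence and routes everything else to a human analyst. The model name, indicators, and confidence threshold are hypothetical, not Appfire’s implementation:

```python
# Illustrative "trust, but verify" gate for AI-generated security findings.
# All names, indicators, and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str        # e.g., "ai-triage-model"
    indicator: str     # e.g., an IP address flagged as malicious
    confidence: float  # model-reported confidence, 0.0 to 1.0

KNOWN_BAD = {"203.0.113.7", "198.51.100.23"}  # stand-in threat-intel feed
CONFIDENCE_FLOOR = 0.9                        # below this, a human decides

def triage(finding: Finding) -> str:
    """Act automatically only when the AI's finding is corroborated by
    independent evidence; otherwise escalate to a human analyst."""
    corroborated = finding.indicator in KNOWN_BAD
    if corroborated and finding.confidence >= CONFIDENCE_FLOOR:
        return "auto-block"    # verified against established knowledge
    return "human-review"      # trust, but verify

print(triage(Finding("ai-triage-model", "203.0.113.7", 0.95)))  # auto-block
print(triage(Finding("ai-triage-model", "192.0.2.10", 0.97)))   # human-review
```

The point of the gate is the default: anything the AI asserts without independent corroboration falls back to a person, which also limits the damage an adversarially manipulated model can do on its own.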
How does human oversight contribute to ethical AI use, and what specific practices does Appfire employ to ensure accountability?
Because AI continues to evolve rapidly, it’s crucial for organizations to have systems in place for ongoing risk assessment and for monitoring proper use. By establishing effective human oversight, organizations can hold AI accountable and keep its use ethical. Productive oversight combines policies and processes for mapping, managing, and measuring AI risk with accountability structures that keep teams and individuals empowered, responsible, and trained.
The National Institute of Standards and Technology (NIST) recently released its AI Risk Management Framework, an example of a model designed to help organizations better manage the risks AI poses to individuals, organizations, and society, and to measure trustworthiness.
At Appfire we have defined AI controls that evaluate AI risk, implementation, and use. This includes enhanced procurement and evaluation processes that identify which AI tools are permitted and how they may be used in compliance with legal, security, regulatory, ethical, and operational requirements specifically designed to reduce AI risk. By understanding the interrelationship of these requirements and the specifics of each AI implementation, AI actors in charge of one part of the process gain visibility into, and where appropriate control over, the other parts; a simplified sketch of this kind of permitted-tools check follows.
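As a rough illustration (not Appfire’s actual control set), the following Python sketch shows how a permitted-tools check might deny unvetted tools and unapproved uses by default; the tool names, fields, and sign-offs are invented:

```python
# Hypothetical permitted-AI-tools check for procurement/evaluation review.
# Tool names, fields, and required sign-offs are invented for illustration.
PERMITTED_AI_TOOLS = {
    "example-code-assistant": {
        "approved_uses": {"code-review", "documentation"},
        "requirements_met": {"legal", "security", "privacy"},
    },
}

REQUIRED_SIGNOFFS = {"legal", "security", "privacy"}

def is_use_permitted(tool: str, use: str) -> bool:
    """A tool/use pair passes only if the tool was vetted and the specific
    use case approved; everything else goes back through procurement."""
    entry = PERMITTED_AI_TOOLS.get(tool)
    if entry is None:
        return False                                 # unvetted tool: denied
    if not REQUIRED_SIGNOFFS <= entry["requirements_met"]:
        return False                                 # missing a sign-off
    return use in entry["approved_uses"]

print(is_use_permitted("example-code-assistant", "code-review"))  # True
print(is_use_permitted("example-code-assistant", "data-mining"))  # False
```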
What changes do you predict for human-AI interaction over the next decade in the SaaS and cybersecurity industries?
AI’s role in the SaaS industry specifically will extend beyond traditional chat agents. As organizations evolve their digital strategies, the focus will shift toward agents with clear objectives and goals. This kind of innovation will fundamentally change how work gets done, freeing employees to focus on higher-value tasks.
Additionally, AI will continue to blend seamlessly into the user experience, supporting the core use cases end users struggle with most. I expect AI to become so well integrated across industries that users won’t even realize they’re using it.
In the cybersecurity space, cyberattacks will become even more sophisticated, with layered attacks targeting multiple vulnerabilities simultaneously, rather than focusing on a single point of failure. The evolving threat landscape will pose a significant challenge for organizations that rely too heavily on one layer of defense while neglecting others. With this in mind, more cybersecurity vendors will leverage AI to monitor and react to attacks.