SafetyDetectives spoke with Ted Harrington of Independent Security Evaluators (ISE) about the challenges and responsibilities of ethical hackers, emerging cybersecurity threats, and the vulnerabilities most common in software. He also shared advice for companies looking to improve their security practices.
Can you talk about your background as an author and ethical hacker to introduce yourself?
I’m one of the partners at ISE, a company of ethical hackers. Companies hire us to identify security vulnerabilities in their software, with the goal of enhancing system security. We also publish a lot of research. For example, we were the first company to hack the iPhone and Android OS, and we recently published research about hacking password managers. We also launched and run the IoT Village, which takes place at DEF CON, RSA Conference, and many other conferences across the country and around the globe.
For nearly 12 years, I’ve been viewing the world through the eyes of an ethical hacker. I’ve realized that most of the people I speak with, whether they’re prospective customers, current customers, or individuals I meet after delivering a keynote, tend to have the same questions and issues.
I started thinking about how to solve these problems, and that’s when I realized I needed to write a book, which eventually became the #1 bestseller Hackable: How to Do Application Security Right. In writing it, I realized that the conventional approaches almost everyone recommends for solving these issues are flawed.
Think about how crazy that is: someone wants to solve a problem through technology, they run into issues securing it, and when they ask how to fix those issues, the answers they get are wrong.
I couldn’t stand that anymore, so I decided to write the book to identify and correct those misconceptions about how you actually build better, more secure systems.
How does one ensure that ethical hacking remains within legal and ethical boundaries?
When you’re working for a company or organization that has hired you, the engagement agreement governs the relationship. There usually aren’t too many legal or ethical issues, because the client is saying, “Here’s our tech, here’s how it works, and here are our engineers.”
In this scenario, you rarely run into ethical or legal issues because you have an agreement with the customer. As long as you don’t violate that contractual agreement, disclose your findings only to the company, and don’t release them online, you’re typically fine.
Security research is slightly different because when you’re researching an organization, they’re usually not aware that it’s happening, and if they are aware, they likely don’t want to participate. In fact, they probably don’t want you to do it at all, because they don’t want their issues written about in the press or released online.
In this case, the area where most people stumble is failing to adhere to what’s called responsible disclosure. Responsible disclosure is a practice whereby ethical hackers and security researchers disclose the issues they find to the affected company. Once an exploitable vulnerability is discovered, the researcher sends a summary of the vulnerabilities and how they can be exploited, along with recommendations on how to fix them, if possible. A timeline is articulated, typically between 30 and 90 days. Ideally, the afflicted company collaborates with the researcher to fix the vulnerability. It’s fairly common for a company to ignore the researcher, though, so once the disclosure timeline expires, the researcher is in the clear to publish – although they should withhold the attack details if the issue isn’t fixed. However, sometimes researchers don’t inform the afflicted company first, don’t give them enough time, or release the full attack details before the issue is fixed. All of those lead to potential issues.
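To make that timeline concrete, here is a minimal Python sketch of the disclosure logic described above. The 90-day default and the function names are illustrative assumptions, not a description of any specific researcher’s or company’s policy:

```python
from datetime import date, timedelta

# Assumed default window; real policies typically range from 30 to 90 days.
DISCLOSURE_WINDOW_DAYS = 90

def publication_date(reported_on: date, window_days: int = DISCLOSURE_WINDOW_DAYS) -> date:
    """Earliest date a researcher may publish if the vendor never responds."""
    return reported_on + timedelta(days=window_days)

def may_publish(reported_on: date, today: date, vendor_fixed: bool) -> bool:
    # Publish once the vendor has shipped a fix, or once the window expires.
    # (Even then, attack details should be withheld if the issue isn't fixed.)
    return vendor_fixed or today >= publication_date(reported_on)

reported = date(2023, 1, 15)
print(publication_date(reported))                                   # 2023-04-15
print(may_publish(reported, date(2023, 2, 1), vendor_fixed=False))  # False
```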
Another area of concern has to do with the law. In the United States (and many other nations around the world) there are clearly defined laws about what constitutes a computer crime. So it is incumbent upon security researchers to know when they cross the boundary from research into crime – even if by accident.
At the conclusion of a research project, most researchers want to go present their findings at security conferences. This advances the state of the security community, while also, of course, elevating the professional profile of the individual or group who conducted the research.
What makes ISE stand out from other competitors?
There’s a pretty big problem in the security industry: confusion around what different terms mean. The most commonly used – and most commonly misunderstood – term in the world of security testing is “penetration testing.” This is a specific service that entails a heavily manual component performed by a skilled hacker. Unfortunately, the term is often used to refer to lightweight vulnerability scanning.
We differentiate from scan-based approaches by applying a much more rigorous, manual, white-box assessment methodology. This enables our clients to find more issues that matter, and know how to fix them.
Among the few companies that, like us, perform security assessments primarily manually, we differentiate by focusing on application security (rather than IT or network security, though we do those as well). We also run hacking events like IoT Village, and we literally wrote the book on appsec: Hackable.
What are the security challenges or emerging threats that organizations should be aware of with their software?
What’s really interesting about that question is that the more things change, the more they stay the same.
We can think about any emerging tech, whether it’s artificial intelligence, machine learning, blockchain, cryptocurrency, or cloud – pick your poison. People are going to say, “Everything’s different now, how we think about security with tech is really different.”
While the application of the principles is, to some extent, different, the principles of how you actually secure systems remain largely unchanged. When we think about what causes security vulnerabilities today, they’re the same as they were 10, 20, or even 30 years ago. These principles are based on things like: the more complex a system is, the more opportunity there is for an exploitable issue; and the more permissions and access you grant different types of users, the more problems it creates.
To answer your question more directly, the majority of issues boil down to one of two problems: authorization and authentication.
Most vulnerabilities fundamentally undermine one or both of those principles. For example, you might bypass authorization entirely and say, “It doesn’t matter who I am, I have access now.” That’s a failure of the system to actually verify permissions.
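A hypothetical Python sketch of that failure: the system authenticates the user (it knows who they are) but never checks authorization (whether they may access the record). All names here are illustrative, not from any real codebase:

```python
# Toy data store: each invoice records its owner.
INVOICES = {1: {"owner": "alice", "amount": 120}, 2: {"owner": "bob", "amount": 75}}

def get_invoice_vulnerable(invoice_id: int, current_user: str) -> dict:
    # Authentication established who current_user is, but we never verify
    # they're allowed to see this invoice -- broken authorization
    # (a classic insecure direct object reference).
    return INVOICES[invoice_id]

def get_invoice_fixed(invoice_id: int, current_user: str) -> dict:
    invoice = INVOICES[invoice_id]
    # Authorization: confirm the authenticated user actually owns the record.
    if invoice["owner"] != current_user:
        raise PermissionError("not authorized to view this invoice")
    return invoice

# "It doesn't matter who I am, I have access now":
print(get_invoice_vulnerable(2, "alice"))  # bob's invoice leaks to alice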
What advice would you give to organizations or individuals looking to strengthen their security practices based on your experience and research findings?
The place to start is to ensure you understand your threat model. A threat model is essentially an exercise that helps you answer three questions:
- What do I want to protect? These are tangible things like customer information, business intelligence, private intellectual property, or data. They also include intangible things like brand reputation or trustworthiness. Ask yourself why each of these matters to you.
- Who do I want to defend against? Who is interested in the data you’re protecting? Is it of interest to nation-state actors, organized criminals, those engaged in corporate espionage, hacktivists, or just people who enjoy the challenge?
- Where will I be attacked? This involves thinking about anywhere someone can interact with your system: input fields, security functionality, or even human elements. These are known as attack surfaces.
Once an organization can answer these questions, it can determine how much money, time, and effort to invest in security, and where to invest it.
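One way to make the exercise tangible is to capture the three questions as data. Below is a minimal, hypothetical Python sketch; the field names and example entries are assumptions for illustration, not a formal threat-modeling methodology:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    assets: list[str] = field(default_factory=list)            # what do I want to protect?
    adversaries: list[str] = field(default_factory=list)       # who do I want to defend against?
    attack_surfaces: list[str] = field(default_factory=list)   # where will I be attacked?

model = ThreatModel(
    assets=["customer information", "intellectual property", "brand reputation"],
    adversaries=["organized criminals", "corporate espionage", "hacktivists"],
    attack_surfaces=["input fields", "security functionality", "staff (social engineering)"],
)
print(model)
```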