Interview With Amber Scott and David Vijan - Co-Founders At Outlier

Shauli Zacks
Content Editor

In a candid interview with SafetyDetectives, Amber Scott and David Vijan, co-founders of Outlier Compliance Group, delve into the intricacies of anti-money laundering (AML) and data privacy in the evolving landscape of financial regulation. With backgrounds as former bankers turned compliance experts, Amber and David offer a unique perspective on the challenges and innovations shaping AML strategies today.

Can you please introduce yourself and talk about your role at Outlier?

Amber: Hi, I’m Amber Scott, the co-founder and CEO at Outlier Compliance Group. David and I were both previously bankers, working in the compliance space. For me, the idea for Outlier started once I left banking and started working in the consulting space. I saw how the leverage model worked, which was the idea that, essentially, if you throw enough smart folks at a problem, you can solve it. This was really different from the approach that Malcolm Gladwell espoused in his book Outliers, which is the idea that to be terribly good at something, you have to practice it a lot, roughly 10,000 hours.

When Outlier was founded, the idea was really that everyone on the team would have at least 10,000 hours of in-house compliance experience, so that people would understand compliance, how organizations work, and how operationalizing those concepts really worked in the long term.

David: Hi, I am David Vijan. I am a co-founder and CRO here at Outlier. We are an AML consulting firm, a compliance consulting firm, that specializes in AML, privacy, and other regulatory compliance consulting matters.

With financial crime tactics becoming more sophisticated, what sets your AML solution apart from others in detecting these threats?

Amber: I think it’s important to preface that our solutions are really consulting services, as opposed to software. When it comes to software, I won’t say that we’re exactly software agnostic, because we do recommend solutions and we always look for those solutions to be a good fit for our clients. However, in theory, we could work with any software solution.

I think that there are always two really important considerations.

  1. Does the software in question meet the regulatory requirements? That is, is it up to the regulator’s expectations in terms of what needs to be implemented?
  2. Does it manage the risk effectively?

Ideally, both of those conditions are met.

How does artificial intelligence and machine learning play a role in your solution’s detection and reporting capabilities?

David: As Amber mentioned, our wheelhouse is not in software-related solutions per se. AI in general is great, but we have to remember the rule of garbage in, garbage out. That’s definitely something we have to keep in mind here. AI really has to be understood by compliance staff.

We’ve seen compliance teams play around with AI, and they’re trying to develop policies and procedures using it. And while it does spit out something, it doesn’t have the level of detail that would meet the expectations of the regulator. It wouldn’t pass muster.

That’s a very important piece of the process: it needs to be explainable to the regulator, and it also needs to meet their requirements and expectations. Because at the end of the day, it’s the regulator’s expectations that we’re really trying to satisfy.

Also, with AI, the rationale for decisions needs to be able to be translated into human-readable language. If you present something to someone and they’re not able to recreate or understand it, it doesn’t really meet the needs of our regulatory obligations or deliver the capabilities we need it to.

Amber: This is incredibly important in an examination context with your regulator. If you’re an in-house compliance person, you’re going to be called upon to explain how you came to a certain decision. The answer can’t be “I did what the robot told me to do”, “it came out of a black box”, or “we don’t understand the rationale for a decision”. It has to be something that you can translate into human-readable, human-understandable language, and that needs to be part of your documentation all the way down.
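To illustrate the kind of decision trail Amber describes, here is a minimal sketch of how an alert decision might be captured in plain language alongside any automated score. The record fields, method, and example values are hypothetical, not Outlier’s tooling or any regulator’s prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlertRationale:
    """A human-readable record of why an alert was raised and how it was decided."""
    alert_id: str
    customer_id: str
    triggered_rules: list[str]          # plain-language descriptions of what fired
    supporting_facts: list[str]         # observable data points behind each rule
    model_score: float | None = None    # optional score from an automated model
    analyst_decision: str = ""          # e.g. "escalated" or "closed - explained by payroll"
    analyst_reasoning: str = ""         # the analyst's own words, not the model's
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def to_examination_summary(self) -> str:
        """Render the decision trail as plain language for an examiner."""
        lines = [f"Alert {self.alert_id} for customer {self.customer_id}:"]
        lines += [f"  Rule triggered: {r}" for r in self.triggered_rules]
        lines += [f"  Supporting fact: {f}" for f in self.supporting_facts]
        if self.model_score is not None:
            lines.append(f"  Automated score: {self.model_score:.2f} (advisory only)")
        lines.append(f"  Decision: {self.analyst_decision} - {self.analyst_reasoning}")
        return "\n".join(lines)
```

The point of a structure like this is that the automated score is advisory, while the documented reasoning is the analyst’s own and can be read back to a regulator without reference to a black box.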

How do you approach data privacy and security, especially when dealing with sensitive financial data?

Amber: I think it’s important to acknowledge that there’s a natural tension between anti-money laundering (AML) and privacy. For us at Outlier, as a service firm, we consider it very important to minimize the amount of data and personal information that we ingest, particularly when we’re talking about our clients’ customers.

However, that’s not always practical or even possible for our clients who have very different requirements. From their perspective, it’s always important to understand:

  • Where the data lives across various systems
  • How you are using that data
  • How different systems communicate with one another, both your own internal systems and the vendor systems you’re going to use to perform various functions

Having a solid mapping of where that personal information, or PI, lives and how that PI is used is incredibly important, and that mapping needs to be kept up to date on a regular basis.

At the other end, it’s not just about knowing what’s happening during that lifecycle; you also need a plan to anonymize or purge PI that’s no longer required or no longer in use.

There’s a funny thing about data: when we’re holding on to personal or sensitive information, the risk associated with that data never goes away. It can actually increase over time, while the usefulness of the data decreases. So you have something that stays risky but doesn’t stay useful to you. That alone needs to be a motivator to look at how we age off data and move away from retaining it forever when it no longer has a use for us, and when we couldn’t justify holding it if it became problematic.

David: Those are very important pieces. In our consulting services, we often see clients that don’t know where their data lives. It’s really important to have it mapped. Under privacy legislation, and we’re not really going to get into that here, there are principles, and one of them is limiting use. Consent is given for a certain purpose, and sometimes we hear the business say, “Oh, well, we’ll use the data for something else later.” Well, there’s a whole other consent requirement you have to go back for. To Amber’s point, is there really a reason to hang on to data as it ages? Yes, in some cases there are regulatory requirements, but we’ve seen data that goes back 10–20 years still sitting in organizations’ systems. Is there a reason it’s still there, and what is the risk? It’s probably not worth hanging on to it that long.
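As an illustration of the data mapping and retention planning Amber and David describe, here is a small hypothetical sketch. The inventory entries, system names, and retention periods are invented for the example and are not a compliance standard:

```python
from datetime import date, timedelta

# Hypothetical PI inventory: where each data element lives, why it is held,
# and how long it may be retained. Systems and periods are illustrative only.
PI_INVENTORY = [
    {"element": "customer_name",   "system": "core_banking",  "purpose": "KYC",         "retention_years": 5},
    {"element": "customer_name",   "system": "crm_vendor",    "purpose": "marketing",   "retention_years": 2},
    {"element": "transaction_log", "system": "tx_monitoring", "purpose": "AML reports", "retention_years": 5},
]

def records_due_for_purge(records: list[dict], today: date) -> list[dict]:
    """Flag records whose retention period has lapsed so they can be anonymized or purged."""
    due = []
    for rec in records:
        retention_years = next(
            item["retention_years"] for item in PI_INVENTORY
            if item["element"] == rec["element"] and item["system"] == rec["system"]
        )
        if today - rec["collected_on"] > timedelta(days=365 * retention_years):
            due.append(rec)
    return due

# Example: a record collected seven years ago against a two-year retention period.
old_record = {"element": "customer_name", "system": "crm_vendor",
              "collected_on": date.today() - timedelta(days=365 * 7)}
print(records_due_for_purge([old_record], date.today()))  # flagged for anonymization or purge
```

Even a simple inventory like this answers the two questions raised above: where the PI lives, and whether there is still a justification for holding it.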

Can you discuss the significance of real-time monitoring versus batch processing in AML detection and reporting?

David: There is definitely value in having both approaches, and often you need both. Real-time is going to help with certain things, such as fraud in progress, that need to be caught right away. An example of that is listed-person or sanctions screening. Those are transactions that you want to stop, and that’s where real-time is going to be really important.

But sometimes batch processing is needed because it actually learns. It detects longer-term transaction patterns that will help you with different types of alerts. It’s important to look at those patterns over time and adjust the parameters so that the system adapts and evolving patterns become the new normal.

Amber: Absolutely. Nothing stays the same, except for the idea that things will change eventually.
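To make the distinction concrete, here is a rough sketch of the two approaches. The list contents, thresholds, and function names are illustrative assumptions, not any particular vendor’s implementation:

```python
# Placeholder for a real sanctions or listed-person screening list.
SANCTIONED_PARTIES = {"ACME SHELL CO", "J. DOE"}

def screen_in_real_time(payment: dict) -> bool:
    """Real-time check: hold the payment before it settles if a party is listed."""
    parties = {payment["sender"].upper(), payment["beneficiary"].upper()}
    return parties.isdisjoint(SANCTIONED_PARTIES)  # False means hold the transaction

def batch_pattern_review(transactions: list[dict], threshold: float) -> list[str]:
    """Batch check: review a period of history for customers whose aggregate
    activity crosses a tunable threshold that is adjusted as patterns evolve."""
    totals: dict[str, float] = {}
    for tx in transactions:
        totals[tx["customer_id"]] = totals.get(tx["customer_id"], 0.0) + tx["amount"]
    return [cust for cust, total in totals.items() if total >= threshold]

# Usage
ok = screen_in_real_time({"sender": "Acme Shell Co", "beneficiary": "Jane Smith"})  # False -> hold
flagged = batch_pattern_review(
    [{"customer_id": "C1", "amount": 6000.0}, {"customer_id": "C1", "amount": 5500.0}],
    threshold=10000.0,
)  # ["C1"]
```

The real-time path acts on a single transaction before it completes; the batch path only makes sense over accumulated history, which is why the two are complementary rather than interchangeable.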

That segues nicely into our next question. How do you see the future of AML evolving, especially with the advent of new payment methods and financial technologies?

I think it’s important to say that monitoring at scale is impossible without technology solutions. We still, from time to time, see cases where people say all of their monitoring is manual. I think we’re coming into a space where that’s not going to be the expectation of regulators at all, and it’s important to note that. There is an expectation that we’re using some kind of technology solution, and those solutions are going to continue to evolve.

The best solutions, in my opinion, consider the whole scope of a customer’s activity. This means their activity across different products and services. For example, if a customer has a mortgage, checking account, and credit card with us, we’re not looking at the risks of each of those products in isolation. We’re seeing the scope of the activity across all the products and services that the customer is using with us.

We’re also looking at the changes in patterns over time. We’re bringing in open-source intelligence, or OSINT: what do we know about that customer from different potential sources? Where there’s virtual currency, we’re also looking at the risks that can be incurred from on-chain activity. If we know that a certain wallet is associated with that customer, we’re looking at the risk of that wallet, not just in the transactions that are happening with our institution; we’re able to monitor the general risk level of that wallet over time and what that wallet is interacting with.

Similarly, we can see connections between customers: groups of people and entities that transact with each other, people who may own companies or entities together, sit on boards together, those types of things where you have multiple touchpoints between individuals. In particular, if one of those individuals suddenly becomes high risk, that can trigger us to take a look at the other individuals to see if they may be involved in similar activity that would also change their risk ratings.

I think one of the biggest challenges is still data across various regions and across various languages. As we move more towards open banking and open data, I think this becomes very interesting because there are a number of external data points that we’ll be able to pull in and use in terms of monitoring and risk in very novel ways that we don’t necessarily see today.
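As a rough illustration of the relationship-level monitoring described above, here is a hypothetical sketch that combines product mix, OSINT, on-chain exposure, and connected parties into one view. The factor names, weights, and escalation rule are invented for the example, not a prescribed methodology:

```python
# Illustrative, relationship-level risk scoring rather than product-by-product scoring.
PRODUCT_RISK = {"mortgage": 1, "checking": 2, "credit_card": 2, "virtual_currency": 4}

def customer_risk_score(products: list[str],
                        osint_adverse_media: bool,
                        wallet_exposure_score: int,
                        connected_high_risk_parties: int) -> int:
    """Combine product mix, open-source intelligence, on-chain exposure,
    and connections to other high-risk parties into one customer-level score."""
    score = sum(PRODUCT_RISK.get(p, 1) for p in products)
    if osint_adverse_media:
        score += 3
    score += wallet_exposure_score            # e.g. from a blockchain analytics provider
    score += 2 * connected_high_risk_parties  # shared ownership, boards, counterparties
    return score

def should_rereview_connected_parties(score: int, previous_score: int) -> bool:
    """If one party's risk jumps, trigger a look at the people and entities around them."""
    return score - previous_score >= 5

score = customer_risk_score(["checking", "credit_card", "virtual_currency"],
                            osint_adverse_media=True,
                            wallet_exposure_score=4,
                            connected_high_risk_parties=1)
print(score, should_rereview_connected_parties(score, previous_score=6))  # 17 True
```

The value of scoring at this level is exactly the point made above: a spike in one customer’s risk, or in a connected wallet or counterparty, can prompt a fresh look at everyone linked to them rather than at a single product in isolation.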

About the Author
Shauli Zacks is a content editor at SafetyDetectives.

He has worked in the tech industry for over a decade as a writer and journalist. Shauli has interviewed executives from more than 350 companies to hear their stories, advice, and insights on industry trends. As a writer, he has conducted in-depth reviews and comparisons of VPNs, antivirus software, and parental control apps, offering advice both online and offline on which apps are best based on users' needs.

Shauli began his career as a journalist for his college newspaper, breaking stories about sports and campus news. After a brief stint in the online gaming industry, he joined a high-tech company and discovered his passion for online security. Leveraging his journalistic training, he researched not only his company’s software but also its competitors, gaining a unique perspective on what truly sets products apart.

He joined SafetyDetectives during the COVID years, finding that it allows him to combine his professional passions without being confined to focusing on a single product. This role provides him with the flexibility and freedom he craves, while helping others stay safe online.
