SafetyDetectives spoke with John Tomaszewski, co-chair of Seyfarth Shaw’s global privacy and security practice, about a wide range of topics pertaining to privacy and data, how companies can remain compliant while adopting new technologies, common misconceptions about privacy and security laws, and more.
Could you please introduce yourself and describe your role as head of global privacy and security at Seyfarth?
I’m John Tomaszewski, the co-chair of Seyfarth Shaw’s global privacy and security practice. I am one of the few data protection lawyers who exclusively focus on this field. There are many components to data protection, and it has been my primary focus since the late 1990s.
Seyfarth is an Am Law 100 full-service firm that primarily represents corporations. We cover labor and employment work, mergers and acquisitions, compliance, real estate, and immigration. One of the things that’s important about what I do, and what Seyfarth does with the privacy and cybersecurity practice, is that privacy and cybersecurity never exist in a vacuum. They always involve components from other areas of law. Consequently, we have lawyers specializing in labor and employment, corporate governance, technology transactions, and intellectual property, all working with privacy law. As a co-chair, my role is like an air traffic controller’s, overseeing all areas of data protection law, both in the United States and globally.
I was one of the first Chief Privacy Officers as a result of the Gramm-Leach-Bliley Act in 2000. I served as the General Counsel for a privacy certification company in San Francisco and was part of the US delegation to APEC, working on cross-border privacy rules systems throughout Asia. My expertise spans a wide range of data protection regimes, including HIPAA, the Fair Credit Reporting Act, and many others.
Having spent 13 years in-house, I have developed a comprehensive understanding of various aspects of the field. When you’re in-house, you have to know it all. You don’t get to focus on one thing. Your employer expects you to come up with an answer to solve their problem, not to find someone else to handle it.
Therefore, because I have that broad reach of expertise, I end up operating as an air traffic controller. For example, when dealing with a mobile workforce issue out of Germany, I need to know who can handle not only privacy work but also understand things like right-to-work laws and regulations in the country. So there’s a lot of identifying other areas of law that are important along with data protection laws.
I know you mentioned it briefly, but can you talk about Seyfarth, and who are your typical clients?
Seyfarth started in Chicago as a labor and employment firm and has now grown to the 63rd largest firm in the US, based on revenue. We have offices in 17 cities across four continents – we’re probably the biggest law firm you’ve never heard of.
Our expertise covers a broad range of areas, with strong practices in the following:
- Labor and employment (both US and international)
- Corporate governance
- Technology transactions
- M&A
- Immigration
- Litigation across a lot of different sectors
- Real Estate
These days I’m doing a lot of support work with our class action defense team, handling cases like Meta Pixel lawsuits and similar matters. So, in terms of litigation, we primarily focus on corporate defense and corporate litigation. Our practice areas are similar to those of any other Am Law 100 firm, and as a consequence, when someone asks what type of law we practice, the answer depends on who is asking.
How do you see the relationship between privacy and security law and the management of data as a capital asset?
A lot of people confuse privacy and security law as the same thing, but they’re not. One is about data access; the other is about data use.
The way I like to explain it is:
- Security – Do you have the right key to the door, and can you get in the house?
- Privacy – Once you’ve gotten in the house, which chairs do you get to sit on?
Privacy actually presumes that you have access to the data, right? Because if you don’t have access rights to the data, then it becomes a security question. For example, were you properly authenticated, and do you have the right kind of credentials to access the system or dataset? They overlap because if you don’t have security, then you can’t have privacy, as you can’t limit use without limiting access. Privacy is about how data is used and the expectations around its use vis-à-vis the various participants in the ecosystem.
One of the challenges with privacy that many people face these days is approaching it from the perspective of ownership, like intellectual property (IP). While IP is an important component, you have to remember that there will always be at least two parties, if not more, that have use rights in personal data.
For example, your social security number is a classic piece of personal data. But you don’t own that; the federal government does. However, you have rights to it, as do the Feds, your bank, and anybody else who needs it as a means of reporting income to the IRS. So, people need to remember that there will always be other stakeholders in the ecosystem that may have rights to use that data.
Figuring out how to use data from a corporate perspective is, in many ways, like figuring out how to use any other capital asset. For instance, someone who owns a car rental agency has a fleet of vehicles, but how they use the vehicles is limited by regulation. That’s a capital asset. With data, if you have databases full of personal data, how you use that data, as we see in Europe, is limited by regulation.
In many ways, data as a capital asset is no different from any other capital asset. It’s got regulations around it, and how you use it is a function of not just legal risk but also reputational and operational risks. Privacy practices and security practices, in many ways, are tools that you use to manage those “buckets” of risk.
How can a business effectively protect its data while adopting new and emerging technologies like with AI, IoT, and Blockchain?
Technology in and of itself is neither good nor bad – it just is. Very much like the laws of physics, where every action has an equal and opposite reaction, every potential has an equal and opposite potential. What I mean is that a very powerful piece of technology, like artificial intelligence, has massive potential for incredible positive impact. On the other hand, it has equal potential for incredible negative impact. So, very much like managing any other capital asset, understanding the risks of misuse is just as important as understanding the benefits of positive use.
Generally, when you’re facing any technology, whether it’s IoT, AI, or blockchain, a lot of folks will come in and look at it from only one side of the continuum. They’ll just say, “Oh, this is great, it can do all these things that are so wonderful, etc., etc.” Then, they jump in headfirst. However, you need to actually go through an analysis of what the potential downsides and risks might be associated with the technology. Once again, it’s like any other capital asset, or any other technology. The upside of nuclear power is it’s clean and lasts for a long time, but the downside of nuclear power is it can kill everybody.
Once you understand the potential negative impacts, you can take steps to mitigate them. Ultimately, the biggest challenge with new technology of any sort is trust by the players in the ecosystem. You have to ask yourself, can I trust this new technology? Is it scary, spooky, or does it make me uncomfortable? One of the ways that you can manage that, once you recognize the downsides and put controls in place to manage them, is to communicate to your team or users what those controls are.
I’m not just talking about legal controls; it’s legal, operational, and reputational controls. There are a lot of things you can do to someone from an operational perspective that would freak them out, reducing the level of trust in the entire ecosystem. If that happens, you’ll have problems with adoption and retention of whatever business model is using the new technology.
In your experience, what are common misconceptions, myths, or misunderstandings that companies have about privacy and security laws?
The first one is that privacy and security are the same thing.
The second one is that privacy and security are a compliance problem. They’re not, and this ties into what I was just talking about in terms of trust in the ecosystem. Privacy and security are states that need to be maintained so that all of the players in the ecosystem are comfortable with how data is being used, protected, and developed.
A lot of folks look at it from the perspective of, “Oh, well, we have to abide by the law.” When I was in-house, and when I was a chief privacy officer, I would go to a CFO and ask for money for a project that I needed to take care of to comply with the law. If I said I needed this from a compliance perspective, he’d say, “How small of a check do I have to write?” If I went to him with exactly the same solution and said, “This will actually help drive adoption and retention of your products and services,” he’d say, “How big of a check do you want?”
It’s really a function of understanding that good privacy practices and good security practices are actually good for everybody. It’s not just a compliance question. The best marketers that I know understand good privacy practices; you don’t want to spam people. It doesn’t generate conversion. You don’t want to do weird stuff with people’s data because they stop trusting you and they stop engaging with you.
Data is a very short-lived asset in terms of its utility. The ability to continue to manage it and use it as a valuable asset is more than just a legal function. It’s a trust function. And that’s a marketing exercise as much as it is a legal exercise.
So, a lot of folks get stuck on just the legal component, not really understanding the underlying first principles as to why it’s there and what we’re trying to achieve.
That’s the biggest misconception that I see in terms of how people view privacy and security. It’s a lot more than just compliance or regulatory functions.
The last thing that I would offer is privacy is local, but data is global. Privacy, specifically as a cultural concept, is subjective. It’s not necessarily objective like security is. Depending on where you are in the world or what demographic you’re dealing with, the expectation of privacy is going to be very different. We’ve seen privacy go from the Brandeis model, which is your right to be left alone, to the Facebook model, which is you have all this data that I’m giving you, just don’t do things to hurt me with it.
How has the landscape of privacy and security law changed over the past few years? And are there any trends that you expect to see in the near future?
This is a bit of a historical answer. Europeans like to say that they invented privacy in the ’90s, but they didn’t. It actually came out of the Fair Information Practice Principles from what used to be HEW – the Department of Health, Education, and Welfare – in the United States back in the ’70s.
Those principles were the foundation for the privacy guidelines of the OECD (Organisation for Economic Co-operation and Development), which is primarily a European organization. The OECD privacy principles are the basis for pretty much every privacy system out there, and the United States actually created them.
So what we’ve seen is those principles get implemented culturally in different ways. The underlying principles are the same, but one size does not fit all. In a way, privacy hasn’t evolved that much. What has evolved is the volume and velocity of data collection about people.
Regulation tends to be reactive and reacts to things that regulators and politicians care about, mostly getting re-elected. So if society is concerned about a particular topic, regulators tend to get concerned about it as well.
What we’ve seen is the trend in data protection following the trend in technology. Twenty-five years ago, there was no iPhone or Facebook. When I started writing code in the late ’70s, there was no internet. I had an ARPANET email address when I was in college. So the idea of privacy evolving is looking at it from the wrong angle; technology has evolved, and the first principles have had to be applied differently.
It’s this weird situation where privacy has evolved, but it hasn’t. We’re encountering new technologies that create drastically different data collection opportunities and data use practices. As a result, we’re starting to see a convergence around how people are beginning to regulate data protection. I won’t necessarily call it the European model because it’s not, even though they look the same. Asia is primarily a consent-based region, Europe is not consent-based, and the United States has a hybrid approach.
We’re seeing a convergence in how data protection is regulated, but it still has a highly localized flavor to it. So, it’s evolving in a spiral instead of a circle. It’s moving forward but in a somewhat predictable way.