SafetyDetectives spoke with Colin Constable, CTO and co-founder of Atsign, about taking control of your online identity, the advantages of the atProtocol, improving the security posture of IoT devices, and more.
Hi Colin, thank you for taking some time to speak with me today. Can you talk about your journey, and what motivated you to establish Atsign?
Around 10 years ago, I was in an accelerator in Silicon Valley with my co-founder Kevin Nickels. We were frustrated with the repetitive nature of providing personal information on different websites and applications. It bothered me that I had to enter my name, password, credit card details, and address repeatedly. Kevin posed the question, “What are we going to do about it?” and I responded with skepticism, “What can we do about it?”
Kevin had a broader vision. He pointed out that when you give away your information, you’re often treated poorly because the company simply wanted your data. This sparked our curiosity about finding a solution. We wanted to simplify asserting one’s identity on the internet. If I owned my online identity, I could have control over my data. Instead of answering numerous questions, I would simply trust another entity to handle authentication for me. This led to the genesis of our idea.
The question we faced was how to establish identity on the internet in the simplest way possible. Both Kevin and I had been working in the identity space for many years. We knew that identity was a challenging problem, especially in the context of the internet. Companies like Microsoft had attempted solutions, such as Passport, but we didn’t want to follow the same path. We sought the opposite approach: the simplest way to assert identity. After some coffee-fueled brainstorming sessions, we realized that a unique string could serve as an identity marker.
We referred to this unique string as an atSign. For example, my atSign is @Colin, Kevin’s was @Kevin, and our co-founder Barbara, who serves as the CEO, has @Barbara. These unique strings can be assigned to individuals, entities, or objects. It could be an IoT device, a document, or even a song. The common thread is that each has an identity represented by an atSign. By owning the atSign, you also possess the cryptographic keys. If I trust someone, I can grant them access to my data. This concept made it easy to delegate access to others. As a reporter, you know that the answers you receive depend on who is asking the question.
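To make that idea concrete, here is a minimal Python sketch of the principle: an atSign is a unique string bound to a cryptographic key pair, and proving ownership of the atSign means proving possession of the private key. The names and flow below are illustrative assumptions, not Atsign’s actual implementation.

```python
# Sketch: an atSign as a unique string bound to a key pair (illustrative only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# "Cutting" an atSign: generate a key pair and bind it to the unique string.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
atsign = "@colin"  # the human-readable identity marker

# Asserting identity: sign a challenge issued by whoever wants proof.
challenge = b"random-challenge-bytes"
signature = private_key.sign(challenge)

# Anyone holding the public key can check that the signer controls @colin;
# verify() raises InvalidSignature if the proof fails.
public_key.verify(signature, challenge)
print(f"{atsign} verified")
```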
This design consideration was crucial. We were working on this idea ten years ago when only two prominent technologies were available: DNS and blockchain. DNS already translated human-readable strings into IP addresses, aligning with our vision. We aimed to merge DNS and blockchain to address the problem effectively.
However, we encountered scalability issues with this approach. Blockchain was not designed for rapid scalability, and people often need to modify or delete data, which blockchain does not easily accommodate. Due to these technological limitations, we put the project on hold, even though we believed it was a great idea.
Around four years ago, Kevin and I revisited the idea. We reevaluated our portfolio and still saw the potential in our concept. The available technology had advanced, enabling us to move forward. We utilized in-memory databases that could scale both vertically and horizontally. These databases let us build a naming service that maps billions of atSigns to DNS addresses with reasonable performance. We used Redis and similar technologies for this purpose. Additionally, edge devices like cell phones, IoT devices, and even SIM cards now have the capability to generate cryptographic keys efficiently.
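As a rough sketch of that naming-service idea, the snippet below maps an atSign to the address of its server in Redis. The key scheme, host name, and port are invented for illustration; they are not Atsign’s real directory records.

```python
# Sketch: an in-memory directory mapping atSigns to server addresses.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Registration: bind an atSign to the host:port where its server lives
# (hypothetical address).
r.set("atsign:@colin", "abc123.example.net:6464")

# Lookup: any client resolves the atSign to an address before connecting.
address = r.get("atsign:@colin")
print(address)  # -> "abc123.example.net:6464"
```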
In year one, we started developing the atProtocol. However, by year two, we realized that people were more interested in an SDK that allowed them to create their own applications using atSigns. Recently, we’ve seen a growing need for security and encryption in the IoT sector, an area the industry tends to overlook. Without these safeguards, we expose ourselves to potential threats from malicious actors.
What led to the development of the atProtocol, and what specific challenges or gaps in the market does it aim to address?
The development of the atProtocol was driven by an entrepreneurial spirit and the goal of creating a new internet protocol that could bridge the gap between a central directory of unique strings and secure communication between individuals. The aim was to enable individuals to have their own atSigns and generate their own cryptographic keys, ensuring secure and private communication between parties.
The atProtocol addresses two main gaps in the market:
- Communication between people, devices, and things: The atProtocol allows seamless communication between people, devices, and entities, regardless of their location on the internet. It does this without requiring inbound ports, which network address translation typically blocks, so devices and things can reach each other despite that limitation.
- Encryption and security: The atProtocol itself does not handle encryption. To address this gap and provide encryption capabilities, an SDK was developed on top of the atProtocol. This allows people to incorporate their own encryption algorithms if they prefer not to rely on off-the-shelf libraries. By utilizing the SDK, users can ensure end-to-end encryption for their communications.
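As a loose illustration of the end-to-end principle the SDK provides, here is a minimal Python sketch using the `cryptography` package rather than the Atsign SDK: data is encrypted on the owner’s device, so the infrastructure in the middle only ever handles ciphertext.

```python
# Sketch: client-side encryption so intermediaries only see ciphertext.
# Illustrative only; the Atsign SDK's actual scheme may differ. A real
# deployment would share the symmetric key with the recipient via their
# public key; a single shared key keeps this sketch short.
from cryptography.fernet import Fernet

# Key generated and held on the data owner's device, never on the server.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"home address: 1 Example Street"
ciphertext = cipher.encrypt(plaintext)

# This opaque blob is all the infrastructure ever stores or forwards.
print(ciphertext)

# Only a party the owner shared the key with can recover the data.
print(cipher.decrypt(ciphertext))
```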
The benefits of using the atProtocol and SDK are manifold:
- No network attack surface: Devices or things using the atProtocol do not require open ports, minimizing potential security risks.
- Ease of communication: As long as the device or thing has internet connectivity, it can communicate with any other atSign-enabled device without the need for network administrators or port configurations.
- Event-driven mechanism: The atProtocol uses an event-driven mechanism, eliminating the need for continuous polling and enabling fast, near real-time communication. Measurements show a latency of approximately 50 milliseconds of processing overhead, plus speed-of-light transit time, to move information from one atSign to another.
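To illustrate the outbound-only, event-driven pattern behind those last two points, the sketch below has a device dial out to a rendezvous host and block until events are pushed to it, so it never opens an inbound port. The host name and framing are hypothetical, not the atProtocol wire format.

```python
# Sketch: outbound-only, event-driven receive (no listening port on the device).
import socket

# The device connects OUT through NAT/firewall to a rendezvous point
# (hypothetical host and port).
conn = socket.create_connection(("rendezvous.example.net", 1024))

# Block until the far side pushes an event, instead of polling on a timer.
while True:
    event = conn.recv(4096)
    if not event:
        break  # connection closed by the rendezvous
    print("received event:", event.decode(errors="replace"))
```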
How do you see the current landscape of data security and privacy in the technology industry?
I don’t think enough people are thinking about it. We’re still in this world of grab as much data as you can and then run away with it. Everyone seems to think that data is the new oil, and I agree. However, like oil, it’s messy: it gets really sticky, it gets everywhere, and it can cause you a whole bunch of problems.
Our approach is different. We aim to keep the data secure and inaccessible to anyone other than the data owner and whomever they choose to share it with. This is where the future should be heading. When using our technology, it is mathematically impossible for us or anyone else to access your data, which is quite unusual.
This means you can develop an application that works for your customers, while we and the application infrastructure itself won’t be able to see the data. That’s what we offer. Unfortunately, many application developers are unaware of the risks they expose themselves to.
For example, some developers use Firebase, which is a Google database, for their applications. It’s great for programming, but the issue is that Firebase manages all the customer data in the cloud, and the developers can actually see that data. This puts them at risk, especially considering the frequent breaches that occur.
Wouldn’t it be better to have a system like ours, where your customers can communicate with others, and the database is inherently encrypted by our SDK? Only authorized individuals can access it. For anyone else, including the application developer, it’s just a meaningless string of 1s and 0s. As the developers of SSH No Ports, we can’t see your SSH keys. It’s mathematically proven that we cannot access them because we don’t possess them.
What impact does the elimination of network attack surfaces, static IPs, VPNs, and firewalls have on the overall security posture of IoT deployments?
Eliminating network attack surfaces, static IPs, VPNs, and firewalls has a significant impact on the overall security posture of IoT deployments. It greatly improves security in multiple ways, and it should be the approach everyone adopts.
Having IoT devices with no network attack surface makes deployment easier and more efficient. Customers no longer need to worry about those complexities; they can deploy devices without change control or waiting for ports to be opened. Traditionally, this process could take around nine months, but with our technology it is reduced to minutes.
With our technology, all connectivity is outbound, eliminating the need for inbound connections and DMZ configurations. This means that the security measures and network changes typically required can be bypassed, allowing for near-instant deployment and reducing the overall time and effort involved. It combines improved security with enhanced velocity, offering a new philosophy for IoT deployments.
We often face skepticism and disbelief from people who think what we’re doing is impossible. Overcoming these barriers and demonstrating the feasibility of our approach is both challenging and rewarding. It may seem impossible at first glance—how can you communicate securely behind a firewall with no open ports? But it’s not a trick; it’s simply a different way of approaching the problem.
How can organizations strike a balance between data security and privacy on the one hand and leveraging data for innovation and business growth on the other?
Great question. Organizations should prioritize data and privacy considerations. It’s crucial to put customers in control of their data as much as possible. By letting customers know that they have ownership and control over their data, it compels companies to provide better services. This builds trust with customers and challenges the traditional approach of data collection and utilization that many companies adopt.
One industry that particularly suffers from this traditional approach is the insurance industry. They collect data and run with it, but if customers are at the center and aware that their data is the golden record, insurance companies can access more up-to-date and accurate data. For example, when customers change their address or car, they often have to contact multiple companies to update the information. It would be more convenient if customers could update their data in one place and have it automatically propagate to all relevant companies. This not only benefits customers with better service but also provides insurance companies with more reliable and current data. It’s a win-win situation.
This shift towards customer-controlled data is the direction in which the world is moving, and it’s why we embarked on our mission. It opens up various possibilities for improved services. For instance, if I share my data with my favorite coffee company, my personal AI could suggest that it’s time for a coffee and ask if I’d like one. If I agree, it can negotiate with the local coffee company, such as Starbucks in my case, and handle the transaction. It knows my preferences and can even tell me to skip the drive-thru line if it knows my location. It’s a whole new level of personalized experience enabled by trust, data transparency, and customer data ownership. That’s the world we’re heading toward, where we’re not just learning the importance of data but actively leveraging it. It’s a game-changing idea that the industry is embracing, with some already making strides to enhance experiences through technology.