
Latacora and Tailscale: A conversation on compliance

May 17 2022
Laurens Van Houtven, Avery Pennarun & David Anderson

When Tailscale started working toward SOC 2, we started to ask some fundamental questions about growing and continually improving our security posture. This led us to partner with Latacora, a security firm that specializes in building information security practices for startups.

With Tailscale’s SOC 2 certification complete, Tailscale’s Avery Pennarun and Dave Anderson sat down with Laurens Van Houtven (“LVH” to his friends), co-founder of Latacora, to talk about Latacora’s experience and business model, what Tailscale does in terms of its own security, and why it was important to Tailscale to hire a partner to lend a hand.

The interview below has been edited for clarity. If you’ve met any of us, let alone all three, you know what that took!

Tailscale’s blind spots and the origins of Latacora

Avery (Tailscale): Thanks for getting together today. I think the story of how Tailscale and Latacora work together is something our blog readers will find interesting. As I’ve told Dave a few times, I really believe Tailscale needs to have security at the 100th percentile of all our customers, because we can’t ever be the weakest link in their security. That’s a pretty terrifying prospect: every engineer knows you should never aim to be the 100th percentile of anything. But we need to achieve it anyway. With that goal in mind, Dave was a big proponent of Latacora. We didn’t just want someone to do a one-time security audit to throw over the wall, right?

Dave (Tailscale): I like to think that Tailscale is more security-conscious than your average startup. But even though I work on improving our security, I know for a fact that I have massive blind spots, and the problem is I don’t know what those blind spots are, because I haven’t spent a career figuring that out.

There are some places where I feel very comfortable. But then there’s this whole fog of war around me. You know, maybe there’s a horrible monster two feet away from me and I just have no awareness of it.

At Tailscale, we were talking about getting SOC 2 and other certifications, and I was trying to figure out how much “adult help” we needed for that. Because we could approach SOC 2 in a couple different ways. We could treat it as an inconvenience and hurdle, or we could treat it as an opportunity to learn, improve and get us where we want to go.

Anyway, I came across Latacora’s article “SOC 2 Starting Seven,” and my conclusion was, “Wow, these people are smart. We should ask them about other stuff and pay them for it.”

LVH (Latacora): That blog post is hugely popular. In summary, it gives people a bunch of low-hanging-fruit security things that will help them for SOC 2. They’re things they can go do that they don’t need us for and can do on a shoestring budget, but that will help start their budding security practice.

We started Latacora because we wanted to work with startups. The founders had a variety of different security experiences, either operating a pentesting shop via Matasano, or in my case, operating a managed security services provider. Most of those companies are built around big enterprise accounts, because that’s where the money is. It also turns out to be where the successful outcomes for clients are. Startups usually don’t get the same bang for their buck when it comes to traditional security services.

Most security people have their horror stories about clients, particularly startups, and Latacora’s founders are no exception. One time, one of us found a cross-site scripting (XSS) exploit in a staging environment specifically commissioned for the test. The company promised the test database would be decommissioned after the test. But a year later, the record containing the exploit payload somehow made it from the test instance into the production database.

My personal favorite story is about a client that rented out a physical rack in a data center with a bunch of servers and appliances. One of those devices was a web application firewall (WAF), which is critically important for their compliance goals, supposedly. We went to check out the rack in person and discovered that the WAF was plugged into power — and nothing else. I’m not a scientician, but the traditional way to deploy a WAF involves at least one network cable. But hey! They had the WAF! The LED was green! The auditor signed off on it!

Engineers who hear those stories usually have a good laugh at that company’s expense. But there’s another way to look at it: Someone took their money, but did not fix that client’s security problem. Granted, a lot of customers in that segment of traditional security services treat them as a simple business transaction; they’re not in it for a security outcome, per se. They have a contractual obligation to get a pentest, so they get a pentest. They’re spending $25k today to close $250k tomorrow. I get it! You should take that deal any day. They think about it the same way they think about business insurance. It’s a cost of doing business.

Still, it was pretty clear to us that the industry was not consistently producing good security outcomes for most startups. But occasionally there’d be a counterexample, where you couldn’t help but notice that some shop was knocking it out of the park. Every time we went to examine that exception, what we’d find is one person who owned the security program, soup to nuts. It sounds so obvious it’s almost trivial, but you need someone to actually run the security program. You can’t just mimic the externals, such as getting a pentest, without the internals of a comprehensive program and expect the same results.

Dave: We once had a customer who wanted us to get a one-time pentest and give them the report. And so I was agitating for, you know… let’s not just hire someone to do a one-time test and plop out a report. Let’s take this as an opportunity to actually assess where we are, where we want to be, how to get there, and at the same time get the report this customer wanted.

As Avery mentioned, we have to be at the 100th percentile of all of our customers’ security. Tailscale is not there yet, not for everyone, not on its own, and our most cautious customers have to use Tailscale in combination with several other tools to build a stronger story. Where we are now is in a maturing process. I like to think we’ve caught all the major scary things, and now we’re slowly building the wall up higher and higher to get to that 100th percentile goal.

The long-term vision for Tailscale as a product is — at the risk of using a buzzword — zero trust security. All the fancy things like device posture checking and authorization based on contextual clues, beyond just whether you had the correct password.

But, ironically, we can’t use our own product to achieve zero trust in our own operations, because that creates an unfortunate circular dependency. As tempting as it is to use our product to debug why our product is broken, it would be a terrible idea. I have tried this before in other jobs, and it never works. So we went looking for some outside help.

The joy of security architecture reviews

Avery: So Dave found Latacora and wanted to work with you. When companies come to you for the first time, how do you start?

LVH: The way we start engagements is a security architecture review. Most of that is interview-driven, though tooling-supported. We ask questions around what you are doing in a bunch of different areas. The right solutions depend on the customer, and people are often surprised at what we tell them is important or not.

For example, let’s say the client has AWS access keys in plaintext in their home directory, and virtually all of their employees have, you know, wildcard AWS permissions. And they all share one AWS account. That’s a nightmare scenario.

And let’s say they’re also running a React app. Plus a bunch of AI/ML stuff. So their dependency tree is more of a dependency kudzu forest. It’s a matter of time before there’s a compromised package somewhere in that dependency tree, right?

With the common security rules-as-written: That’s a second-order vulnerability! So that’s “informational”-level severity. Nobody takes it seriously. Something else would have to go wrong for this to be bad. So fixing it is technically a hardening measure, which is (supposedly) always secondary to a direct vulnerability.

By comparison, we’ve found internal admin dashboards that have been sort of cobbled together with garbage code: Every endpoint is CSRFable and every other feature is an SQL injection vulnerability in disguise. And sure, I’d like you to fix the vulnerabilities and downscope the access your support folks have, but that might be a massive undertaking. What the client might need to hear instead is, “You know what, put it on a separate domain behind an application load balancer (or Tailscale), and that’s good enough, because then it’s no longer your biggest problem.” It’s still gross — but at least I know staff are signing into it with SSO and WebAuthn and I know you can revoke access when you need to. Plus, it’s on a separate domain so suddenly I care a lot less about that CSRF. We’ll fix those other bugs down the line. The vulnerability might have been some OWASP Top 10 thing, but the best remediation answer is not necessarily a straightforward “go fix the bug.”

Somehow that admin dashboard with all the bugs, people always take that seriously. But people think the admin credentials in your home directory are a lower priority, because they haven’t been exploited yet; it’s not “technically” an exploit. We can help them discover and fix these problems they find surprising.

That’s why I like the security architecture review. A lot of what we talk about is internal access. You’ve got the support team who accesses this dashboard, this group that accesses this, and that group that accesses that.

And invariably you’ve got, say, the nice, tightly locked-down Postgres RDS instance. And then you have the data lake, which is exactly like the production database, including all the data, but minus all the access control. And you’ve got people accessing that from their developer machines. Or, you know, their gaming rig with the nice GPU. Oops.

Our whole process is trying to identify where the gaps are and move you up the maturity model. When it comes to a specific customer, sometimes we tell people to use Teleport because it’s very important that they have audit logs and things. Tailscale is a networking tool; Teleport is an audited access tool. Sometimes you need Teleport, and sometimes Teleport is too much of a hassle and you really just want Tailscale. We tell a lot of customers to just do, say, AWS SSM — not because it’s perfect, but because it’s the thing they’re going to deploy today that will make their situation materially better.

Anyway, for some customers, sometimes we very strongly recommend Tailscale. It’s not a universal recommendation, but I have literally verbatim said to a client, “You are either going to use Tailscale or you are going to screw this up,” and I meant it, and I fully expect to say that again someday soon.

The longer you know someone

Avery: One thing I like about Latacora is your depth of experience. You work with a lot of customers having all kinds of problems with operations, but also with app design. We wanted your advice as we design the product as well as with our internal networks.

LVH: As we work with clients longer and longer, we’ve noticed that, at first, many of the questions they come to us with are kind of trivial. Like, “Hey, I’m going to change an OpenSSL configuration. Is this okay?”

But as we progress in the relationship, the questions become more of a security pretext for getting us in the room. And the questions they’re really asking are, for example, product design questions that have little to do with security. Like, “Do you know anyone who uses AWS Cognito in this way and is happy with it?” There’s the veneer of a security question there. But what they really want is for us to recommend a way people can sign into their app that doesn’t suck. So I think that one really valuable thing about long term relationships is that it’s long term relationships all around.

Dave: That reminds me of a time that I asked you an obscure cryptography question about key reuse across protocols. The question I was actually asking was, “We have this new thing we want to launch. How do we do that gracefully without annoying our users with forced key rotation, reauthentication, and all that?” The cryptography was kind of a pretext for, can I just short circuit that entire UX problem by doing something tricky, and is it safe if I do that?

LVH: Right. I thought about that question for a while, wrote some stuff down on paper, then reached out to Filippo Valsorda, who’s a well-known cryptographer, to get a second set of eyes on it.

Now imagine if Latacora were completely transactional, as in, you pay me $200 an hour and I answer your questions. Then you’d want us to just get that question out of the way as quickly as possible and move on to the next. But if we’re in a longer-term engagement, we can afford to build up that kind of expertise — both in-house and via a network of experts. We have the ability to escalate to the right person in the community, who we’re sure will know the right answer or at least check our work — that’s worth a lot.

Avery: When I read your website, it says: We will help you. We will be your security team until you hire a security team. That would suggest that an engagement gets lighter and lighter over time. Does that actually happen? Because I don’t feel like we’re relying on Latacora less and less.

LVH: The initial pitch for Latacora was, “We’ll help bootstrap your security practices. We’ll get in, get it done, and get out.” But that approach made our clients anxious, because a year later they would see how deep their security rabbit hole is and start thinking, “Oh no, these people are going to leave any minute now, and I’m no closer to having a security team. This is terrifying.”

So it turns out the right length for our engagements is way longer than we thought it was going to be. The reason we originally framed it as being capped is that we wanted to keep ourselves honest. If you’re in a long-term engagement, it’s easy to make a client dependent on you in all sorts of ways. “Hey, we’re your IT provider now! Buy all of your per-device licenses through us and we’ll give you a 20% discount.” But as a consequence, the company becomes basically impossible to get rid of. We really didn’t want to create that dynamic.

Still, our engagements are way longer than we thought. I think our median is 3.5 years now. That’s nuts, because we’ve worked with recruiting firms to help companies hire, say, a director of security who is going to take over from us at some point. Some companies now expect that search to take longer than the tenure of that person!

As time goes on, we’ve found it’s better to start with a couple specific areas of expertise where a client starts to outgrow our capacity. We have a conversation with the client and say, “Listen, your staff is up to 50 PRs a week that require security review. We don’t want to train them to request less security review — and they actually do need security review, because your client code is built on some insecure framework. So maybe you should hire someone who can clean up the framework as a strategic remediation, and also someone who can do more reviews to address the capacity issue.”

We recently ended an engagement with a client who was 4.5 years in. They had hired something like seven security people. At that point, our relationship was basically that there’s a handful of areas where it’s just cheaper for us to keep operating some of the machinery. We, Latacora, build and run a lot of tools. We already know all of them. The replacement cost of Latacora becomes, “Oh my God, I need to buy eight different products, and then who’s going to staff them? Forget it, no, I’m gonna keep Latacora around.”

Another thing that happens late in an engagement is we back off to become, say, tier 3 escalation, just because we’re bigger and we’re going to have more variety of experience. Like Dave’s cryptography question above, right? It doesn’t make sense for any of our clients to hire a cryptographer. But the 10 minutes that you needed a cryptographer’s time, you can’t replace that with any number of hours of somebody else’s time. You really just need the cryptographer, even if it’s only for 10 minutes. (Most of our clients’ cryptography questions aren’t anywhere near as involved, though!)

So at some point as an engagement matures, we turn into a kind of… bucket of optionality on experts. After the first year of hand-holding, what we’re really selling is access to a network of experts. Most of those are internal to Latacora, but maybe you need something really specific, and then we turn to our Rolodex of third-party security experts.

Plus, we can afford to go do a lot of preparatory research that helps all our clients. For example, we’ve had clients where they’re like, “Oh my God, we lost an enterprise deal because we don’t have DLP. First off, what’s DLP?” The answer is, “Well, that’s a three-letter acronym, and it can mean anything from a mobile device management (MDM) configuration to, say, installing Blue Coat to man-in-the-middle everyone’s TLS connections, to AWS Macie. It’s a wide array of options.” What we can do is explain: “At Latacora we’ve got this grand unified field theory of DLP. I can walk you through it all, or I could just tell you what the answer is in this case,” based on the fact that I know a ton about that client. I know if they’re on AWS and Google Workspace, I know what their appetite is for implementing systems versus shrink-wrapped software, and I know what their deal sizes are and what their customer profile right now is (and who sales would like to sell to next year).

Is Tailscale different?

Avery: I feel like Tailscale is a bit unusual in the security world — even as a startup, several of us are reasonably knowledgeable in security. I heard Latacora does think of us a little differently than your average client. And I’m curious to hear the similarities and differences with your other engagements.

LVH: You’re certainly not the only such client, but security is far more front-and-center for you than for our median client.

The parts that are similar are the parts that one might not traditionally associate with security. For example, mobile device management. I don’t get the impression that Tailscale is materially different in their understanding of and need for MDM inside the company. Maybe you’ve thought about it more and been a bit more cautious, where other clients would have said, “Great, what’s the default? Install it everywhere.” But in the end, Tailscale will pick an MDM and install it everywhere.

Similarly, say, email group configuration. You’ll do roughly the same things and they’re mostly going to be good, right? And there will be blind spots; there always are. Look at Google Groups: Is it a floor wax or a dessert topping? Is it a mailing list manager or an access control tool? Well, it depends which product-manager-shaped-void you’re asking. So we see it all the time: Self-service access to some Google Group results in a password reset vulnerability for a critical system, and just like that, game over.

That bug doesn’t feel very glorious, right? Glory is in the ninja alien space hacker wizard vulnerabilities. One of my favorite bugs that we’ve found at Latacora was TLS 1.3 session resumption, meets SSRF, meets protocol smuggling. It was like half a dozen bugs that came together, and the punchline was the user could make the requests say whatever they wanted.

But most of the day-to-day security problems at a company aren’t like that. They’re more like the trivial Google Groups thing.

The way Tailscale is unusual is how front-and-center you’ve made security to your business. It’s the 100th percentile thing you mentioned. It’s the explicit acceptance of that risk and that responsibility. Even when other startups are in very similar positions, the degree to which they take on that responsibility with appropriate gravitas can be very different.

It’s like this. I know plenty of companies where there’s absolutely nobody who would say the sentence that Dave said earlier: “I know that I have blind spots.” They would never admit to that, and frankly, their blind spots are significantly larger than Dave’s. I think that also speaks to Tailscale’s engineering culture. There’s very little ego. It’s okay not to know something and say that out loud.

Avery: I find that both reassuring and terrifying, depending on your point of view. One time our engineering lead, Denton, told me I was one of the most careful engineers he knows. And I just froze. Like, “Oh no. I have this endless list of things I meant to clean up and never got around to it. What the heck is everybody else doing?”

LVH: You have a list, though!

I saw some code today, a React application, where somebody had replaced core component rendering code with something called dangerouslySetInnerHTML, which just bypassed all the HTML escaping. So it’s not like this app has a single cross-site scripting (XSS) vulnerability. They enabled XSS! Globally! As a feature! And it turned out they did this because some people needed to be able to make some parts bold in one part of the app. Next thing you know, every single field has trivial XSS.
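That pattern is easy to demonstrate outside React, too. Here’s a minimal Python sketch (hypothetical helpers, not the app in question) of escaping by default versus raw interpolation:

```python
import html

def render_safe(user_input: str) -> str:
    # Escape by default: the browser renders text, not markup.
    return f"<div>{html.escape(user_input)}</div>"

def render_dangerous(user_input: str) -> str:
    # The raw-interpolation equivalent of bypassing the framework's
    # escaping: it makes <b>bold</b> work, and every script tag with it.
    return f"<div>{user_input}</div>"

payload = "<img src=x onerror=alert(1)>"
print(render_safe(payload))       # markup neutralized
print(render_dangerous(payload))  # XSS fires in the browser
```

The safe version costs one function call; the dangerous one turns every input field into an injection point.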

Compared to that, Tailscale is excruciatingly careful. And Tailscale is simultaneously simple. Sophisticated, but simple. Like, you know, Taildrop is Taildrop. It’s not a dozen other things. It’s isolated.

There are many parts where Tailscale is very carefully designed to prevent misuse. Tailscale the company can’t peek inside my tunnel. Things are carefully designed for defense in depth. There are clearly multiple layers, and localized failures don’t cause global catastrophe very easily. One example is a particular hosting vendor that you use for the DERP network in some regions because they are significantly cheaper. Even if it turned out they have a breach, even if we thought that they are less trustworthy than a big-name provider, it doesn’t matter, because we’re not really trusting them. We’re trusting them to move some encrypted packets around. If they fail at moving encrypted packets around, then the endpoints are going to realize that, and it doesn’t actually matter. You’ve thought through all this. You’re careful.

In which we ask “How is Tailscale doing?” and get a candid response

Avery: So, it sounds like you’re saying we’re more careful about security than other companies our size. What typical size company do you think we’re more comparable to?

LVH: That really varies a lot. There are business-to-consumer (B2C) companies, and I’m not saying that nobody there cares about security, but you know, the worst thing that could happen is somebody accidentally charges a Stripe tokenized card for $10 and somebody needs to rotate a credit card, right? Their sales don’t involve contract review. The paperwork bar is extremely low, often no SOC 2 or vendor security checklists. There’s much less bar to clear. That’s not a dig: That can be a huge slog, more power to them for not having to deal with it.

If there’s one thing that I’ve learned doing Latacora, it’s that companies can grow to a terrifying size without having any security function to speak of. Not even the externals, like getting a pentest. Nada. Zilch!

Whereas there are other consumer companies, say shared document systems, where if you pop them you get a bunch of shared passwords — ask me how I know! — and they need to have a very high bar right from the get-go.

Avery: We’ve talked about internal corporate security, and Tailscale application security. And okay, we’re careful. But this post isn’t controversial enough. It’s okay to criticize us. Hacker News people love that.

LVH: There’s really only one thing that comes to mind that I think Tailscale unambiguously screwed the pooch on. And it’s the feature where by default, anyone who joins from the same domain automatically gets added to the same tailnet, unless that domain is explicitly carved out (gmail.com, for example).

The reason it comes to mind is because my first reaction was, “No, I must have misunderstood. There’s no way they did that.” And my second-order reaction was, “Oh, it turns out knowing which domains are public email providers is, uh, let’s say a non-trivial exercise.” There are some ways to do it, but it’s fundamentally an unsolved problem. We will eventually discover some random company in Germany we’ve never heard of that also turns out to be a hugely popular email provider.

So that ends up as a really interesting example of a usability-versus-security tradeoff, because the way to do this securely is something like ACME-DNS challenges, and that’s a lot harder for users to pull off.

Avery: There’s a story behind how that happened. It’s much dumber than actually having thought about the choice between usability and security. Back in 2019, we only had one customer, our first customer, and they had multiple users on one domain. So we hardcoded that domain, and then everybody else was gmail.com. And so we hardcoded gmail.com to be treated as a separate domain per user instead of one big domain. But then it turned out you could sign up for Google SSO using non-gmail addresses, and we assumed if you did that, it was because you were using Google Workspace, which meant you had a big company that should all be in the same tailnet. But no! It turns out you can sign up at one of these various public email providers, then attach that email address to Google SSO, then log into Tailscale using that, and you’d end up in the same tailnet as other random users on that domain. People don’t like that, for obvious reasons.
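The flawed assumption reduces to a few lines. This is a toy sketch (hypothetical names, not Tailscale’s actual code) of “shared email domain implies shared organization,” with a hardcoded carve-out list:

```python
PUBLIC_EMAIL_DOMAINS = {"gmail.com"}  # the hardcoded carve-out

def tailnet_for(email: str) -> str:
    domain = email.split("@", 1)[1].lower()
    if domain in PUBLIC_EMAIL_DOMAINS:
        # Known consumer provider: one tailnet per user.
        return f"user:{email}"
    # Everything else is assumed to be a company domain, so every user
    # on it shares a tailnet. That's wrong whenever the domain turns out
    # to be a public email provider missing from the carve-out list.
    return f"domain:{domain}"
```

The bug isn’t in the code; it’s that no finite `PUBLIC_EMAIL_DOMAINS` list can ever be complete.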

But ever since then, we’ve been busy handling all our growth. Every few weeks someone says, “Oh, we should really get back to the whole model of how we define a tailnet sometime, because one tailnet per domain is pretty dumb.” But now, it’s two years later, and we haven’t had time. At least we have a plan written out now, right? Something like the Slack model, where you define workspaces separately from identity providers.

It’s important to do that in general, because even with teams at a big company, they don’t necessarily want to share an account with everybody in the whole company just because they use the same domain name. Sometimes they want to share things with their team and buy it on a credit card. And then a different team wants to adopt Tailscale and share things inside their team. And sure, if it gets popular, then you have to eventually solve the problem of a whole company wanting to roll out Tailscale, combine all these networks, and get some top-down control. Eventually, the head of IT or the CISO will hear about it and go, “Okay, can you take these 12 tailnets, merge them together and let me manage them all as one, please — and also I’d like a volume discount.”

Of course, Slack didn’t really get it right either. The fact that I have a separate name and password for every Slack instance that I log into, even with the same email address, is pretty silly. We’re not going to do that. We want your identity to be your identity, but we want your tailnet to be a separate thing from your identity.

I guess maybe it’s a little more like Discord than Slack. The problem is right now your identity is strongly coupled to which tailnet you have access to. That’s the coupling we need to break. There’s your human identity, the one you log in as, and the networks you have access to are based on that. Of course, there’s some extra complexity when you really want separate identities, such as a work identity and a home identity.

LVH: And those two things should be very different and should not automatically elevate from one to the other. Ironically, Slack was safe back in 2015, but later they introduced a version of the shared email provider bug. Well, it’s a slightly different problem, but I’ll tell you what happened.

Someone at Company X wanted to share a Slack channel about an ongoing assessment with someone at Company Y. Company Y doesn’t use Slack. They use Microsoft Teams or something.

Unrelatedly, my friend from Company Y had joined a security nerd backchannel with their corporate email address. They probably should have used their personal address, but whatever.

Anyway, Company X had a Slack channel for an ongoing security assessment. They wanted to share it with my friend from Company Y, where they don’t use Slack, but Company X didn’t know that, so they sent a Slack invite to his Company Y email. And Slack decided to helpfully connect the dots and go, “Well, this person’s Company Y address is on this backchannel Slack, so clearly this Slack is Company Y’s Slack! So let’s share the channel with Latacora’s Slack instance.” And because I happen to be an administrator on the backchannel Slack, I got access to that private channel’s full chat history, including a discussion of some serious vulnerabilities in progress. Oops!

Dave: I’ll go ahead and make a note to not do that when we’re revamping Tailscale’s identity model. We’ll be paying attention because we’re very familiar with that failure mode.

Avery: Anyway, the plan is to decouple the existence of a tailnet from its identity provider. Right now they’re tied together.

But what about SOC 2?

Avery: Coming back full circle to SOC 2. Short term, does Tailscale achieving SOC 2 compliance really affect our security posture, or is it more of a checklist item?

LVH: I think hacker types have a tendency to dismiss auditors: “I’ve spoken to these people. It would be an exaggeration to say that they understand how computers work. Why should I take anything that they say seriously?” And auditors have a similar tendency to dismiss hacker types: “I can’t even get these people to write down what they do, let alone reason about it. They’re flying by the seat of their pants. Why should I take anything they say seriously?”

I’m exaggerating a little; I would hope that there’s a bit more mutual respect than that. But it’s common. And part of the problem is that they’re both often sort of right.

Now the good news is that it is what you make of it. Is there a way for you to lowball SOC 2? Yes, to a point. You can scope down and get the auditor to accept a bunch of exceptions, and then poorly document them and hope nobody reads the report carefully enough to notice.

The objective standards to which SOC 2 audits hold you are pretty mild. The same is somewhat true (though much less so) for, say, ISO 27001: it’s a checklist. I really like the word “checklist,” because depending on context, technologists either use it derisively or with tremendous respect. If you say “checklist” in the context of an auditor, technologists think, “meaningless paperwork, divorced from reality.” But if you say “checklist” in the context of airplanes, then they immediately think, “Oh, yes, this is the thing that serious professionals do to guarantee that you won’t screw up.”

The thing with SOC 2 and standards like it is that you can do the checklists either way. You can have a SOC 2 that isn’t worth the paper it’s written on. It’s a cynical worldview, which turns out to work quite well in many situations, where customers want you to send them your SOC 2 paperwork and then they never even look at it.

A SOC 2 Type II covers compliance over an amount of time. Up until a couple of years ago, the Type II audit period was invariably a year. It was so common that I actually thought it was mandatory. And now it turns out that it could be nine months, and next thing you know it’s six, and now I’m even hearing as little as three. I don’t really know why that is — we’ve never had a client lose business because they had a Type I and were in the observation period for a Type II. I think this is somewhat driven by some of the new breed of SOC 2 startups (it’s certainly contemporaneous), though I don’t quite understand why.

So if you have a cynical mindset of how SOC 2 works, then you download a bunch of nonsense policy boilerplate that nobody reads, and you write your policies in a way where it’s pretty much impossible to prove or disprove compliance.

For a technical example, you write your encryption policy. You say, “We will use the best available encryption methods, based on a combination of factors, including strength, convenience, and availability.” Okay, auditor, go ahead and prove that we violated that policy. Or, go ahead and prove that we didn’t violate it.

You can start with that cynical view, and you will get a cynical outcome. But it is what you make of it. You can treat it as an obstacle to closing a business deal, or you can treat it as a prompt. When the auditor asks you a question, you can say, “Well, that’s a good question. What should our baseline be?” And SOC 2 will suggest something, but it’s a good time to think about whether our bar should be set higher than that, or whether it’s sufficient where it is.

Dave: That’s certainly what I’m learning with Tailscale. A lot of SOC 2 sounds like meaningless checklists: “Prove to me that you have separate logins and two-factor on AWS.” Okay, well, it seems a bit silly that you’re going to take screenshots of that, but whatever, fine.

But it did force us to actually think holistically about our access control and whether it all hangs together in a sensible way. To commit to showing someone that we have actually thought about it, and this is how it works — that means we take fewer shortcuts.

A final thought

Avery: Let’s end on a self-serving question. Before we ever hired Latacora, I heard through the grapevine that you were telling your customers something like, “If you want to do security right, you should use Tailscale.” Do you have any reflections on that?

LVH: Can, have, and will do again (to be clear: ages before Tailscale became a customer!), though it’s not a blanket recommendation. For some customers it’s Teleport, for some customers it’s Tailscale. For some customers, it’s AWS SSM.

Some customers just have internal users who are all using, say, an internal dashboard. Maybe the right answer for them is an application load balancer. They can put it on a different domain and never think about it again.

But others have a gazillion internal services, and for them, Tailscale makes a lot of sense compared to whatever they were doing before.

All that said, there are definitely clients where I have spoken the words, “I can go into detail about what to do, but the punchline is you are either going to use Tailscale or you are going to mess this up.”
