TLS 1.3: Will Your Network Monitoring Go Blind?
Webinar + Transcript: How to Maintain Visibility with TLS 1.3
Tyson Supasatit
May 11, 2018
The Internet Engineering Task Force (IETF) recently approved the TLS 1.3 specification after several years of work and 28 drafts. While this is undoubtedly a great improvement for security, the TLS 1.3 standard isn't without controversy. That's because TLS 1.3 mandates the use of Perfect Forward Secrecy (PFS) ciphers, which essentially blind passive network analysis appliances such as those used for performance monitoring, IDS, and DLP.
In the following webinar, Matt Cauthorn, VP of Cyber Security Engineering, and I talk about what's next for TLS 1.3 and the implications of PFS ciphers for enterprise visibility. Here are the highlights of the webinar:
- Trends in encryption on the web and inside data centers (including an audience poll!)
- Threats to SSL/TLS that influenced the development of TLS 1.3
- What Perfect Forward Secrecy is and how it affects passive network analytics
- ExtraHop's novel solution for decrypting PFS-protected traffic
- A case study showing how one ExtraHop customer uses PFS decryption to maintain visibility
- How ExtraHop's SSL envelope analysis makes auditing your environment easy
Forgot your popcorn? Here's the edited webinar transcript for those who'd rather read than watch:
Tyson Supasatit: Welcome, everybody. My name is Tyson Supasatit. I'm a Product Marketing Manager here at ExtraHop. Today we're going to talk about TLS 1.3 and what it means for data center visibility.
As you may have seen, the Internet Engineering Task Force recently approved the TLS 1.3 specification after several years of work. This has been a great improvement for security, but it's not been without controversy. Matt, can you introduce yourself and give the audience a little bit of an idea about what we're going to cover today?
Matt Cauthorn: Sure. Hi, there. My name's Matt Cauthorn. I'm Vice President of the Cyber Pre‑Sales Team here at ExtraHop. It's very interesting. This is quite a big development that has come with quite a bit of drama, especially in the last year or so, maybe a little bit more, as it relates to enterprise visibility, inside the data center walls in particular.
What we're going to cover is a little bit of the benefits of the TLS 1.3 spec and some of the challenges that come along with it, in particular as it relates to visibility, operational intelligence, and situational awareness inside the data center.
Tyson: Great. First off, we're just going to talk a little bit about why encryption matters. Why is it important? It seems like every month there's some new vendor that's requiring stronger encryption. There's regulatory pressure from bodies like PCI.
Encryption matters because you want confidentiality and integrity of your data. Also, it engenders user trust, especially on the Internet. Frankly, there are increasing cyber threats, so encryption is one way to address those. Matt, do you have anything to add here?
Matt: Not too much. This is fairly well‑covered ground. I think it can be very safely assumed that we're going to see more and more encryption. Inside the data center, I don't know that it's going to be completely pervasive. I think it'll be mostly pervasive, though, across a broad swath of protocols, not just web servers and web browsers that consume those services.
If implemented correctly, encryption prevents passive tampering, sniffing and man‑in‑the‑middle, which is overall, a good thing. But again, not without its challenges.
Tyson: Matt, do you have any thoughts on what causes organizations to hold back from encrypting data traffic inside the data center?
Matt: Yeah, I do. This may or may not be very familiar territory for the folks on the call out there, but just as a practical matter, deploying encryption pervasively, especially with protocols that go beyond HTTPS or perhaps LDAPS, gets to be a little tricky.
What we see in the field is very strong uptake for database transports that have been encrypted via various schemes. Other protocols don't necessarily lend themselves to encryption, or the deployments are unclear, undocumented, or poorly implemented in the first place. As I said before, I'm not sure that encryption inside the data center will be ubiquitous, covering everything, but it will be very pervasive, and that presents visibility challenges.
Tyson: Great. Actually, I was looking for data on encryption inside the data center, and for this kind of internal traffic, there's not a lot of good research out there. I figured, let's use this webinar to gather some real‑time feedback. I've opened a poll in the BrightTALK console. I'm going to give you guys about 30 seconds to answer the question: "What amount of your internal data center traffic is encrypted?" You may not know for sure; just give it your best guess. We have more than 75 people on the line, so I'd like to gather some real‑time insight here.
[poll results]
Tyson: This is great information. Thanks, everybody! We're going to move on. The webinar is about TLS 1.3. Before we get there, Matt, can you talk a little bit about SSL/TLS vulnerabilities and how that spurred the development of 1.3?
Matt: I think there were several big tectonic events that influenced the trajectory of TLS 1.3 and the urgency to come up with some more robust treatment of encrypted data. Heartbleed was particularly nasty. It's hard to rate these things, but Heartbleed was really nasty because you could read arbitrary memory space and get passwords and things like that. This was really a stark revelation that the actual security mechanism itself was vulnerable to exploit and was, in fact, the first‑class vector for bad actors.
Then Logjam, POODLE, and FREAK added kindling to the fire, and it really stoked a sense of urgency. It made us collectively, as an industry, evaluate our approach and what some of the common vulnerability vectors look like; protocol downgrade, for example.
Tyson: Thanks for that, Matt. That leads us to Perfect Forward Secrecy. These types of ciphers have been around for a while. Matt, can you talk about how PFS works and how it addresses the weaknesses that were evident in the vulnerabilities we just talked about?
Matt: Sure. I'm sure almost all of you know that with classic RSA key exchange, if I have the private key for the server, whether I compromise that server or I'm an internal actor with access to that private key in any way, then I can retroactively decrypt the traffic that I've captured.
If I have access to that key and I've captured packets (this is the key point), then I can retroactively decrypt those captured packets. It turns out that this was a devastating problem at the time, and it resulted in some very, very sensitive data being shared in not so beneficial ways, frankly.
The danger of this was made evident when the Snowden leaks came out and we realized collectively that people were capturing raw packet data, getting the keys, actually decrypting that data after the fact and getting at the totality of the transactions, including all of the grittiest details.
What perfect forward secrecy does is prevent that retroactive decryption; that's the number one benefit. If I have packets that I've written to disk that were encrypted under a PFS key exchange such as ephemeral elliptic curve Diffie-Hellman, I can't retroactively decrypt someone's session.
The net effect is that with PFS, and TLS 1.3 in particular, we've got a very lean and mean mechanism. It's quite fast and quite efficient at encrypting conversations at a session level, and it prevents that retroactive tampering or inspection.
Tyson: Right, right. With PFS, the key is just for this session. It's ephemeral because once the session is done, the keys kind of evaporate, right?
Matt: That's right.
Tyson: Even if somebody could get that key, they wouldn't be able to decrypt the other sessions that are adjacent to it.
Matt: That's right. Thank you. That's a key point. Even if they did get the shared secret that you and I negotiated, if they intercepted someone else's call and they didn't have the key, they couldn't get their session decrypted. They wouldn't be privy to the information in that other conversation. Now, contrast that to a server‑based RSA key where if I have that one key, I can decrypt everything.
Everything that I capture that was destined to that server, I can then decrypt.
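To make the ephemeral-key idea concrete, here's a toy Diffie-Hellman sketch in Python. The parameters are deliberately tiny and insecure (real TLS 1.3 uses ECDHE over vetted groups); the point is only that each session generates fresh secrets, so compromising one session's key reveals nothing about any other session.

```python
import secrets

# Toy finite-field Diffie-Hellman, for illustration only.
# P and G are hypothetical small parameters, NOT secure choices.
P = 0xFFFFFFFB  # a small prime (2**32 - 5)
G = 5

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for one session."""
    priv = secrets.randbelow(P - 2) + 1  # new secret every session
    pub = pow(G, priv, P)
    return priv, pub

def session_secret(my_priv, their_pub):
    """Both sides derive the same shared secret for this session only."""
    return pow(their_pub, my_priv, P)

# Two separate sessions between the same client and server:
secrets_seen = []
for _ in range(2):
    c_priv, c_pub = ephemeral_keypair()  # client side
    s_priv, s_pub = ephemeral_keypair()  # server side
    shared = session_secret(c_priv, s_pub)
    assert shared == session_secret(s_priv, c_pub)  # both sides agree
    secrets_seen.append(shared)
# The keys "evaporate" with the session: each run negotiates its own secret.
```

Because the private values are discarded after the handshake, a captured packet trace alone is never enough to recover the session secret later, which is exactly the retroactive-decryption protection Matt describes.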
Tyson: Thank you for explaining PFS. The good news is that PFS is mandated; it is required in TLS version 1.3. That's the good news, right? Everybody's communications are going to be more secure. The bad news is that strong encryption with ephemeral keys breaks out-of-band, passive monitoring solutions such as those used for IDS, DLP, and performance monitoring.
Matt: Yes, it does.
Tyson: Matt, there was some controversy in the 1.3 approval process. What were some concerns from enterprises?
Matt: There were serious concerns from enterprises. This is not me, by the way, strongly endorsing one mechanism over the other. I think that as a technologist, for every given technology decision, there are pluses and minuses. One of the big minuses with TLS 1.3, and with Perfect Forward Secrecy in general, is that now we have customers with potentially petabytes of raw packet data that have gone opaque. Our CTO puts the issue this way as it relates to packet data: "In the enterprise, we have cataracts today and we're going blind tomorrow."
For troubleshooting or for a security investigation, this means you will see a little bit of the early stages of the handshake at Layer 4, and then, poof, all of the Layer 7 data goes opaque.
This happens to be devastating as it relates to the data center and the analytics inside of the data center for performance and security.
And outside of a handful of individuals who formed a subgroup inside of the IETF to advocate enterprise data center visibility, there hasn't been a lot of discussion on this topic. The emphasis for TLS 1.3 and PFS in general was very, very web browser client‑centric.
Tyson: So now that the spec has been approved after several years of work, now what? Matt, given what you've seen with previous specifications, what kinds of events will push enterprises to support TLS 1.3?
Matt: In most organizations, it's going to be basically server and client software adoption that's going to force the hand. In fact, if I remember correctly, one of the certificate authorities has already announced that, moving forward, they're only going to honor PFS certificates that they issue. We've already started to see that.
We've even seen server software such as that from Microsoft start to default to PFS. In fact, last year we saw this. We'll talk more about this later, but we're starting to see defaults to PFS encryption schemes, which can have surprising effects when you upgrade.
From an industry level, browser vendors and mobile device applications are developing APIs and tool kits that are going to force adoption.
Tyson: Yeah, it could come really fast. It depends which industry you're in. For some organizations, it's happening now. Matt, tell me a little bit about the alternatives. We received a live question, "What you're describing is a concern for my organization. What are the alternatives? What are the things that people can do to deal with this issue?"
Matt: Yeah, that is the question. I think we could almost close down the questions section now based on that one alone, so thank you.
Option number one is endpoint monitoring.
Option number two is to install some sort of man‑in‑the‑middle appliance, which can terminate those PFS connections, do the decryption, and then send that data out to a collection device of some sort.
Now there are some challenges with that. Specifically, because you're breaking the connection, certificate pinning is no longer valid, and OCSP stapling is a challenge or no longer valid. I've heard arguments made that those packets are then no longer valid for some use cases because they've been altered along the way.
Option number three is to approximate East‑West network visibility with some combination of header-visible packet data (just the headers, effectively) or NetFlow, both of which are inadequate for meaningful performance or security analytics, in my opinion.
Tyson: Let's talk about ExtraHop's solution. We've actually developed a novel way of decrypting perfect forward secrecy protected traffic. This solution maintains the integrity of the end‑to‑end encryption. It doesn't do the man‑in‑the‑middle, it's totally passive, out of band.
Matt, I know you're excited about the elegance of this architecture. Can you talk about how it works and how we can get these session keys?
Matt: I'm going to go back to the man‑in‑the‑middle approach as an option here. I am speaking from experience having implemented man‑in‑the‑middle appliances many times in the form of application delivery controllers.
Typically, they're not deployed back in the belly of the beast, so you can't cover enough of an enterprise multi‑tier, n‑tier application delivery stack with man‑in‑the‑middle. If you consider that problem, it leads you to ask, "Well, what if you had a way to share the secret at a session level from the server itself?" ExtraHop's solution is to forward the shared secret for just an individual session from the server itself to our appliance. This is exactly what you're looking at on this diagram. Our forwarder lives on the server and does only one thing: forwarding the session keys via a PFS-protected secure channel. It doesn't do any processing.
We can then decrypt the session data and give you Layer 7 analytics like, for example, HTTP headers, URIs, a particular JSON blob that's been posted, or potentially command-and-control traffic that's initiating outbound from a server. It's very performant and it maintains the end‑to‑end integrity, which is one of the goals of TLS 1.3.
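ExtraHop's forwarder itself is proprietary, but the same per-session secret-sharing idea exists in the open NSS key log format, which TLS libraries such as OpenSSL and Python's ssl module (via SSLContext.keylog_filename) can emit and which passive analyzers like Wireshark consume: one line per session secret, keyed by the session's client random, letting the analyzer decrypt that session and no others. A minimal sketch of parsing such a line (the hex values here are made up):

```python
def parse_keylog_line(line: str):
    """Split one NSS key log line into (label, client_random, secret).

    TLS 1.3 sessions produce labels like CLIENT_TRAFFIC_SECRET_0;
    older TLS versions use the CLIENT_RANDOM label.
    """
    label, client_random_hex, secret_hex = line.split()
    return label, bytes.fromhex(client_random_hex), bytes.fromhex(secret_hex)

# Hypothetical TLS 1.3 key log entry; hex values are illustrative only.
sample = "CLIENT_TRAFFIC_SECRET_0 " + "ab" * 32 + " " + "cd" * 48
label, client_random, secret = parse_keylog_line(sample)
assert label == "CLIENT_TRAFFIC_SECRET_0"
assert len(client_random) == 32  # the handshake's client random
assert len(secret) == 48         # the per-session traffic secret
```

The design point Matt makes holds here too: only the ephemeral session secret leaves the server, never a long-lived private key, so end-to-end encryption on the wire stays intact.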
Tyson: Just to reiterate, when you're sharing the key for the session, that means you're not sharing the key every time there's a transaction. You're just sharing the key once for the entire session, which may be composed of many, many transactions. Is that right?
Matt: Yes, and if the key is updated mid‑session, we'll pick up the update as well. So to your high‑level point, yes.
Tyson: Let's talk about a case study. One of our customers actually uses us to decrypt and analyze PFS-protected traffic. Matt, can you talk about this use case?
Matt: Yes, they've actually been doing this for quite some time now, and it happened because of a Patch Tuesday update that came for Microsoft Windows Server. Now this is a very large .NET shop. The patch configured IIS to advertise and prefer PFS ciphers. If the browser or the remote client supported those ciphers, they would be used to negotiate the keys.
What happened is that 60 percent of their visibility dropped off the map straight away after the update went live. Many of this customer's KPIs were no longer monitored, and some of their security controls went dark as well. What they did was deploy the ExtraHop session key forwarder, the Secret Agent, as we call it. It shared the negotiated secrets with us, they had their decryption back, and everyone was happy.
Tyson: For those that may not know, can you explain what ExtraHop is used for?
Matt: I think if you understood our charter, you'll understand why we've approached the PFS problem the way we have, which is, we sit out of band and perform real‑time stream analytics, all the way up to Layer 7, on every transaction, because transactions matter.
Anyone who says otherwise is wrong. We do transaction‑level analysis, including lower‑level TCP state. We know what the socket looks like. We know what Layer 7 transactions are traversing it. We know the consumption patterns, the server, the consumer. We also want to not perturb the systems that we're monitoring, so we're out of band. Staying true to that largely informed our decision in how we were going to approach perfect forward secrecy.
Our CTO and I had a conversation about how to approach PFS years ago. He's a visionary kind of guy so it's not a surprise that he was thinking about this problem for years before we came up with the solution that we thought would: A, scale; B, be practical; and C, not violate the principles behind TLS 1.3 in particular.
Tyson: We only have four minutes here, so I want to give some time for Q&A.
Matt: Yeah, for sure. There's a great question asking about how ExtraHop can help the implementation of 1.3 in a network. This is a fantastic question that has a bunch of hidden implications that I wanted to address. One of the interesting things is how old some of the SSL and early TLS variants really are. They go back years. SSL version 3 was released in the 1990s and we still see a lot of SSL V3 in the wild, in environments today.
In fact, for any environment, I can almost guarantee that we will see some system in the data center somewhere that's running an older, outdated or known-to-be-vulnerable version of some encryption suite or some implementation, TLS 1.0 or 1.1, for example.
ExtraHop can see those handshakes happen (we don't even need to decrypt them) and identify the handshake, the cipher spec, and the implementation.
We can help an organization audit their cryptographic footprint and their cryptographic usage patterns just by simply dropping in and watching the traffic on the wire. We could help an organization prioritize and say, "Hey, look, here are all your SSL v3 hosts. Those are immediately actionable, same with TLS 1.0. We need to take action on that."
Now with the TLS 1.2 stuff, you might have a little more runway, depending on where the tides of the industry go. We can help you: A, audit the cryptographic landscape in your environment; and B, come up with an action plan to remediate those systems and get them updated.
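The kind of passive version fingerprinting Matt describes starts with the very first bytes of a TLS record, which are sent in the clear. Here's a minimal sketch of classifying the version advertised at the record layer; note the caveat that TLS 1.3 deliberately advertises 0x0303 (TLS 1.2) there and signals 1.3 in its supported_versions extension, so a real auditor must parse handshake extensions as well.

```python
# Map the record-layer version bytes to a protocol name.
# (SSLv2 uses a different record format entirely and is omitted.)
VERSIONS = {
    (3, 0): "SSLv3",
    (3, 1): "TLS 1.0",
    (3, 2): "TLS 1.1",
    (3, 3): "TLS 1.2 (or 1.3 via supported_versions)",
}

def record_version(record: bytes) -> str:
    """Classify a TLS record's advertised version without decrypting it."""
    if len(record) < 3 or record[0] != 0x16:  # 0x16 = handshake record type
        return "not a TLS handshake record"
    return VERSIONS.get((record[1], record[2]), "unknown")

# An SSLv3 handshake stands out immediately on the wire:
assert record_version(bytes([0x16, 0x03, 0x00])) == "SSLv3"
assert record_version(bytes([0x16, 0x03, 0x01])) == "TLS 1.0"
```

Even this crude check illustrates why auditing a cryptographic footprint is possible entirely out of band: the version negotiation is visible to any tap before a single byte of application data is encrypted.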