Return of Bleichenbacher: ROBOT attack means trouble for TLS
December 16, 2017
A team of security researchers discovered eight leading vendors and open source projects whose implementations of the Transport Layer Security protocol are vulnerable to the Bleichenbacher oracle attack, a well-known flaw that was first described in 1998.
The Bleichenbacher attack has been referenced in all Internet Engineering Task Force specifications for the Transport Layer Security (TLS) protocol since version 1.0 in 1999, and implementers of TLS versions through 1.2 were warned to take steps to avoid the Bleichenbacher attack. However, the researchers noted that, based on the ease with which they were able to exploit the vulnerability, it appears many implementers ignored the warnings.
The attack is named after its discoverer, Daniel Bleichenbacher, a Swiss cryptographer who was working for Bell Laboratories in 1998 when his research on the vulnerability was first published. The TLS protocol, which was meant to replace Secure Sockets Layer, is widely used for encryption and the authentication of web servers.
The research team included Hanno Böck, information security researcher; Juraj Somorovsky, research associate at the Horst Görtz Institute for IT Security at the Ruhr-Universität Bochum in Germany; and Craig Young, computer security researcher with Tripwire’s Vulnerability and Exposures Research Team. “Perhaps the most surprising fact about our research is that it was very straightforward,” the researchers wrote. “We took a very old and widely known attack and were able to perform it with very minor modifications on current implementations. One might assume that vendors test their TLS stacks for known vulnerabilities. However, as our research shows in the case of Bleichenbacher attacks, several vendors have not done this.”
The researchers said many web hosts are still vulnerable to the ROBOT attack, and nearly a third of the top 100 sites in the Alexa top 1 million list are vulnerable. The team identified vulnerable products from F5, Citrix, Radware, Cisco, Erlang and others, and they “demonstrated practical exploitation by signing a message with the private key of facebook.com’s HTTPS certificate.”
The researchers described their work as the “Return of Bleichenbacher’s Oracle Threat,” or ROBOT, and published it in a paper of the same title, as well as on a branded vulnerability website. The team also published a capture-the-flag contest, posting an encrypted message and challenging the public to decrypt the message using the strategies described in the paper.
The ROBOT attack revisits a vulnerability discovered in 1998.
TLS protocol designers at fault
The researchers placed the blame for the ease of their exploits squarely on the shoulders of TLS protocol designers. The ROBOT attack is made possible by the behavior of servers implementing TLS with the RSA Public-Key Cryptography Standards (PKCS) #1 version 1.5 specification; the issues that enable the Bleichenbacher attack are fixed in later versions of PKCS #1. TLS 1.3, which is expected to be finalized soon, removes the PKCS #1 v1.5-based RSA encryption key exchange entirely and specifies the newer padding scheme from PKCS #1 v2.2 for RSA signatures.
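To see why PKCS #1 v1.5 is fragile here, it helps to look at its encryption-padding layout: a `0x00 0x02` header, at least eight nonzero random padding bytes, a `0x00` separator, then the message. The following is a minimal illustrative sketch of a validity check (the function name and structure are ours, not from any particular TLS stack); whenever an attacker can observe whether such a check passed, the server becomes a padding oracle.

```python
def pkcs1_v15_unpad(block: bytes):
    """Return the message if `block` is valid PKCS #1 v1.5 padding, else None."""
    # A valid block is at least 11 bytes and starts with 0x00 0x02.
    if len(block) < 11 or block[:2] != b"\x00\x02":
        return None
    # Find the 0x00 separator that ends the random padding string.
    sep = block.find(0x00, 2)
    # sep < 10 also covers find() == -1; the padding must be >= 8 bytes.
    if sep < 10:
        return None
    return block[sep + 1:]
```

A Bleichenbacher attacker submits many modified ciphertexts and learns, from the server's differing responses, which ones decrypt to a block passing this check, gradually narrowing down the plaintext.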
“The TLS protocol designers absolutely should have been more proactive about replacing PKCS #1 v1.5. There is an unfortunate trend in TLS protocol design to continue using technology after it should have been deprecated,” Young told SearchSecurity by email. He added that vendors also “should have been having their code audited by firms who specialize in breaking cryptography, since most software companies do not have in-house expertise for doing so.”
The root of the vulnerability the ROBOT attack exploits is that a server's observable behavior can differ depending on whether the RSA-encrypted data it decrypts is properly formatted. The original specification for TLS 1.0 even mentions the Bleichenbacher attack as an issue for the protocol. Writing in 1999 in RFC 2246, "The TLS Protocol Version 1.0," the IETF noted that the attack "takes advantage of the fact that by failing in different ways, a TLS server can be coerced into revealing whether a particular message, when decrypted, is properly PKCS #1 formatted or not."
The solution that specification proposed "is to treat incorrectly formatted messages in a manner indistinguishable from correctly formatted RSA blocks. Thus, when it receives an incorrectly formatted RSA block, a server should generate a random 48-byte value and proceed using it as the premaster secret. Thus, the server will act identically whether the received RSA block is correctly encoded or not."
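That countermeasure can be sketched roughly as follows. This is a simplified illustration, not code from any real TLS implementation: the function name and parsing are our own, and the input is assumed to be the block produced by raw RSA decryption of the client's key-exchange message.

```python
import os

def premaster_from_block(block: bytes) -> bytes:
    """RFC 2246 countermeasure: never reveal whether the padding was valid."""
    sep = block.find(0x00, 2)  # position of the 0x00 separator, or -1
    padding_ok = (len(block) >= 11 and block[:2] == b"\x00\x02" and sep >= 10)
    if not padding_ok or len(block) - sep - 1 != 48:
        # Invalid padding or wrong premaster length: silently substitute
        # 48 random bytes and continue the handshake. It will fail later
        # at the Finished message either way, so the error path is
        # indistinguishable from the success path on the wire.
        return os.urandom(48)
    return block[sep + 1:]
```

The key point is that there is no error branch an attacker can distinguish; as the ROBOT research showed, many real implementations still diverged in subtler ways, such as differing alerts, timeouts or connection resets.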
Potential for attacks, detection and remediation
The researchers noted in the paper that the ROBOT flaw could lead to very serious attacks. “For hosts that are vulnerable and only support RSA encryption key exchanges, it’s pretty bad. It means an attacker can passively record traffic and later decrypt it,” the team wrote on the ROBOT website. “For hosts that usually use forward secrecy, but still support a vulnerable RSA encryption key exchange, the risk depends on how fast an attacker is able to perform the attack. We believe that a server impersonation or man in the middle attack is possible, but it is more challenging.”
Young said it might be possible to detect attempts to abuse the Bleichenbacher vulnerability, but it would not be easy. “This attack definitely triggers identifiable traffic patterns. Servers would observe a high volume of failed connections, as well as a smaller number of connections with successful handshakes, and then little to no data on the connection,” he told SearchSecurity. “Unfortunately, I am unaware of anybody actually doing this. Logging the information needed to detect this can be cumbersome. And for a site receiving a billion connections a second, it could be quite difficult to notice 10,000 to 100,000 failed connections.”
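As a toy illustration of the traffic pattern Young describes, a monitoring script might flag clients producing an unusual burst of failed handshakes, since a Bleichenbacher attack needs tens of thousands of probes. The event format and threshold here are invented for the sketch; a real deployment would parse its own server or load-balancer logs.

```python
from collections import Counter

# Threshold chosen to match the 10,000-100,000 probe volume mentioned
# above; in practice it would be tuned to a site's baseline failure rate.
FAILED_HANDSHAKE_THRESHOLD = 10_000

def suspicious_clients(events):
    """events: iterable of (client_ip, handshake_ok) tuples from logs."""
    failures = Counter(ip for ip, ok in events if not ok)
    return {ip for ip, n in failures.items() if n >= FAILED_HANDSHAKE_THRESHOLD}
```

As Young notes, the hard part is not the counting but logging handshake outcomes at all at high connection volumes.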
As for other, ongoing risks, Young said “PKCS #1 v1.5 is not being used in TLS 1.3, but it is still used in other systems like XML encryption. Whether or not it can be disabled through configuration is highly application-specific.”