Tag Archives: security

Not up to our usual standards

For a few years now, I’ve been working on and off on a set of libraries which collect cryptography- and security-related code I’ve written for other projects, as well as functionality which is not already available under a permissive license or for which existing implementations do not meet my expectations of cleanliness, readability, portability and embeddability.

(Aside: the reasons why this has taken years, when I initially expected to publish the first release in the spring or summer of 2014, are too complex to explain here; I may write about them at a later date. Keywords are health, family and world events.)

Two of the major features of that collection are the OATH Authentication Methods (which includes the algorithm used by Google Authenticator and a number of commercial one-time code fobs) and the Common Platform Enumeration, part of the Security Content Automation Protocol. I implemented the former years ago for my employer, and it has languished in the OpenPAM repository since 2012. The latter, however, has proven particularly elusive and frustrating, to the point where it has existed for two years as merely a header file and a set of mostly empty functions, just to sketch out the API. I decided to have another go at it yesterday, and actually made quite a bit of progress, only to hit the wall again. And this morning, I realized why.

The CPE standard exists as a set of NIST Interagency reports: NISTIR 7695 (naming), NISTIR 7696 (name matching), NISTIR 7697 (dictionary) and NISTIR 7698 (applicability language). The one I’ve been struggling with is 7695—it is the foundation for the other three, so I can’t get started on them until I’m done with 7695.

It should have been a breeze. On the surface, the specification seems quite thorough: basic concepts, representations, conversion between representations (including pseudocode). You know the kind of specification that you can read through once, then sit down at the computer, start from the top, and code your way down to the bottom? RFC 4226 and RFC 6238, which describe OATH event-based and time-based one-time passwords respectively, are like that. NISTIR 7695 looks like it should be. But it isn’t. And I’ve been treating it like it was, with my nose so close to the code that I couldn’t see the big picture and realize that it is actually not very well written at all, and that the best way to implement it is to read it, understand it, and then set it aside before coding.

One sign that NISTIR 7695 is a bad specification is the pseudocode. It is common for specifications to describe algorithms, protocols and / or interfaces in the normative text and provide examples, pseudocode and / or a reference implementation (sometimes of dubious quality, as is the case for RFC 4226 and RFC 6238) as non-normative appendices. NISTIR 7695, however, eschews natural-language descriptions and includes pseudocode and examples in the normative text. By way of example, here is the description of the algorithm used to convert (“bind”, in their terminology) a well-formed name to a formatted string, in its entirety:

6.2.2.1 Summary of algorithm

The procedure iterates over the eleven allowed attributes in a fixed order. Corresponding attribute values are obtained from the input WFN and conversions of logical values are applied. A result string is formed by concatenating the attribute values separated by colons.

This is followed by one page of pseudocode and two pages of examples. But the examples are far from exhaustive; as unit tests, they wouldn’t even cover all of the common path, let alone any of the error handling paths. And the pseudocode looks like it was written by someone who learned Pascal in college thirty years ago and hasn’t programmed since.
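
For contrast, here is roughly what that summary amounts to. This is a minimal sketch in C, under assumptions of my own choosing: an internal representation holding one string per attribute in the order the formatted string uses, with NULL standing in for the logical value ANY and the empty string for NA, and with the quoting of special characters left out. None of the names below come from the specification.

#include <stdio.h>

#define CPE_NUM_ATTRS 11

/* Hypothetical internal representation: one value per attribute, stored
 * in the fixed order used by formatted strings (part, vendor, product,
 * version, update, edition, language, sw_edition, target_sw, target_hw,
 * other).  NULL stands for the logical value ANY, "" for NA. */
struct cpe_name {
    const char *attr[CPE_NUM_ATTRS];
};

/* Bind a name to a formatted string: emit the "cpe:2.3" prefix followed
 * by the eleven attribute values, separated by colons.  Real code would
 * also quote embedded colons and other special characters. */
static int
cpe_bind_fs(const struct cpe_name *cpe, char *buf, size_t size)
{
    size_t len;
    int i, ret;

    if ((ret = snprintf(buf, size, "cpe:2.3")) < 0 || (size_t)ret >= size)
        return (-1);
    len = (size_t)ret;
    for (i = 0; i < CPE_NUM_ATTRS; i++) {
        const char *value = cpe->attr[i];

        if (value == NULL)
            value = "*";            /* ANY */
        else if (value[0] == '\0')
            value = "-";            /* NA */
        ret = snprintf(buf + len, size - len, ":%s", value);
        if (ret < 0 || (size_t)ret >= size - len)
            return (-1);
        len += (size_t)ret;
    }
    return (0);
}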

The description of the reverse operation, converting a formatted string to a well-formed name, is slightly better in some respects and much worse in others. There is more pseudocode, and the examples include one—one!—instance of invalid input… but the pseudocode includes two functions—about one third of the total—which consist almost entirely of comments describing what the functions should do, rather than actual code.

You think I’m joking? Here is one of them:

function get_comp_fs(fs,i)
  ;; Return the i’th field of the formatted string. If i=0,
  ;; return the string to the left of the first forward slash.
  ;; The colon is the field delimiter unless prefixed by a
  ;; backslash.
  ;; For example, given the formatted string:
  ;; cpe:2.3:a:foo:bar\:mumble:1.0:*:...
  ;; get_comp_fs(fs,0) = "cpe"
  ;; get_comp_fs(fs,1) = "2.3"
  ;; get_comp_fs(fs,2) = "a"
  ;; get_comp_fs(fs,3) = "foo"
  ;; get_comp_fs(fs,4) = "bar\:mumble"
  ;; get_comp_fs(fs,5) = "1.0"
  ;; etc.
end.

This function shouldn’t even exist. It should just be a lookup in an associative array, or a call to an accessor if the pseudocode were object-oriented. So why does it exist? Because the main problem with NISTIR 7695, which I should have identified on my first read-through but stupidly didn’t, is that it assumes that implementations would use well-formed names—a textual representation of a CPE name—as their internal representation. The bind and unbind functions, which should be described in terms of how to format and parse URIs and formatted strings, are instead described in terms of how to convert to and from WFNs. I cannot overstate how wrong this is. A specification should never describe a particular internal representation, except in a non-normative reference implementation, because it prevents conforming implementations from choosing more efficient representations, or representations which are better suited to a particular language and environment, and because it leads to this sort of nonsense.
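
To make the point concrete: with the hypothetical struct-based representation from the earlier sketch, the accessor that the pseudocode dances around is a one-liner, and there is nothing left to parse.

/* Return the i'th attribute of a CPE name, or NULL if out of range.
 * No tokenizing, no escape handling; the parsing happened once, when
 * the formatted string was unbound into the struct. */
static const char *
cpe_get_attr(const struct cpe_name *cpe, int i)
{
    if (i < 0 || i >= CPE_NUM_ATTRS)
        return (NULL);
    return (cpe->attr[i]);
}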

So, is the CPE naming specification salvageable? Well, it includes complete ABNF grammars for URIs and formatted strings, which is good, and a partial ABNF grammar for well-formed names, which is… less good, but fixable. It also explains the meanings of the different fields; it would be useless otherwise. But apart from that, and the boilerplate at the top and bottom, it should be completely rewritten, including the pseudocode and examples, which should reference fictional, rather than real, vendors and products. Here is how I would structure it (text in italic is carried over from the original):

  1. Introduction
    1.1. Purpose and scope
    1.2. Document structure
    1.3. Document conventions
    1.4. Relationship to existing specifications and standards
  2. Definitions and abbreviations
  3. Conformance
  4. CPE data model
    4.1. Required attributes
    4.2. Optional attributes
    4.3. Special attribute values
  5. Textual representations
    5.1. Well-formed name
    5.2. URI
    5.3. Formatted string
  6. API
    6.1. Creating and destroying names
    6.2. Setting and getting attributes
    6.3. Binding and unbinding
  7. Non-normative examples
    7.1. Valid and invalid attribute values
    7.2. Valid and invalid well-formed names
    7.3. Valid and invalid URIs
    7.4. Valid and invalid formatted strings
  8. Non-normative pseudo-code
  9. References
  10. Change log

I’m still going to implement CPE naming, but I’m going to implement it the way I think the standard should have been written, not the way it actually was written. Amusingly, the conformance chapter is so vague that I can do this and still claim conformance with the Terrible, Horrible, No Good, Very Bad specification. And it should only take a few hours.

By the way, if anybody from MITRE or NIST reads this and genuinely wants to improve the specification, I’ll be happy to help.

PS: possibly my favorite feature of NISTIR 7695, and additional proof that the authors are not programmers: the specification mandates that WFNs are UTF-8 strings, which are fine for storage and transmission but horrible to work with in memory. But in the next sentence, it notes that only characters with hexadecimal values between x00 and x7F may be used, and subsequent sections further restrict the set of allowable characters. In case you didn’t know, the normalized UTF-8 representation of a sequence of characters with hexadecimal values between x00 and x7F is identical, bit by bit, to the ASCII representation of the same sequence.

FreeBSD and CVE-2015-7547

As you have probably heard by now, a buffer overflow was recently discovered in GNU libc’s resolver code which can allow a malicious DNS server to inject code into a vulnerable client. This was announced yesterday as CVE-2015-7547. The best sources of information on the bug are currently Google’s Online Security Blog and Carlos O’Donnell’s in-depth analysis.

Naturally, people have started asking whether FreeBSD is affected. The FreeBSD Security Officer has not yet released an official statement, but in the meantime, here is a brief look at the issue as far as FreeBSD is concerned.

Continue reading “FreeBSD and CVE-2015-7547” »

Camouflage

Image: Sechuran Fox (Mike Weedon / Wikimedia / CC BY-SA 3.0)
One fine morning, the King summoned Gerrard, Captain of the Guard, to attend to him at Council.

Gerrard bowed as he approached his monarch. “You asked for me, Sire?”

“Gerrard, my good man, I keep hearing stories about a band of smugglers led by a man who calls himself the Fox. I want to know what your men are doing about it.”

“Sire—we have guard posts and roving patrols, and sometimes we catch a smuggler or two, but they move quietly through the woods and brush, wearing camouflage, and they can choose any direction of approach, whereas we have to stretch our forces along the entire border.”

“Very well, Gerrard. I hereby ban the manufacture, sale and use of camouflage clothing except for the needs of the Royal Guard. You are dismissed.”

Three months later, the King summoned Gerrard again.

“I hear that the smugglers are still operating, despite the measures I ordered. What do you have to say for yourself?”

“Banning camouflage clothing cut off the smugglers’ supply, but did not prevent them from using what they already had. We made more arrests when they ran out, but then they started making their own out of green, gray and black fabric, and we’re back to square one.”

“Very well. Henceforth, the manufacture and sale of green, gray or black fabric or clothing shall be illegal, except for the needs of the Royal Guard. Get to it, Gerrard.”

Some months later, Gerrard was once again summoned to discuss the matter of the Fox.

“I am very displeased, Gerrard. I would have thought your men would have little trouble catching smugglers now that they can no longer buy or make camouflage clothing. And I have been told that the villagers are restless and discontent.”

“Sire, the smugglers are tying grass, moss and branches to their clothes, and blending in better than ever before! And the villagers are complaining that the ban on camouflage and dark clothing is making it difficult for them to hunt—we forbade them to use vegetation like the smugglers do.”

“There is only one solution, then. Burn down the forests and the brush. Let us see the Fox try to sneak through a charred wasteland!”

“But, Sire—”

“Do not question my orders, Gerrard. Burn it all down.”

“Very well, Sire.”

OpenSSH, PAM and user names

FreeBSD just published a security advisory covering, amongst other issues, a piece of code in OpenSSH’s PAM integration which could allow an attacker to use one user’s credentials to impersonate another (CVE-2015-6563, original patch). I would like to clarify two things, one that is already mentioned in the advisory and one that isn’t.

The first is that in order to exploit this, the attacker must not only have valid credentials but also first compromise the unprivileged pre-authentication child process through a bug in OpenSSH itself or in a PAM service module.

The second is that this behavior, which is universally referred to in advisories and the trade press as a bug or flaw, is intentional and required by the PAM spec (such as it is). There are multiple legitimate use cases for this, such as:

  • Letting PAM, rather than the application, prompt for a user name; the spec allows passing NULL instead of a user name to pam_start(3), in which case it is the service module’s responsibility (in pam_sm_authenticate(3)) to prompt for a user name using pam_get_user(3), as sketched after this list. Note that OpenSSH does not support this.

  • Mapping multiple users with different identities and credentials in the authentication backend to a single “template” user when the application they need to access does not need to distinguish between them, or when this determination is made through other means (e.g. an environment variable, which service modules are allowed to set).

  • Mapping Windows user names (which can contain spaces and non-ASCII characters that would trip up most Unix applications) to Unix user names.
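
To illustrate the first two items, here is a rough sketch of a service module doing exactly what the spec permits. The module body, the prompt and the “template” account are invented; pam_get_user(3) and pam_set_item(3) are the standard interfaces involved.

#include <security/pam_appl.h>
#include <security/pam_modules.h>

PAM_EXTERN int
pam_sm_authenticate(pam_handle_t *pamh, int flags, int argc, const char **argv)
{
    const char *user;
    int pam_err;

    (void)flags; (void)argc; (void)argv;

    /* If the application passed NULL to pam_start(3), this prompts for
     * a user name through the conversation function; otherwise it just
     * returns the name the application supplied. */
    if ((pam_err = pam_get_user(pamh, &user, "login: ")) != PAM_SUCCESS)
        return (pam_err);

    /* ... authenticate "user" against the backend ... */

    /* Map the authenticated identity to a shared template account; the
     * application picks up the new PAM_USER with pam_get_item(3).  This
     * is the kind of remapping the patched OpenSSH no longer honors. */
    if (pam_set_item(pamh, PAM_USER, "template") != PAM_SUCCESS)
        return (PAM_SERVICE_ERR);

    return (PAM_SUCCESS);
}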

That being said, I do not object to the patch, only to its characterization. Regarding the first issue, it is absolutely correct to consider the unprivileged child as possibly hostile; this is, after all, the entire point of privilege separation. Regarding the second issue, there are other (and probably better) ways to achieve the same result—performing the translation in the identity service, i.e. nsswitch, comes to mind—and the percentage of users affected by the change lies somewhere between zero and negligible.

One could argue that instead of silently ignoring the user name set by PAM, OpenSSH should compare it to the original user name and either emit a warning or drop the connection if it does not match, but that is a design choice which is entirely up to the OpenSSH developers.

Update 2015-08-27: NIST rates exploitability as “medium” rather than “low” because an attacker who is able to impersonate the UID used by the unprivileged child can use a debugger or other similar method to modify the username that the child passes back to the parent. In other words, an attacker can leverage elevated privileges into other elevated privileges. I disagree with the rating, but have never had any luck getting NIST to correct even blatantly false information in the past.

SSLv3

UPDATE 2014-10-14 23:40 UTC: The details have been published: meet the SSL POODLE attack.

UPDATE 2014-10-15 11:15 UTC: Simpler server test method, corrected info about browsers.

UPDATE 2014-10-15 16:00 UTC: More information about client testing.

El Reg posted an article earlier today about a purported flaw in SSL 3.0 which may or may not be real, but it’s been a bad year for SSL, we’re all on edge, and we’d rather be safe than sorry. So let’s take it at face value and see what we can do to protect ourselves. If nothing else, it will force us to inspect our systems and make conscious decisions about their configuration instead of trusting the default settings. What can we do?

The answer is simple: there is no reason to support SSL 3.0 these days. TLS 1.0 is fifteen years old and supported by every browser that matters and over 99% of websites. TLS 1.1 and TLS 1.2 are eight and six years old, respectively, and are supported by the latest versions of all major browsers (except for Safari on Mac OS X 10.8 or older), but are not as widely supported on the server side. So let’s disable SSL 2.0 and 3.0 and make sure that TLS 1.0, 1.1 and 1.2 are enabled.

What to do next

Test your server

The Qualys SSL Labs SSL Server Test analyzes a server and calculates a score based on the list of supported protocols and algorithms, the strength and validity of the server certificate, which mitigation techniques are implemented, and many other factors. It takes a while, but is well worth it. Anything less than a B is a disgrace.

If you’re in a hurry, the following command will attempt to connect to your server using SSL 3.0 (the leading :| feeds openssl empty input so it exits as soon as the handshake is done; if your OpenSSL build still supports SSL 2.0, repeat the test with -ssl2):

:|openssl s_client -ssl3 -connect www.example.net:443

If the last line it prints is DONE, you have work to do.

Fix your server

Disable SSL 2.0 and 3.0, enable TLS 1.0, 1.1 and 1.2, and turn on forward secrecy (ephemeral Diffie-Hellman).

For Apache users, the following line goes a long way:

SSLProtocol ALL -SSLv3 -SSLv2

It disables SSL 2.0 and 3.0, but does not modify the algorithm preference list, so your server may still prefer older, weaker ciphers and hashes over more recent, stronger ones. Nor does it enable forward secrecy.

The Mozilla wiki has an excellent guide for the most widely used web servers and proxies.
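
If the server in question is an application of your own that links against OpenSSL, rather than a web server you configure, the same effect can be achieved in code. Here is a minimal sketch against the OpenSSL 1.0.x API; the function name is mine, and error reporting is omitted.

#include <openssl/ssl.h>

/* Create a server context that negotiates TLS 1.0 through 1.2 but
 * refuses SSL 2.0 and 3.0. */
SSL_CTX *
new_tls_server_ctx(void)
{
    SSL_CTX *ctx;

    SSL_library_init();
    SSL_load_error_strings();
    /* SSLv23_server_method() negotiates the highest protocol version
     * both ends support; the options below take SSL 2.0 and 3.0 out
     * of the running. */
    if ((ctx = SSL_CTX_new(SSLv23_server_method())) == NULL)
        return (NULL);
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
    return (ctx);
}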

Test your client

The Poodle Test website will show you a picture of a poodle if your browser is vulnerable and a terrier otherwise. It is the easiest, quickest way I know of to test your client.

Qualys SSL Labs also have an SSL Client Test which does much the same for your client as the SSL Server Test does for your server; unfortunately, it is not able to reliably determine whether your browser supports SSL 3.0.

Fix your client

On Windows, use the Advanced tab in the Internet Properties dialog (confusingly not searchable by that name, search for “internet options” or “proxy server” instead) to disable SSL 2.0 and 3.0 for all browsers.

On Linux and BSD:

  • Firefox: open about:config and set security.tls.version.min to 1. You can force this setting for all users by adding lockPref("security.tls.version.min", 1); to your system-wide Mozilla configuration file. Support for SSL 3.0 will be removed in the next release.

  • Chrome: open the settings page and select “Show advanced settings”. There is an HTTPS/SSL section, but there is apparently no way to disable SSL 3.0 there. Support for SSL 3.0 will be removed in the next release.

I do not have any information about Safari and Opera. Please comment (or email me) if you know how to disable SSL 3.0 in these browsers.

Good luck, and stay safe.