Monday, October 27, 2008

Debunking Google's Security Vulnerability Disclosure Propaganda

Via CNET -

Question: You're a multi-billion-dollar tech giant, and you've launched a new phone platform after much media fanfare. Then, within days of its release, a security researcher finds a flaw in your product. Worse, the vulnerability exists because you shipped old (and known to be flawed) software on the phones. What should you do? Issue an emergency update, warn users, or perhaps even issue a recall? If you're Google, the answer is simple -- attack the researcher.

With the news of a flaw in Google's Android phone platform making the New York Times on Friday, the search giant quickly ramped up the spin machine. After first downplaying the damage to which the flaw exposed users, anonymous Google executives attempted to discredit the security researcher, Charlie Miller, a former NSA employee turned security consultant. Miller, the unnamed Googlers argued, acted irresponsibly by going to the New York Times to announce his vulnerability, instead of giving the Big G a few weeks or months to fix the flaw:

Google executives said they believed that Mr. Miller had violated an unwritten code between companies and researchers that is intended to give companies time to fix problems before they are publicized.

What the Googlers are talking about is the idea of "responsible disclosure," one method of disclosing security vulnerabilities in software products. While researchers frequently follow this approach, it is not the only method available, and despite the wishes of the companies whose products are analyzed, it is by no means the "norm" for the industry.

Another frequently used method is "full disclosure" -- in which a researcher posts complete details of a vulnerability to a public forum (typically a mailing list dedicated to security topics). Researchers often take this approach when they have discovered a flaw in a product made by a company with a poor track record of working with researchers -- or, worse, one that has threatened to sue them. For example, some researchers refuse to provide Apple with any advance notification, due to its past behavior.

A third method involves selling information on the vulnerabilities to third parties (such as TippingPoint and iDefense) -- who pass that information on to their own customers, or perhaps keep it for themselves. Charlie Miller, the man who discovered the Android flaw, has followed this path in the past, most notably when he sold details of a flaw in the Linux kernel to the US National Security Agency for $50,000 (pdf).

First, consider the fact that disclosure is a two-way street. If Google wants researchers to come to it first with vulnerability information, it is only fair to expect that Google be forthcoming with the community (and the general public) once the flaw has been fixed. Google's approach in this area is one of total secrecy -- not acknowledging flaws, and certainly not notifying users that a vulnerability existed or has been fixed. Google's CIO admitted as much in a 2007 interview with the Wall Street Journal:

Regarding security-flaw disclosure, Mr. Merrill says Google hasn't provided much because consumers, its primary users to date, often aren't tech-savvy enough to understand security bulletins and find them "distracting and confusing." Also, because fixes Google makes on its servers are invisible to the user, notification hasn't seemed necessary, he says.

Second, companies do not have a right to expect "responsible disclosure." It is a mutual compromise, in which the researcher provides the company with advance notification in exchange for some assurance that the company will act reasonably, keep the lines of communication open, and give the researcher full credit once the vulnerability is fixed.

Google's track record in this area leaves much to be desired. Many top-tier researchers have not been credited for disclosing flaws, and in some cases, Google has repeatedly dragged its feet in fixing them. The end result is that many frustrated researchers have opted to follow the full disclosure path, after hitting a brick wall when trying to provide Google with advance notice.

I can personally attest to this experience, having discovered a fairly significant flaw in a number of commercial Firefox toolbars back in 2007. While Mozilla and Yahoo replied to my initial email within a day or so, and kept the lines of communication open, Google repeatedly stonewalled me, and I didn't hear anything from the company for weeks at a time. Google eventually fixed the flaw a day or two after I went public with the vulnerability -- 45 days after I had originally given the company private notice. As a result, I have extreme sympathy for those in the research community who have written Google off.

[...]

The Android platform is built on top of more than 80 open source libraries and programs. This particular flaw had been known about for some time and had already been fixed in the current versions of the open source libraries involved. The flaw in Google's product exists only because the company shipped out-of-date software that was known to be vulnerable.

----------------------------------------------

We saw the same thing in Google Chrome, which was built using an older and vulnerable version of WebKit. Utilizing open-source components and libraries is a double-edged sword -- at best.

Just look at IBM, HP, Apple, and the hundreds of appliance vendors as perfect examples. All of these vendors regularly have to retro-patch open-source fixes (OpenSSH, PHP, Apache, OpenSSL, etc.) back into their customized products. But as we all know, some are quicker than others...
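
To make that maintenance burden concrete, here is a minimal Python sketch of the kind of version audit a vendor would need to run against its bundled components. Everything in it -- the component names, the version numbers, and the needs_backport helper -- is a hypothetical placeholder for illustration, not real advisory data:

    # Hypothetical sketch: flag bundled open-source components that lag
    # behind the upstream release containing a security fix.
    # All component names and version numbers below are illustrative.

    FIXED_UPSTREAM = {
        # component -> first upstream version that shipped the fix
        "openssl": (0, 9, 8, 10),
        "openssh": (5, 1),
        "webkit":  (525, 27),
    }

    SHIPPED = {
        # component -> version actually bundled in the customized product
        "openssl": (0, 9, 8, 7),
        "openssh": (5, 1),
        "webkit":  (525, 13),
    }

    def needs_backport(shipped, fixed):
        """True when the bundled version predates the upstream fix."""
        return shipped < fixed  # version tuples compare element by element

    for name, fixed in sorted(FIXED_UPSTREAM.items()):
        shipped = SHIPPED.get(name)
        if shipped and needs_backport(shipped, fixed):
            print("%s: shipping %s, fix landed upstream in %s" % (
                name,
                ".".join(map(str, shipped)),
                ".".join(map(str, fixed)),
            ))

Spotting the lag is the easy part, of course; the real work is porting each upstream fix into a forked, customized copy of the component -- which is exactly the step where out-of-date code ends up shipping.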
