The Case Against Full Disclosure
I remember the first time I met H. D. Moore, in 2007. I've never met him in person; instead, I met Moore over IRC (Internet Relay Chat). My interest in IRC stemmed from a theory I wanted to test: that spying on and profiling the criminals who use IRC would make me better able to advise my clients on countermeasures. While I never achieved more than modest success as a break-in artist myself, I am proud to say that I know of no penetration-testing team better than mine, outside of those that also monitor rival hacker groups.
“...publicizing a vulnerability found in popular software doesn't increase the likelihood that it will be successfully patched; rather it increases the likelihood that the vulnerability will be exploited.”
I met Moore, the principal author of a popular hacking tool, by asking for tips on post-intrusion procedures on the Windows operating system. He volunteered accurate information quite freely, once he had satisfied himself that I was not actively planning a crime. I was impressed both by Moore's depth of knowledge and by his affability. After all, I was speaking to something of a celebrity: the godfather of the automated shellcode sphere.
Yet, as much as I sometimes admire the man, certain of his actions are out of sync with the views he espouses. I'm almost hesitant to raise the matter, given my admiration for Moore and the fact that he is certainly not alone in giving in to curiosity about vulnerabilities in public infrastructure. Some law enforcement officials feel it's better if white hat hackers, even respected professionals such as Moore, don't constantly poke at public infrastructure. Moore is not the first to make such a mistake. Plenty of other hackers, Andrew Auernheimer (aka "Weev") for instance, feel they have a right to perform basic research, even when that research involves downloading sensitive data from public resources.
No one wants to be the one to tell security researchers not to aid in the development of secure public infrastructure. But from the perspective of someone who has worked professionally for many years to combat threats to that infrastructure, most of the justifications security researchers offer for tampering with sensitive data are simply wrong. For example, publicizing a vulnerability found in popular software doesn't increase the likelihood that it will be successfully patched; rather, it increases the likelihood that the vulnerability will be exploited. What's more, detecting bugs doesn't decrease the incidence of bugs in software, nor does it reduce the likelihood of criminals breaking into systems. If anything, break-ins have become easier over the past twenty years, and they aren't likely to decrease until the quality of system administration improves.
The problem is further complicated because white hats tend to blame, sometimes to the point of personal rage, system and application programmers for the current state of enterprise software security. Software design flaws should not be seen as a societal problem. After all, a criminal who will hop a four-foot security fence will also hop a five-foot fence. A good defensive security team is trained to deal with threats of varying severity and sophistication.
The reason the typical white hat perspective is so skewed has to do with flawed underlying assumptions about online culture. White hats often assume, incorrectly, that once they finish patching all currently open security bugs in existing software, by hook or by crook, the security problems will be permanently solved for everyone, everywhere. But white hats are not slowly closing off a small, fixed pool of security vulnerabilities. Because the population of exploitable vulnerabilities in the software ecosystem depends on the rate of new releases into that ecosystem, attempts to curtail the bug population through individual reporting have a negligible effect on the quantity, quality, and variety of bugs available for exploitation. Many vulnerabilities will never be discovered at all, instead aging out of the population as equipment reaches legacy status. Further, a considerable number of discovered vulnerabilities will never be made public. Groups that care to invest in clandestine exploit techniques do so until they reach what is, to them, a comfortable level of intellectual capital. Should any of these exploits become public, the most direct effect on such a group is simply to extend its research and development cycle by a negligible amount. This tends to frustrate white hats, who feel it is their personal duty to protect the Internet at large. Meanwhile, companies and private individuals remain vulnerable to "unknown" exploits, including those that allow intruders, botnet nodes, and the like to reside on systems undetected for years, if they are ever detected at all. To the extent that white hats truly want to protect the Internet, they are not succeeding.
The prominence of vulnerability disclosures in the popular media is itself an anomaly. Security stories that focus on celebrity exploits are, at best, distractions from actual developments in security technology and, at worst, mere trumpet blowing. White hats should understand that if they break the law publicly, they're likely to be confronted by law enforcement. They should also understand that tampering with public resources, even purely for research or exploration, is in many cases irresponsible.
The solution is to focus on hiring better-informed, diligent system administrators who can contribute to the overall safety of the average network, rather than continually auditing against the latest celebrity virus.
The Trouble with Enterprise Software
We ask a lot from our software. We expect it to know how to process financial accounting reports, how to connect to a printer, how to download songs from the Internet, and how to find a good place for lunch.
While firewalls can block a variety of attacks, the single best defense of an application is good programming.
For even the simplest of these tasks to happen, your software must process thousands of variables. Especially in the case of externally supplied information, our software needs instructions on how to detect suspicious data, and what to do with it when encountered. Software that performs only its basic job without checking for problems exposes your network to external attacks.
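The kind of defensive checking described above can be sketched in a few lines. This is a generic illustration; the field name, length limit, and character whitelist are assumptions chosen for the example, not drawn from any particular product:

```python
import re

# Hypothetical limits for an externally supplied username field.
MAX_NAME_LEN = 64
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]+$")

def validate_username(raw: str) -> str:
    """Reject suspicious input instead of passing it along blindly."""
    if not isinstance(raw, str):
        raise ValueError("username must be a string")
    if len(raw) > MAX_NAME_LEN:
        raise ValueError("username too long")
    if not NAME_PATTERN.match(raw):
        raise ValueError("username contains disallowed characters")
    return raw
```

The point is not the specific rules but the posture: unexpected input is rejected outright rather than handed to the rest of the application.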
It is unfortunate that, in its haste to meet release schedules, the well-known antivirus vendor Trend Micro released antivirus software that shows signs of vulnerability to external tampering, according to preliminary research reported on the Silent Signal blog.
Trust is Costly
Software is frequently released without ample testing (if any at all), especially software from large companies like Trend Micro. While Trend may have tested the software, it left the transmission of security parameters over the network open to tampering.
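One common way to protect parameters sent over a network is to authenticate them with a keyed hash, so that a tampered update is rejected. The sketch below is a generic illustration of that idea, not Trend's actual mechanism; the shared key and parameter format are placeholders, and key distribution is assumed to happen out of band:

```python
import hashlib
import hmac

# Placeholder for a key provisioned out of band (never sent on the wire).
SHARED_KEY = b"provisioned-out-of-band"

def sign_params(params: bytes) -> bytes:
    """Compute an authentication tag over the raw parameter bytes."""
    return hmac.new(SHARED_KEY, params, hashlib.sha256).digest()

def verify_params(params: bytes, tag: bytes) -> bool:
    """Accept parameters only if the tag matches; constant-time compare."""
    expected = hmac.new(SHARED_KEY, params, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Any modification of the parameters in transit invalidates the tag, so the receiving side can discard the update instead of trusting it.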
The results are often no different when other enterprise software is tested. Procurement teams should have software tested for vulnerabilities before implementation, with the understanding that software written poorly enough should be disqualified from further consideration on security grounds alone. Most enterprises have a large enough attack surface that bugs of this sort can lead to unwanted disclosure of information or, in some cases, other problems up to and including full system compromise.
From the Field
I remember one of the largest application installations I performed for an enterprise client. While I was mostly content to install the software out of the box according to the recommended installation guide, I felt it necessary to introduce one or two minor innovations, simply for reasons of risk management. Most importantly, I replaced the default encryption keys used for session key exchange on the server-client connections. It seemed like a good idea at the time, and fortunately, this relatively simple measure completely eliminated many of the same sorts of vulnerabilities found in the Trend Micro case.
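The key-replacement step described above can be sketched generically: instead of trusting vendor-default key material, generate fresh keys per deployment from a cryptographically secure source. The 32-byte size and hex encoding here are illustrative assumptions; a real product dictates its own key format:

```python
import secrets

def generate_site_key(num_bytes: int = 32) -> str:
    """Generate deployment-specific key material from a CSPRNG.

    Replaces vendor-default keys so that a leak of one site's key
    (or of the vendor's default) does not compromise other sites.
    """
    return secrets.token_hex(num_bytes)
```

Because every installation gets unique keys, an attacker who extracts the defaults from the product's distribution media gains nothing against a deployment hardened this way.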
Alongside good coding and application quality, application security engineering should be considered a critical line of defense. Application testing should follow every installation or modification, as well as periodic maintenance.
Avoid Appearing Negligent
Security managers responsible for their employer's malware defenses and antivirus software will have egg on their faces if they've trusted Trend, for example. A bug of this sort should have been detected before installation, or during routine software monitoring and maintenance.
Vulnerabilities of this sort are largely preventable.
Buying Security from an Auditing Company
Adventures on the Front Lines
You wouldn't buy a car from McDonald's.
I spent most of a recent weekend helping a friend who manages technology for a large financial company on Wall Street. I'd gotten the call late Thursday and knew I'd have to help. His security staff, as well as his top security contractor, had failed to reach the milestones needed to meet regulatory compliance deadlines.
When an individual or business entity has no vested interest or personal motivation on a particular subject, such as the security and privacy of our data, how can we expect them to care as deeply about it as we do?
My friend's frustration stemmed from his inability to get satisfactory insight and visibility into his team's proffered compliance plan. He grew increasingly skeptical that there was any real underlying strategy, or that his team was sufficiently organized to meet the deadline.
During the course of our work, I was able to independently assess and diagnose the problem. My friend's operational security plan, and the implementation of that plan, had been trusted to individuals who did not have the experience to pull it off. What made things worse was that the contractors, the hired guns responsible for fixing problems, had no real-world experience administering the computer systems they were attempting to analyze. Imagine, for a second, if you found out your trusted automotive mechanic was someone who didn't know how to drive!
It is unwise to get defensive security advice from a team of consultants who work for an auditing company. One hardly needs to stretch the imagination to identify the underlying reason why financial companies have so much difficulty delivering proper service to their customers. The auditing firms that have stepped into the security market over the past ten years entered the field simply because they felt they could make a buck. None of their senior managers had any interest or background in information security. Desiring only a good quarterly bonus, most auditing firms have focused on profitability to the exclusion of even attempting to produce quality work. Excellence in the field is certainly not an ambition for them. One would expect the Lamborghini family to take an interest in producing first-rate cars, or the descendants of Stradivarius to be conscientious about the quality of violin manufacture. When an individual or business entity has no vested interest or personal motivation on a particular subject, such as the security and privacy of our data, how can we expect them to care as deeply about it as we do?
Even these financial auditing companies' hiring practices are questionable. For cultural reasons, these firms believe that customer service trumps rudimentary technical knowledge, mistakenly treating an applicant with previous consulting experience as more qualified than one with a wealth of prior technical experience. The gentle reader will understand that one has a right to be skeptical of any technology consultant without recent applied technical experience.
Don't fall victim to audit firms' claims. Don't put yourself in the position of losing sleep over missed deadlines and uncertainty about your company's real security posture and ability to meet compliance. Firms historically solid in matters of financial auditing may know the rudiments of security auditing, but without the in-depth experience of true security technologists, they'll never be able to give you the complete picture, or the satisfaction and assurance that come with devoted specialists.
Technical Regulatory Compliance
Continuous attention is necessary to avoid pitfalls.
Regulatory compliance is not something that happens once per certification cycle; rather, it is a continuous process, driven not only by regulatory constraint but also by the need to avoid unnecessary liability and to ensure that a company adheres to a set of standards in its day-to-day operations.
Many circumstances can invite scrutiny of a company's compliance efforts; litigation, for example, even if unrelated to compliance, or whistleblowing can draw excessive attention to your processes and policies.
Unfortunately, many compliance staff feel unsupported compared with the technology managers they are attempting to regulate. Many rank-and-file employees view their peers in the compliance department as adversaries: in most cases, as extensions of the very regulators from whom compliance efforts are meant to shield the company.
The compliance officer's task is further complicated by the fact that they typically cannot directly access the systems they are meant to bring into compliance. They rely on technical personnel who report to technology managers, and those managers cannot be counted on to volunteer problems they'd prefer not to disclose.
We get it. Iron Shields wants to help.
Consider the following points.
- Hands-on Technical Audit: Do you have a comprehensive understanding of your company's security profile? Do you know for certain that due attention is being given to compliance measures? Are you sure that every effort is being made to keep systems and software secure on a daily basis? Do you know what your company may need to budget to improve its security posture? You'll have the full picture after an Iron Shields briefing.
- Continuous Monitoring: Technical staff are meant to perform periodic logging and analysis. Are they? This is one regulatory requirement that most companies will fail without our help.
- Pre-Audit Fire Drill: If you're not sure you're prepared for a surprise audit, or that you're unlikely to be ready for a scheduled audit by an established deadline, engage the help of Iron Shields. With our technical background, we understand how to usher along technical projects. We've never had a client fail an audit, and there's no reason your company should.
Don't assume that technical staff are giving you the complete picture. Iron Shields can provide you with direct technical insight, a realistic view of the technology for which you provide governance, and the people who manage it.
Security Managers Struggle to Manage Multiple Applications
In large organizations everywhere, security managers and CISOs struggle to manage the wide array of security applications that are meant to help them.
The problem is ultimately one of personnel hours.
Security managers feel the need to install the latest applications. For some, there is pressure from senior technology managers to keep pace with technological trends, or pressure from rank-and-file security staff to procure new gadgets. In other cases, these applications come bundled with larger software packages that security managers genuinely need, or managers have been told to install additional applications during a security audit or for regulatory compliance. Some security managers plunge into a new application because they are bored with the existing ones, or because they worry that without experience managing the latest application, they won't have an impressive resume the next time they need to apply for a job.
Security managers find themselves confronted with a variety of applications: firewalls, intrusion detection systems, honeypots, intrusion prevention systems, next-generation firewalls, LAN segregation technology, host-based security systems, anti-virus, anti-malware, data loss prevention systems, security information management systems, proxies, log management systems, and mobile device management, to name a few.
At many organizations, one finds applications in a variety of states. Some are actively utilized. Others have been procured but never fully installed. Still others are installed but unused, or have not been adequately maintained and so are of declining usefulness to the engineers who attempt to use them. Security applications sometimes get lost in various stages of the procurement process.
“You'd be amazed at what we find during an audit,” said Simon Montfort, a Senior Security Engineer at Iron Shields. “Applications in all sorts of states of disrepair. Frequently technology personnel have forgotten what exactly they have. Maintenance is done sporadically, if at all.”
Managing this technology is a daunting task for a security manager. A typical technology manager may be responsible for one or two applications, or at most a dozen, sometimes including network equipment or desktops. By contrast, the security department is tasked not only with maintaining its own equipment, but also with incident response, risk management, compliance, and enforcement. This means a security manager ends up trying to handle event monitoring, application deployment, and maintenance in their spare time.
It is no wonder that many applications are deployed incorrectly or are not wholly utilized.
One mistake that many security managers make is to overextend their attention across too many applications. Rather than focusing on providing good usage and maintenance of the most essential applications, managers sometimes allow or encourage their staff to deploy more applications than they can comfortably maintain, and in extreme cases, more security equipment than they can comfortably use.
At best, surplus applications are a distraction. At worst, they provide additional targets for hackers. These pitfalls can be avoided by focusing on proper deployment, maintenance, and use of core security technologies before even examining advanced ones.
"It doesn't make sense to send your best technician to install a fancy new customized database data loss prevention system when you haven't patched your firewall in the last quarter, nor does it make sense to install a host-based intrusion detection system if you haven't done your quarterly review of your desktop baseline. What's the point? People get ahead of themselves," Montfort explained.
Wired: “Millions in Taxpayer Dollars Paid to Internet Companies to Cover Prism Compliance Costs”
This Wired article reveals that the NSA paid millions of dollars to a number of major social and email providers to cover the costs of surveillance under PRISM, even after PRISM was ruled unconstitutional. While it is common for the government to pay to offset companies' costs of complying with surveillance and information requests, in this case it appears that payments were allocated for companies to find ways to circumvent the ruling and the law.
Ars Technica: ‘Seemingly benign “Jekyll” app passes Apple review, then becomes “evil”’
Ars Technica reports that researchers discovered a way to inject undetectable malware into otherwise safe iOS apps, bypassing Apple's supposedly stringent review process. Although Apple made changes to its operating system in response to the findings, it is as yet unclear whether the vulnerability has been thoroughly eradicated.