The Fallacy of Security Through Obscurity


An age-old adage in cybersecurity is that you should expose as few details as possible about the software you are running. This supposedly makes it harder for attackers to exploit you, as they cannot directly know which version of a given piece of software you are running. Furthermore, they will have to spend more time and resources on reconnaissance, or they will simply move on to easier prey.

Security through obscurity is the practice of relying on secrecy or the concealment of system details as the primary means of securing a system, rather than implementing robust security measures.

Many others and I, Juhani Eronen for example, have been trying to dismantle this myth, but it seems to be a sticky one. In reality, when a new vulnerability emerges, or an exploit for a known one, attackers will seek out the population of potentially vulnerable machines and simply try to exploit them. They couldn't care less about your OPSEC policy of not exposing the software stack or version to the outside world. To them, your interface is just another door to try and see if it pops a shell back at them.

Security Through Obscurity Hinders the Defenders

Even if attackers do not really need to care about software banners, for defenders this information is valuable in bridging the time gap between finding out that you are vulnerable and being able to remedy the vulnerability.

Scanners Love Banners

Regardless of whether you are using a vulnerability scanner such as Nessus or a search engine such as Shodan, you will be faced with a dilemma about how to interpret the results for a given machine or interface.

The challenge for benign scanners is that they cannot really try to verify a given result through exploitation. This in turn means that the interpretation of the result is often based on the software banner and the potential version information in it.
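To make that concrete, here is a minimal sketch (in Python, against a hypothetical target host) of what banner-based fingerprinting boils down to: connect, read the Server header, and guess a product and version from it. Neither the host name nor the regular expression comes from any particular scanner; this is just an illustration of the guesswork involved.

```python
import re
import socket

def grab_http_banner(host: str, port: int = 80, timeout: float = 5.0) -> str:
    """Send a minimal HTTP request and return the Server header value, if any."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = f"HEAD / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()
        sock.sendall(request)
        response = sock.recv(4096).decode(errors="replace")
    for line in response.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return ""

# Naive version extraction: essentially the guesswork a benign scanner is
# forced into, since it cannot verify a finding through exploitation.
banner = grab_http_banner("www.example.com")           # hypothetical target
match = re.search(r"([A-Za-z_-]+)/([\d.]+)", banner)   # e.g. "Apache/2.4.6"
if match:
    product, version = match.groups()
    print(f"Guessed product: {product}, guessed version: {version}")
else:
    print("No version in the banner - obscured, stripped, or not HTTP.")
```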

That would be a valid approach if you could trust the banner to represent correct information. In reality, the reliability of this information comes down to three variables:

  1. how long a given piece of software is supported by the provider
  2. whether the version number is visible in the banner
  3. whether the provider backports patches to the software without changing the version number (illustrated in the sketch further below).

This of course mostly applies to open source software, but let's face it: even most commercial software contains open source components, which need to be patched in the same way in the commercial packages as well.
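As a small illustration of the backporting problem, the sketch below compares a banner version against an upstream "fixed in" version the way a naive check might. The package and version data are hypothetical; the point is that a distribution which backports the fix keeps the old version string, so a purely version-based comparison flags a fully patched host as vulnerable.

```python
from packaging.version import Version  # third-party 'packaging' library

# Hypothetical data: a flaw fixed upstream in Apache httpd 2.4.27.
FIXED_IN = Version("2.4.27")

# What the banner on an enterprise-distro host says: the distro keeps
# shipping 2.4.6 and backports security fixes without bumping the number.
banner_version = Version("2.4.6")

# The only verdict a banner-based check can give:
if banner_version < FIXED_IN:
    print("Flagged as vulnerable - even though the backported package may already be patched.")
```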

Long Term Support, LTS, FTW?

  • If backporting patches is one of the main culprits behind version inaccuracy, why is it so commonplace?

One common reason for this is the rather outdated idea of Long Term Support, LTS. In practice this means that the provider, be it a Linux distribution or a commercial company, promises its users a stable platform that they can depend on staying put for, say, a decade.

I would argue that this idea too is outdated and needs to be changed, since the threat has evolved. Ten years in Internet time is an epoch and so many things can and will happen in that time. If you are using an LTS stack, then you will face software rot and difficulties in running modern applications on it.

Long gone should be the LAMP days, where you had to stick with a specific major version because otherwise your dependency house of cards would crumble. If you still cling to that approach, you’ll likely encounter increasing challenges, as the only constant today is change – both in supported version numbers and the speed at which things evolve.

Only the Two Latest Releases Are Supported

That is why many providers only support the latest two releases, for example. This applies to both operating systems and the software applications running on them – even legacy software such as WordPress. Reconciling these two realities, long term support and constant change, is not achievable unless the long term support model itself starts rolling with the tide. This has already started happening in the operating system space, and for some it has always been the case. For instance, OpenBSD supports only the two most recent releases, with a new version being issued every six months.

Even if my example is a bit esoteric for most people, it is a sound paradigm for building security into your software stack. Far fewer things die out in six months of Internet time than in a decade.

An Idealistic End to the Debacle

One of the first things that will need to evolve is compliance-based security and certification that isn’t able to cope with constant change. For example, if certification takes years, then your certified end product will be fit for the museum.

Another thing that hinders security from being built in is the idea of security through obscurity. Vulnerability management through scanning your attack surface and trusting the version numbers in the banners is broken beyond repair.

Here’s a good example of a machine that is running an obsolete version of CentOS 7 and, according to Shodan, exposes 581 vulnerabilities. This number is of course inaccurate: even if we know that this server is likely to be vulnerable, which of the vulnerabilities have been patched during its life cycle and which have not? Moreover, which of the vulnerabilities are actually exploitable? And finally, which vulnerabilities have emerged after June 2024, the official EOL date for this specific CentOS version?

Intuitively we know that there most likely are actual vulnerabilities on this host, but which ones are they, and which of those 581 should we pay attention to first?

OK, I agree, the only viable solution for this machine is redeployment because of its EOL status. If, however, the system were still supported and you had to help the owner address the vulnerabilities without trying to exploit them, how easy would that be based on this information alone?

I’m not blaming Shodan, Nessus, or any other scanners out there. My frustration is directed at the real issue: static deployments and the outdated belief in security through obscurity are what prevent white hats from effectively helping defenders based on version numbers in software banners.

DevSecOps is an approach to software development that integrates security practices directly into the DevOps process. It emphasizes the need for continuous, automated security checks throughout the entire software development lifecycle, from coding and building to testing and deployment, ensuring that security is a shared responsibility across all stages and by all team members. This approach aims to enhance the overall security posture of the software without slowing down the development and deployment processes.

In other words, changes need to happen on both the developer and cybersecurity sides of the fence. So in a sense I am promoting a DevSecOps-oriented approach, but strictly speaking that only applies to the parties putting software components together for users. As a sysadmin, your most difficult task is to choose your poisons wisely and to be allowed to do so as you see fit. That unfortunately often isn’t a business reality either. At minimum, it is a good idea to identify your legacy systems and not expose them directly to the Internet.



