How to Identify Attack Surface that Must be Addressed
Sometime in January 2022, I promised Lari that I would write up some thoughts on attack surface management. I thought I’d perhaps have material for a single blog post. Now, two posts later, we still have to dig into some of the most difficult problems in the process. If you haven’t read my earlier posts, the first covered asset discovery and the second focused on exposure assessment.
If you have adopted an attack surface identification process such as the one outlined in my previous posts, by this point you will have a lot of data.
In a larger assignment, I usually end up using a couple of online services, half a dozen open source tools, and numerous ad-hoc scripts. The result is a hot mess of JSON files, tool-specific text files, files with HTTP headers, and HTML content. Some integrated scanning frameworks or third-party services might make things easier for you.
For an attack surface reduction assignment, the next step is to summarise the data and arrive at decisions or recommendations based on the findings.
This leads us to answering difficult questions, such as:
- What kinds of services should there be in the network in the first place, i.e. what does the security policy say about permitted network services?
- Are all the services there for a reason?
- Is there a business case or a justification for each of them?
- Which of the identified services have known vulnerabilities in them?
- If we won’t have time to address all the findings, which ones do we prioritise?
Let’s go through these questions in roughly the stated order. I don’t claim to have all the answers, but I hope to be able to point you in the right direction.
Let’s Leverage Your Information Security Policy
By the dictionary definition, an information security policy defines what it means for an organisation to be secure.
In my experience, policies can range between:
- high-level statements of intent
- implicit agreements amongst developers and administrators on how systems are built
- rigid multilevel policy frameworks dictated by compliance requirements and governance standards.
In any case, policies can be your friend as they sometimes give you both the rules for systems exposure and the leverage within an organisation to actually make security improvements.
Things to Look For in a Policy
In terms of attack surface, look for the following kinds of provisions in an information security policy:
- Access to services should be logged and/or protected through access-control methods
- Multifactor authentication is required in the following situations: …
- All information transmitted from [communications equipment] must be encrypted by a strong encryption algorithm to minimise the risks of eavesdropping on the communications and man-in-the-middle attacks
- All servers and applications using SSL or TLS must have the certificates signed by a known, trusted provider.
- Verify that the usage policies include a list of company-approved products.
I grabbed the examples above from published policies, such as the SANS Security Policy Templates.
Network Information Security Should Not Be a No Man’s Land
If you don’t find any policies in place that are related to exposed systems and services, now would be a good place to start considering them. Looking at simple scan results might already highlight the need for them. But as always, the introduction of new policies will get more difficult when the systems are already in production. As a side note, answering some of the policy-related questions might require further scanning or gathering more data, for example on discovered X.509 certificates.
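As an example of that further data gathering, once you have collected X.509 certificate details, even a small check can flag policy-relevant findings such as imminent expiry. The sketch below assumes the `notAfter` date string format that Python’s `ssl.SSLSocket.getpeercert()` returns; the helper name is my own.

```python
import ssl
import time


def expires_soon(not_after: str, days: int = 30) -> bool:
    """Flag a certificate whose notAfter date (in the format returned by
    ssl.SSLSocket.getpeercert(), e.g. 'Jun  1 12:00:00 2030 GMT')
    falls within the given number of days from now."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry - time.time() < days * 86400
```

The same collected certificate data can also answer the policy question about trusted issuers, by inspecting the certificate chain rather than the dates.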
Looking at any of the applicable policies, chances are that you already have some findings that lead you to suspect policy violations. You also probably have suspicious technical findings such as different-looking versions of the same products and API endpoints that seem overly promiscuous. There might be good reasons for all of this, but it’s difficult to tell without knowing the context of the system. This is where system owners are worth their weight in gold.
ICT Systems Need Love and Care (Patching)
All systems offering open services need regular maintenance in order to remain safe. This might seem self-evident, but unfortunately my experience is that most people consider security administration a tedious and risky task. They hope that someone else (the famous bicycle repair man) will take care of it.
As the proverb goes:
If it ain’t broke, don’t fix it.
Unfortunately, this train of thought is all too common in how organisations approach security patching in general.
Gain Insights through Clear Responsibilities
In the first post of this series, I used the fictitious Acme Ltd as an example of a company that had lost track of their assets. Going back to that example, it’s inevitable that they would have also lost track of their security responsibilities. After all, fixing running systems is not a revenue-generating activity and it is easy to ignore as long as everything is running smoothly.
In my experience, taking care of security sometimes gets assigned to a single heroic figure that eventually burns out and/or finds a better employer. If internal roles or outsourcing do not explicitly include security responsibilities, tasks like patching often become a hot potato. It’s indicative of a bad security culture if more time is spent on arguing over responsibilities than implementing any actual fixes.
Why am I dwelling on this point yet again? It’s a question of motivation. Exposing security problems within your organisation is a virtue in itself. However, if you cannot get any of the problems fixed within a reasonable time, you will quickly tire of the pointless exercise and find something more rewarding to spend your time on.
In contrast, a dutiful system owner will be able to explain:
- which services are legitimate
- what is their purpose
- what data each system handles
- which actions it performs
and they will be able to react to any findings when necessary.
Armed with this knowledge, you can already form a common understanding of the threat model of a given system. By now, you should also be able to form an informed opinion of the risk a given finding poses to a system or service.
How to Find Out If You Are Vulnerable
In the scanning efforts so far, we have mostly talked about scanning for software versions, even if I have mentioned vulnerability testing tools in passing. As vulnerabilities are fixed by introducing new versions that include patches, identifying software versions is a good start to determine if a service has known vulnerabilities.
Many services helpfully publish version information in their responses, although the form varies widely. Sometimes it is included in a protocol banner or header, but it can also appear in the response content or in the reply to a dedicated query. As an example, the Atlassian Confluence wiki server adds a version string to the footer of its HTML pages, whereas the Microsoft Exchange email server returns it in an XML reply to a query on a specific path. Just as likely, though, you won’t get this information at all.
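As a rough illustration of this variety, a naive pattern match can pull version numbers out of many banners. The sample banners below are typical shapes, not captured data, and in practice you will want per-product patterns rather than one catch-all regex.

```python
import re
from typing import Optional

# A naive matcher: a dotted number following '/', '_', ' ' or '('.
# Real products need dedicated patterns; this is only a sketch.
VERSION_RE = re.compile(r"[/_ (](\d+\.\d+[\w.]*)")


def extract_version(banner: str) -> Optional[str]:
    """Return the first version-looking token in a banner, if any."""
    m = VERSION_RE.search(banner)
    return m.group(1) if m else None
```

For example, this pulls "1.18.0" out of "Server: nginx/1.18.0" and "8.2p1" out of an OpenSSH banner, but a hardened service that strips versions will simply yield nothing.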
Version Masking FTW, but for Whom, and Can You Work Around It?
Configuring services to omit version information has long been considered a good operational security (OPSEC) measure. In my opinion, it currently harms defenders more than attackers. When version information is not readily available, defenders have to go the extra mile to determine whether a system has been patched, while all the attackers need to do is try any and all exploits at their disposal against the service anyway.
In some cases, the version information may not reflect the actual patch status of the service. As an example, applied hotfixes and emergency patches may not change the reported version number. This cannot be considered good practice in the current threat environment. Similarly, open source operating system distributions that backport security patches may also report misleading version numbers.
Even if you don’t have any version information, vulnerability can in some cases be determined with a proof-of-concept exploit. More often than not, these exploits have been developed with remote code execution in mind, in which case using them is generally not advisable. In some cases, though, there are limited exploit scripts that can safely be used to determine whether the vulnerability can be exploited at all. Bear in mind that some published exploit scripts themselves contain backdoors for attackers, so you really need to know what you’re doing if you decide to take this route.
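When version information is available and trustworthy, a plain version comparison is a much safer first pass than running exploit code. A minimal sketch, assuming simple dotted version strings and no backported patches (the caveat discussed above):

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn '9.0.30' into (9, 0, 30) for ordered comparison.
    Only handles plain dotted numbers; suffixes like '8.2p1'
    need product-specific handling."""
    return tuple(int(part) for part in v.split("."))


def known_vulnerable(found: str, fixed_in: str) -> bool:
    """A version below the first fixed release is presumed vulnerable,
    assuming the vendor did not backport the patch."""
    return parse_version(found) < parse_version(fixed_in)
```

Tuple comparison gives the right ordering for free, so "9.0.30" sorts below "9.0.31" while "10.0.1" does not.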
Prioritise, Even Roughly, When You Must
If there are more things to fix than you have time for, how can you prioritise your findings? Tomi Tuominen stated very elegantly in his Sphere 22 talk that companies should focus on threats that put their operations or their very existence at risk. This is a risk assessment process that you must run together with your system owners, business leaders and risk management.
As a general rule of thumb, I’d say that the rough order of prioritisation should be as follows:
- Critical violations of company policies, such as missing authentication to sensitive data or systems.
- Known vulnerabilities, especially ones with publicised exploit code. Resources such as the CISA Known Exploited Vulnerability (KEV) catalogue or FIRST Exploit Prediction Scoring System (EPSS) data are your friends here.
- Other major policy violations might go here, such as services that do not employ up-to-date security features (especially in terms of encryption or authentication).
- Software versions that are no longer supported. If they are not yet vulnerable, they will be, as researchers find new problems with the software itself or its supply chain (libraries, frameworks etc).
- Risky services. As you can see, here is where it starts getting a bit murky. This does include management interfaces, which really are a big factor in current attacks.
- Unnecessarily exposed services. If it’s not strictly necessary for your business, it just increases your attack surface. Additionally, services such as network time (NTP) and name resolution (DNS) can be abused in reflected distributed denial of service (DDoS) attacks against third parties. You should consider limiting Internet access to these kinds of services.
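The ordering above can be approximated as a sort key over your findings. The field names below are this sketch’s own invention: `kev` marks presence in the CISA KEV catalogue, `epss` is the FIRST EPSS probability (0..1), and `policy_violation` flags a critical policy breach.

```python
# Hypothetical findings pulled from the data-gathering phase.
findings = [
    {"id": "open-ntp", "policy_violation": False, "kev": False, "epss": 0.02},
    {"id": "no-auth-admin", "policy_violation": True, "kev": False, "epss": 0.0},
    {"id": "CVE-2021-26855", "policy_violation": False, "kev": True, "epss": 0.97},
]


def priority(finding: dict) -> tuple:
    # Sort key: policy breaches first, then KEV entries, then by EPSS score.
    # False sorts before True, hence the negations.
    return (
        not finding["policy_violation"],
        not finding["kev"],
        -finding["epss"],
    )


ranked = sorted(findings, key=priority)
```

This is deliberately crude; the real ranking should come out of the risk discussion with system owners, not out of a one-liner.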
Try the Proper Fix First, but Sometimes You Must Compromise
For an infosec specialist like me, the proper way to address undue attack surface is to either patch the service to the latest supported version, to introduce a more secure alternative to a risky service, or to take it down altogether. This way of thinking does keep things simple. Threats on the Internet change quickly. You never know when an attacker finds a way to misuse an issue that was previously considered low severity, or to daisy chain it with other similar issues for a catastrophic effect.
Consider the ProxyShell and ProxyNotShell vulnerabilities that affect Microsoft Exchange. Both were chains of several vulnerabilities that, used together, resulted in total attacker control of the target. Many organisations thought they would not be affected, as they had already moved to the cloud-based Exchange Online. For some configurations, however, the best practice was still to retain an on-premises Exchange server. There is no need to expose it to the Internet, but many organisations have neglected to remove access to it. In the case of ProxyNotShell, the mitigation rules were improved five times, which made it hard for administrators to keep up with them.
Post Auth Vulnerabilities Need Your Attention as Well
Many organisations deprioritise fixing vulnerabilities whose exploitation requires special access to the resource or valid user credentials. This only helps them until the first client machine is compromised or the first user reuses their password. Weak passwords are quickly discovered by attackers with techniques such as phishing, account brute forcing and trying known passwords related to a leaked email account. Similarly, compromised client machines can be used to attack systems in any internal or otherwise limited networks.
Mitigate if You Cannot Fix
In my experience, the possibility to fix all the discovered issues is a rare treat in real environments. Instead, the associated risks are mitigated by a number of means. Access to the vulnerable resource can be limited, or different forms of filtering and monitoring can be put in place to hamper exploitation attempts and improve response to them. At the end of the day, risk reduction efforts should translate into increased resources or technical capabilities, rather than wishful thinking.
What have we learned?
Having gone through all these challenges in a series of three write-ups, it is clearer, at least to me, why attackers are having a field day with vulnerabilities. Trying to keep abreast of all the code running in your system is difficult enough by itself. The problem gets exponentially more difficult with technical debt, which you accumulate if you don’t address it every chance you get. To top things off, it is often hard for organisations to find incentives to invest time and money to maintain their security posture and even harder to improve it.
Nevertheless, the best organisations have found a recipe for minimising the problems inevitably caused by their Internet exposure, regardless of their size or industry. I hope these write-ups have managed to whet your appetite for some of the secret ingredients used in attack surface management.
Give Us Feedback or Subscribe to Our Newsletter
If this post pushed your buttons one way or another, then please give us some feedback below. The easiest way to make sure that you will not miss a post is to subscribe to our monthly newsletter. We will not spam you with frivolous marketing messages either, nor share your contact details with nefarious marketing people. 😉