Practical Methods for Assessing Your External Attack Surface


Attackers Window Shop for Your Network Attack Surface

The previous post Public Exposure by Lari Huttunen made good points on reducing your external attack surface. The subject has become more important than ever, as attackers' interest in network-based attacks has grown in recent years. A host of vulnerabilities in internet-facing services have been successfully used by attackers to gain an initial foothold in their victims' networks.

The trend has not gone unnoticed. As an example, the US Cybersecurity and Infrastructure Security Agency (CISA) has issued multiple directives to mitigate network-based risks in the last three years. Their rationale:

the average time between discovery and exploitation of a vulnerability is decreasing as today’s adversaries are more skilled, persistent, and able to exploit known vulnerabilities.

At the same time, many if not most organisations struggle with asset and vulnerability management. They are simply not aware of all the systems they own and cannot keep up with the pace at which patches are published. It is often said that patching every vulnerability is impossible.

The Importance of Being Patched

A majority of the vulnerability-related risk can be mitigated just by patching the vulnerabilities currently used by attackers. As another example from CISA, focusing just on Known Exploited Vulnerabilities may help in reducing the most significant risks while keeping the workload tolerable:

Based on historical data, we anticipate that less than 4% of the total number of vulnerabilities identified in a calendar year will be escalated to the Known Exploited Vulnerability (KEV) catalog.

Less Attack Surface Means Less Patching

Below, I want to help you get started with the methods and sources of information that you may need to start reducing your public exposure.

This work can basically be divided into three stages:

  1. Identify your domain and IP address assets.
  2. Research the attack surface, i.e. open services, related to these assets.
  3. Determine whether there is something that needs fixing within these services.

After that, the fixes typically need prioritisation, for example by fixing the widely exploited vulnerabilities first. This blog post, however, will concentrate on finding assets, the first stage in the work plan.

The First Step of Attack Surface Reduction - Finding Your Assets

Some of the most effective means for finding assets belonging to an organisation are not technical. One method you might easily disregard is simply interviewing employees. In addition to the assets themselves, you can get a head start in asset prioritisation by asking which assets, systems or products they are most concerned about.

Another non-technical approach to asset discovery is to follow the money. Invoices related to ICT purchases are a good starting point.

  • Which services or licences are you paying for?
  • Are there contracts that you have signed to agree on the implementation of these services?

These kinds of questions should be easy to answer, but your mileage may vary. The documentation you find in this manner will probably lack details on the assets in use, but it will give you better ideas on where to start looking for them.

Using Public Records For Asset Discovery

The benefit of technical asset tracking measures is that they are usually simple to automate and run in a continuous fashion. To start off this search, you will need a list of names to search for. The names should include any brands related to your organisation, including those of any acquired companies.

The simplest place to start the search is the public records of digital assets. Data on IP addresses and autonomous system (AS) names is easily searchable through the public WHOIS databases. There are some limitations to using this data, however, as it is often unmaintained, outdated and riddled with typos. In practice, when you buy services from internet service or hosting providers, these records may show the provider instead of your organisation. In that case, you will need to find out which of the provider's addresses you are using by resolving the related domain names or enumerating the IP addresses from the actual servers, which is often easier said than done.
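Once you have collected candidate netblocks from the WHOIS records, a small script can attribute discovered IP addresses to them. The following is a minimal sketch using Python's standard ipaddress module; the netblocks are documentation ranges standing in for a real organisation's allocations.

```python
import ipaddress

# Hypothetical netblocks found via WHOIS lookups; IETF documentation
# ranges stand in for real allocations here.
OWN_NETBLOCKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def attribute_ips(candidate_ips):
    """Split candidate IPs into those inside known netblocks and the rest."""
    owned, unknown = [], []
    for raw in candidate_ips:
        addr = ipaddress.ip_address(raw)
        if any(addr in net for net in OWN_NETBLOCKS):
            owned.append(raw)
        else:
            unknown.append(raw)
    return owned, unknown

owned, unknown = attribute_ips(["192.0.2.17", "198.51.100.9", "2001:db8::1"])
print(owned)    # ['192.0.2.17', '2001:db8::1']
print(unknown)  # ['198.51.100.9']
```

The addresses that fall outside your known netblocks are exactly the ones that warrant a closer look: they may belong to hosting providers or third parties acting on your behalf.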

Discovering Assets Through Domain Name Enumeration

In the case of domain name discovery, the simplest case is when you are in direct control of your domains, which is rare in practice. If your organisation were to exclusively use its own domain name servers, their zone files would identify all your domains.
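If you do control some name servers, even a crude pass over their zone files yields an initial domain inventory. Below is a minimal sketch that extracts fully qualified owner names from a simplified BIND-style zone; the sample zone is entirely made up, and real zone files have continuation lines and other features this does not handle.

```python
# Hypothetical zone for the fictitious company.example domain.
SAMPLE_ZONE = """\
$ORIGIN company.example.
$TTL 3600
@            IN  SOA  ns1.company.example. hostmaster.company.example. (1 3600 600 86400 300)
@            IN  NS   ns1.company.example.
www          IN  A    192.0.2.10
support      IN  CNAME www
old-portal   IN  A    192.0.2.11
"""

def zone_names(zone_text, origin):
    """Collect fully qualified owner names from a simple BIND-style zone."""
    names = set()
    for line in zone_text.splitlines():
        line = line.split(";")[0].strip()      # drop comments
        if not line or line.startswith("$"):   # skip directives
            continue
        owner = line.split()[0]
        if owner == "@":
            names.add(origin)
        elif owner.endswith("."):
            names.add(owner.rstrip("."))
        else:
            names.add(f"{owner}.{origin}")
    return sorted(names)

print(zone_names(SAMPLE_ZONE, "company.example"))
```

Even this rough listing gives you concrete names to feed into the data-set searches described next.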

Often, the best practical way for finding domains owned by your organisation is to search through data sets available on the internet. Just looking through the results of search engines will give you a decent overview of the extent of your exposure. For example, the Google search site:company.example will give you results from the webpages of all the indexed subdomains. The sites for recent products and campaigns should be at the top of the results. Automating these searches might not be straightforward.

In addition, there are data sets and services concentrating purely on domain data. Some DNS top-level domains support searching for domains allocated to an organisation. Passive DNS databases are based on actual queries made by people, so they reflect current internet usage by a certain user population quite well. Some of the data can be used for free, while the price of others may vary wildly. Certificate transparency log entries are created whenever a certificate authority issues a certificate for a website. You will find domains registered by someone from your company by searching the logs for the related certificates.
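Certificate transparency search services typically return their results as JSON, which makes the name extraction easy to automate. The sketch below collects unique, non-wildcard names from entries shaped like the JSON output of crt.sh; the sample data is fabricated and the exact field names are an assumption, so check the live response format before relying on them.

```python
# Hypothetical entries modeled on crt.sh-style JSON output; the
# "name_value" field holds newline-separated names from a certificate.
SAMPLE = [
    {"common_name": "www.company.example",
     "name_value": "www.company.example\ncompany.example"},
    {"common_name": "shop.company.example",
     "name_value": "shop.company.example\n*.shop.company.example"},
]

def ct_domains(entries):
    """Collect unique, non-wildcard domain names from CT-log search results."""
    names = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")   # drop wildcard prefixes
            if name:
                names.add(name.lower())
    return sorted(names)

print(ct_domains(SAMPLE))  # ['company.example', 'shop.company.example', 'www.company.example']
```

Deduplicating and lowercasing matters in practice, because the same name tends to appear in many certificates over the years.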

Once you have enumerated a list of domains, you can try to discover more domain names and third-party services by performing active DNS queries. As an example, there are many tools for guessing subdomains based on word lists. Text records (TXT) are particularly useful, as they are used by many third-party services to ascertain domain ownership. If you see a TXT record related to Google, you can be fairly sure that Google services have at least been used at some point. Similarly, Sender Policy Framework (SPF) records show which services can send email on your behalf.
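As a concrete example of what those records can tell you, here is a small sketch that pulls sender hints out of an SPF record. The record string is a hypothetical one for the fictitious acme.example domain; the include targets are real, commonly seen SPF includes used for illustration.

```python
def spf_senders(txt_record):
    """Extract third-party sender hints (include, ip4, ip6 terms) from an SPF record."""
    hints = {"include": [], "ip4": [], "ip6": []}
    for term in txt_record.split():
        for key in hints:
            prefix = key + ":"
            if term.startswith(prefix):
                hints[key].append(term[len(prefix):])
    return hints

# Hypothetical TXT record for the fictitious acme.example domain.
record = "v=spf1 include:_spf.google.com include:mailgun.org ip4:192.0.2.0/24 -all"
print(spf_senders(record))
# {'include': ['_spf.google.com', 'mailgun.org'], 'ip4': ['192.0.2.0/24'], 'ip6': []}
```

Each include term points at a third-party service authorised to send mail on the domain's behalf, which is exactly the kind of shadow dependency you are trying to surface.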

Centralised Management as an Information Source

Of course, the systems themselves contain a lot of data on the IP addresses, domains and provided services. How this information can be discovered is highly dependent on the systems themselves and going through them is not feasible in the scope of this post.

You will probably come a long way just by going through any central management systems such as Active Directory, Puppet, Chef, Ansible, Saltstack, Docker, Kubernetes or Terraform. The management interfaces and APIs of cloud systems are also a gold mine of information related to assets and their network exposure. Also, exceptions in firewall rules might give you indications of third-party services, which you otherwise would have missed.
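As one example of mining central management systems, Kubernetes will happily tell you which services are exposed to the outside world. The sketch below filters output shaped like `kubectl get svc -A -o json` for services with an external load-balancer address; the sample data is made up, though the field names follow the Kubernetes Service API.

```python
import json

# Hypothetical excerpt in the shape of `kubectl get svc -A -o json`.
SAMPLE = json.loads("""{
  "items": [
    {"metadata": {"namespace": "web", "name": "frontend"},
     "spec": {"type": "LoadBalancer"},
     "status": {"loadBalancer": {"ingress": [{"ip": "203.0.113.7"}]}}},
    {"metadata": {"namespace": "web", "name": "db"},
     "spec": {"type": "ClusterIP"},
     "status": {"loadBalancer": {}}}
  ]
}""")

def exposed_services(svc_list):
    """Return (namespace/name, address) pairs for externally reachable services."""
    exposed = []
    for svc in svc_list["items"]:
        for ingress in svc["status"].get("loadBalancer", {}).get("ingress", []):
            addr = ingress.get("ip") or ingress.get("hostname")
            meta = svc["metadata"]
            exposed.append((f"{meta['namespace']}/{meta['name']}", addr))
    return exposed

print(exposed_services(SAMPLE))  # [('web/frontend', '203.0.113.7')]
```

Similar one-off scripts against cloud provider APIs or configuration management inventories tend to pay for themselves quickly.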

Digital Asset Tracking Services to the Rescue

Instead of doing all the hard work of asset discovery yourself, you can also buy this information. Purchasing packaged asset information likely belonging to your organisation may help you kickstart the asset identification work. You should still take proper care to weed out any potential false positives in the identified assets. For example, it is easy to deduce that a server is under your responsibility from the fact that one of your domain names points to it. On the other hand, the server behind the domain name might be related to a third-party service you are using, such as load balancing, hosting or content delivery. These are usually resources that are shared among multiple customers, and not all of the observations related to them will be relevant to you.

The siege of Château Gaillard between September 1203 and March 1204 is a good example of determined attackers exploiting a weakness in the defenders' threat model. As the story has it, a French soldier clambered up the latrine chute and let in his compatriots to take over the castle. Photo (c) Lari Huttunen.

An Example - Finding out the Internet Exposure of Acme Ltd

One could easily think that it’s simple to map the internet exposure of an organisation. In my experience, it rarely is. Let’s illustrate the case with a fictitious Acme Ltd, a medium-sized technology company, incorporated, say, 10-20 years ago. You are charged with finding out the internet exposure of Acme. Below I’ll walk you through how the story might go from this starting point.

Web Services Under Your Control - or Not?

Let’s start by looking at the company web pages, which are at:

https://acme.example/

and the customer support portal at

https://support.acme.example/.

Simple, right?

Maybe not, as you quickly find that both the web pages and portal point to a third-party content delivery network and both of them use separate backend services. The company web pages point to a server of a third-party hosting company, whereas the support portal is in a small server room still operated by Acme Ltd.

Nobody seems to know the background of these arrangements, as the original designers of these systems have long since retired. The current IT team refers to these systems as legacy and state that they will be given an overhaul as soon as they have time from their other tasks. The first plans for this overhaul were drafted two years ago, so it seems that there are quite a lot of important tasks at hand.

“Rabid” Development

So if the support portal is legacy, what is the current state of the art? Why, all the new product lines use cloud-based management and all the product interfaces are developed as cloud-native applications.

Except Acme Anvil, of course, the brand new internet-connected anvil, which is produced by an Acme subsidiary in Lichtenburg. The Acme Anvil uses another cloud provider and a completely different application stack as its backend. You try to raise a discussion among the developers on the development and testing environments that they use, but few seem willing to talk to you. You are left with the impression that each product team, or even each individual developer, uses whichever systems they find useful.

Nobody Expects the Legacy

Going through the product catalogue, it becomes apparent that many of the Acme offerings are actually rebranded versions of products formerly sold under the brand of Bonk Business. Consequently, you find an article in the corporate news section that Acme Ltd acquired Bonk Business five years ago.

How about the internet exposure of these products? The founders of Bonk have by now left Acme, but their old documentation mentions that the products connect back to backend.bonk.example over UDP on port 1337. After many false turns, you find the server for backend.bonk.example, still alive and kicking in a broom closet in the old Bonk offices. At least the resident Acme employees are relieved to finally find the source of the background hum that was driving them crazy.

What About the Marketing Team?

Looking at the server, it turns out that it still hosts marketing materials for Bonk products (transmuter.bonk.example and buybonk.example). The marketing department states that they run all their new marketing campaigns in the company cloud, except for the Lichtenburg branch, which uses the other cloud provider mentioned above.

When going through marketing emails sent in recent months, it is easy to discover a dozen or so campaign pages that are not hosted in either of the company clouds. Some of them are:

  • hosted by advertising agencies
  • within yet another cloud provider
  • on traditional hosting platforms.

Some of the pages are defunct and one of them, acmesignal.example, is even parked by the domain registrar. You try to find out who is responsible for the maintenance of these domains and systems, but to no avail. By looking at the SPF records of the marketing domains, you find some hints of the third-party services used for sending out marketing materials.

How to do Better?

As detailed in the Acme story above, there are many kinds of difficulties related to public exposure and this blog post is by no means a comprehensive account of all of them.

Running an organisation is a complex affair, and tracking digital assets is simply not at the top of anyone's priority list. Let's face it, keeping track of things is hard: staff turnover, lack of documentation, mergers and acquisitions, and changes in technical platforms and product management are, in my experience, typical situations where assets fall off the grid.

Naturally, the described situation is not an ideal starting point. Going through the technical debt accrued over the years is only the first step in asset management. A longer-term solution for keeping abreast of public exposure would require wide changes to how new services are introduced. This includes getting management support, as well as fixing issues in processes and organisational culture.

What’s Next?

This blog post started off by making the case for attack surface reduction. Attackers are searching through your assets for juicy flaws even as you read these lines. The first step in reducing your attack surface is to make sure that you have the means to catalogue all the assets that you own. The example case illustrates some reasons, many of them non-technical, why you might have lost track of some of your business-critical assets. In my next write-ups, I will move on from asset identification towards researching the external attack surface.

