Tools for Sysadmins - Managing IT Assets the Right Way
Managing IT Assets: Where do I begin?
In the previous write-up, I looked at IT asset management from a bird’s eye view. Below, we will dive into the tools and techniques for managing IT assets from a technical perspective.
Which tool is the most important one to have?
First and foremost, what you need to have is knowledge.
This naturally includes the actual knowledge of your profession, but also knowledge of the resources and environments you have to work in.
IT asset management, in a minimal technical sense, should answer these questions:
- What do you have?
- How can it be accessed?
- Where is it?
- What does it do?
Access is a wider and more sensitive question; documenting access procedures together with passwords and PIN codes is not something that should be done as an asset management function, or even be considered. On the other hand, it’s a question worth keeping in mind while finding out what you have: are there access methods or routes that shouldn’t be there? As the network establishes the basic structure of the environment, you will need documentation on it.
Sysadmin Tools for IT Asset Discovery
Establishing basic situational awareness differs from admin to admin, but in a pinch, inspecting ARP tables and running ping and port scans with nmap on the network segment under inspection usually gives quite a good idea of what you’re working with. Larger networks simply mean more manual work, most of which should be scriptable.
Pro tip: scan the same ports with nmap from multiple different network segments to make sure that you have your firewall rules in order.
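The workflow above can be sketched roughly as follows. This is a minimal sketch, not a prescription: the segment and filenames are assumptions, and a captured sample of nmap’s greppable output stands in for the live command so the parsing step is reproducible.

```shell
# Live discovery would look like this (-sn: ping scan, -oG -: greppable output):
# nmap -sn -oG - 192.168.1.0/24 > ping-sweep.txt

# Sample of nmap's greppable output format, standing in for the live scan:
printf 'Host: 192.168.1.5 ()\tStatus: Up\nHost: 192.168.1.9 ()\tStatus: Down\n' > ping-sweep.txt

# Keep only the hosts that answered:
awk '/Status: Up/ {print $2}' ping-sweep.txt > live-hosts.txt
cat live-hosts.txt

# Then port-scan the responders, ideally once from each relevant segment:
# nmap -sV -iL live-hosts.txt -oN scan-from-segment-A.txt
```

Running the follow-up scan separately from each segment is exactly the firewall sanity check described above: the result files should differ only where your rules say they should.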
There are automated and refined products out there for everything, but obtaining and deploying one in a hurry is not something you want to waste time on. Pretty much every operating system out there ships with a sufficient, or even excellent, set of tools for taking a look at the system and the network. Since these tools are typically already present, getting to know them is beneficial. Just like in photography, the best camera is the one you have with you when you need it. Initially, plain text files that you can conveniently grep and awk are probably the best way to start building up an inventory. MAC addresses from a router’s ARP table say something about the hardware, even if the hosts themselves like to keep quiet when probed. Switches are naturally a wonderful source of information, as long as you can harvest that information conveniently.
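As a sketch of that grep-and-awk approach, here is one way to turn a neighbour (ARP) table into a first plain-text inventory. The sample table is an assumption standing in for the live `ip neigh show` output on Linux.

```shell
# Live command on Linux:
# ip neigh show > neigh.txt
# A sample stands in here so the pipeline is reproducible:
cat > neigh.txt <<'EOF'
192.168.1.10 dev eth0 lladdr aa:bb:cc:11:22:33 REACHABLE
192.168.1.23 dev eth0 lladdr aa:bb:cc:44:55:66 STALE
EOF

# Keep IP and MAC as a two-column file you can conveniently grep later.
# The first three octets of the MAC (the OUI) hint at the hardware vendor.
awk '/lladdr/ {print $1, $5}' neigh.txt | sort > inventory.txt
cat inventory.txt
```

Even quiet hosts that drop pings still show up here once anything on the segment has talked to them.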
You might end up with something that is almost equal, or even identical, to what your DHCP leases tell you, or you might not. It pays to confirm; overlapping information doesn’t bite, but it might reveal something forgotten or surprising. Of the basic tools for figuring things out, netstat (or ss on modern Linux systems), combined with checking the network interface configurations and routing table(s), gets you well underway in mapping the network environments and hosts that a specific host is or has been communicating with.
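A quick sketch of that mapping step: summarise which remote peers a host is actually talking to. The connection table below is a captured sample standing in for the live `ss` output, so the summary step runs as-is.

```shell
# Live commands on modern Linux:
# ip addr show                          # interface configuration
# ip route show                         # routing table(s)
# ss -tn state established > conns.txt  # current TCP conversations
# A sample of ss output stands in here:
cat > conns.txt <<'EOF'
Recv-Q Send-Q Local Address:Port Peer Address:Port
0      0      10.0.0.5:44312     93.184.216.34:443
0      0      10.0.0.5:51020     93.184.216.34:443
EOF

# Count connections per remote peer (field 4, with the :port stripped):
awk 'NR > 1 {split($4, a, ":"); print a[1]}' conns.txt | sort | uniq -c | sort -rn
```

The same pattern works on historical data too, if you are already sampling connection tables into text files.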
Cloud Resources Management
You don’t always have such close access to your resources. In these cases, you should at least focus on having a convenient way of getting an inventory of what you have without going through the tedious process of using some web service and doing a copy/paste marathon. If your service provider doesn’t offer any usable API or ready-made scriptable tools, you’re likely heading into trouble with your operation. If you’re now in the process of deciding or reviewing your service provider platforms, take into account how convenient the tools are to install and use. It might save you from plenty of stress in the future. Getting an easily parsable JSON list of assets and their details can simplify life and reduce the risk of operative mistakes by, for example, creating an early warning alert on expiry. XML is fine, naturally, if you dislike human readability.
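As a sketch of that early-warning idea, here is an expiry check over a hypothetical JSON asset list. The field names (`name`, `expires`) are assumptions; your provider’s API will use its own schema.

```shell
# Hypothetical asset list, standing in for a real API response:
cat > assets.json <<'EOF'
[
  {"name": "web-01", "expires": "2024-01-15"},
  {"name": "db-01",  "expires": "2030-06-01"}
]
EOF

# Flag anything expiring before a cutoff. ISO 8601 dates compare correctly
# as plain strings, so no date arithmetic is needed for a simple alert.
jq -r --arg cutoff "2025-01-01" \
   '.[] | select(.expires < $cutoff) | .name' assets.json
```

Wire a query like this into cron and you have the expiry alert described above in a handful of lines.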
Monitoring IT Infrastructure
Once you have documented the planned state of things, you have a good basis for establishing monitoring. While monitoring doesn’t exactly feel like asset management, it plays a role in providing near real-time status of your assets (at least the most important ones), and metrics are always nice to have.
Few solutions outside of architectural planning tools and documentation offer better possibilities for modeling service dependencies and, in some cases, even simulating the effects of service outages (not all monitoring tools do that either). The catch is that the dependencies have to be modeled correctly for the simulation to provide meaningful results; knowledge and expertise are required.
Hostiles and Relics of a Forgotten Past
Along with the knowledge of what you have, you gain the means to determine what you shouldn’t have. If there are extra services open for access, they might be unintended and therefore unmanaged routes for intruders, or even set up by hostile parties. Network traffic sampling, system log monitoring and analysis, and simply checking which ports the hosts listen on are all valuable methods for charting the servers and services.
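The listening-port check can be sketched as a diff between a documented baseline and reality. The baseline and the sample of live output below are assumptions; in practice, keep the baseline under version control and feed in real `ss` output.

```shell
# What should be listening (assumption: your documented baseline):
printf '22\n80\n443\n' | sort > expected-ports.txt

# Live command on Linux (-t TCP, -l listening, -n numeric, -H no header):
# ss -tlnH | awk '{n = split($4, a, ":"); print a[n]}' | sort -u > actual-ports.txt
# A sample stands in here:
printf '22\n443\n8080\n' | sort > actual-ports.txt

# Listening but undocumented: a candidate rogue service or intruder route.
comm -13 expected-ports.txt actual-ports.txt
# Documented but not listening: a possibly broken or missing service.
comm -23 expected-ports.txt actual-ports.txt
```

Note that `comm` requires both inputs sorted the same way, hence the explicit sorts.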
Get to know your systems and their habitat.
Naturally, you have to be armed with actual intimate knowledge of the services to be able to spot the anomalies. The ability to click a button and wave hands while making noise, waiting for software to work its magic, is simply not a sufficient skill set for determining the security and state of any setup. No application can tell the difference between a required server and an unknown extra one unless you, as an operator, can set the criteria.
First of all, price is not an indication of value.
Even a small company can need a surprisingly wide palette of tools, let alone a medium-sized or multinational-scale operation. There are many brilliant pieces of software that are completely free to use. Some companies release their products for free and sell support to those who need it. The quality of a product simply can’t be extrapolated from its price tag.
I have a strong bias towards free and open source software. I also like to see people and companies being paid for their work, but I see this more as a question of convenience. Even individuals or small operations that are just starting out have access to full-featured tools for safe and effective operations. Paying for services is another matter: free services simply don’t exist, and you will pay for them one way or another. Supporting valuable software projects is something that should be seriously considered in every organization, as in reality a major part of current businesses rely on open source software. Think of it as an investment.
Good tools are OS-agnostic, open in the sense of being clear about their function, trustworthy in their code (open source), easy to adapt (scriptability, standard file formats, open APIs, etc.) and, most of all, available when you need them. Naturally, there are proprietary products whose code, or even access to it, will forever remain a secret, and not every product is an everyday tool in the first place.
Monitoring Tools in Networking
For keeping cabling, servers, devices, racks, networks and addresses in order, NetBox has proven itself over the years, steadily improving in features while remaining straightforward to use, at least when it comes to address and rack management.
For monitoring needs, Prometheus and Grafana are very nice when it comes to metrics and visualization. Nagios Core has been a great tool for functionality monitoring and has served as a base for many other monitoring tools; Nagios plugins are in fact used by many of them. The commercial Nagios offering expands well beyond the basic functions of Core into log and network analysis and visualizations, so it is perhaps something to look into, even if you’ve previously found Core somehow unsatisfactory. Tcpdump is invaluable for everyday diagnostic work, as is Nmap for the moments when you have to figure out what is actually accessible to you remotely. It isn’t at all uncommon for servers to offer very different sets of services depending on where you’re looking from. With security in mind, the view from the internet is very valuable indeed.
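As a sketch of that everyday tcpdump work: capture a short traffic sample and see who talks the most. The interface name is an assumption, and a captured sample stands in for the live capture so the summary step runs as-is.

```shell
# Live capture (-n: no name resolution, -q: quiet one-line output,
# -c: stop after a packet count; interface name is an assumption):
# tcpdump -i eth0 -n -q -c 500 ip > traffic.txt
# A captured sample stands in here:
cat > traffic.txt <<'EOF'
12:00:01.000000 IP 10.0.0.5.51514 > 93.184.216.34.443: tcp 0
12:00:01.100000 IP 10.0.0.9.40000 > 10.0.0.5.22: tcp 0
12:00:01.200000 IP 10.0.0.5.51514 > 93.184.216.34.443: tcp 0
EOF

# Top source hosts: field 3 is src.port, so strip the trailing .port:
awk '{sub(/\.[0-9]+$/, "", $3); print $3}' traffic.txt | sort | uniq -c | sort -rn
```

For anything heavier than a quick look, write the capture with `-w file.pcap` and analyse it offline instead of parsing text.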
Having an outside view of your assets and related security issues is very useful. At Arctic Security, we’ve put together an asset discovery and assessment service available to any organization free of charge. As always, go with something that you can live with. Having to heavily modify a tool, or use it for something it wasn’t intended for, is a fast route to headaches. Rather, put the effort into understanding the ideology of the product, making your decision and going with the flow if the tool suits your needs.
Expenses and Dependencies
In order to keep expenses at bay, build a palette of needs and see how you can cover them. You will likely find that you need to balance your budget. Amassing expenses over the years by always going with the latest and the greatest that marketers have to offer will result in a very high licensing cost stack and a looming budget-cutting sword. As with many things, a step back can provide a better view. Are things balanced? Depending on the operation, you will likely have some very expensive products, some moderately expensive ones and some that are barely measurable by their cost effect.
There are rarely silver bullets that solve all of your problems. If, however, you happen to encounter such a wonder, you are either very lucky or have left out some requirements. On the other hand, a sufficient solution is easier to reach than a perfect one. Balancing workload, software selection and needs is a complex, nonlinear equation. Luckily, a partial solution is acceptable here, and managing assets is only one small aspect of your everyday work.
The Security Aspect of IT Asset Management
Tracking your assets enables you to both eliminate aging assets when it’s time for them to go and keep purchased services from expiring. As you know what you have, you can quickly determine whether there is something you shouldn’t have, which is a great help in minimizing your attack surface. The admin, network specialist or whatever title they might carry in their organization will need an intimate understanding of both the systems and the environment in which those systems reside. Without understanding the mechanisms and relations, no amount of pretty reports or pie charts will help anyone.
Always keep in mind the different viewpoints: admins, network people, developers, users, janitors, guards, management and cleaning personnel all have some kind of view of the systems and services. Ignoring any of these views will open doors for an attacker. Plan for systems that lighten your daily workload, but keep your hands dirty enough to be able to do something meaningful in an emergency. Knowing your environment, having tools that genuinely support your work and keeping your workload manageable are the keys to keeping your systems safe and productive.
All in all, basic system tools are useful, powerful and give you plenty of crucial information. Understanding the functionality and information available with these tools forms a basis for evaluating and utilizing solutions that automate the gathering and visualization of this information. There is no software that makes anyone a specialist in any field. The human ability to comprehend, adapt and use tools is essential.
Give Us Feedback or Subscribe to Our Newsletter
If this post pushed your buttons one way or another, then please give us some feedback below. The easiest way to make sure that you will not miss a post is to subscribe to our monthly newsletter. We will not spam you with frivolous marketing messages either, nor share your contact details with nefarious marketing people. 😉