Linux incident response is an often-overlooked topic. There is a belief that the operating system is more “secure” than other platforms, but this is only partly true. The reality is that attackers compromise Linux machines on a regular basis and, while it isn’t yet the year of “Linux on the desktop”, it is very likely that a corporate webserver, database server or other customer-facing platform is running a variant of the OS. Added to this is the prevalence of Linux variants running in AWS or GCP.
As a result of this, it is inevitable that sooner or later you will need to respond to an incident where your open-source OS skills are put to the test.
The good news is that the basics are the same. PICERL (Preparation, Identification, Containment, Eradication, Recovery, Lessons Learned) is a good framework to use. Preparation really, really matters: build good processes and practise them regularly. Once you are into the incident, the normal workflow should still be followed. The big difference, however, is in how you do things. You may also find that a lot of the EDR tools you’d use in Windows DFIR aren’t available or simply don’t work. Even when they do, the data may be different.
A good example is “evidence of execution.” In a Windows environment, we have access to prefetch, shimcache and other useful data repositories. There is no real equivalent in Linux, which forces responders to be more inventive.
Linux Response – Preparation
As always in IR, if you get the preparation right, the response will work better. There are some key points you need to decide in advance because your decision will dictate how you respond.
The priority has to be your incident response plan. This needs to include who is responsible for the platform and who needs to be involved in the incident response team. If you have Linux admins, you probably need to include them as their knowledge will be invaluable. Your plan is also where you decide how you want your response to run and how much “evidential” care needs to be spent on the collection steps.
You also need to make sure your infrastructure is ready to help you respond. There are entire books on forensic readiness, but the key points to consider are:
- Sync time across the network. This is crucial if you want to be able to make sense of events.
- Normalise everything to UTC – this is vital if you are a global org or have endpoints in different timezones.
- Make sure logs are generated and sent to a SIEM (or equivalent) for review. Check that system, firewall, IDS, email and application logs are all being collected.
- Ensure backups are created on a regular basis, that they are tested for usability and that some copies are kept offline.
- Baseline. Baseline everything you can. Gold images, reference hashes of installed applications etc. Hash as much as possible – your future self will thank you. Of course, you need to maintain the hashes after patches.
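As an illustration of the baselining step, a reference hash set can be built with nothing more than find and sha256sum. This is a minimal sketch: the /tmp/baseline path is an assumption, so point it at whatever your evidence store dictates.

```shell
# Build a baseline of hashes for key binaries on a known-good host.
# /tmp/baseline is an assumed location - use your real evidence store.
mkdir -p /tmp/baseline
find /usr/bin -type f -exec sha256sum {} + > /tmp/baseline/bin-hashes.txt

# During an incident, re-check the binaries and show only mismatches:
sha256sum -c /tmp/baseline/bin-hashes.txt 2>/dev/null | grep -v ': OK$' || true
```

Remember to regenerate the baseline after every patch cycle, or the comparison will drown you in false positives.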
Linux DFIR – Responding!
Once you’ve triggered the incident process it is time to turn your plan into action. Just to reiterate, the high-level steps are basically the same on any platform and the workflow is reasonably straightforward. A good workflow is:
- Snapshot the scene/capture images
- Confirm the incident
- Analyse volatile information (typically memory)
- Analyse filesystem
- Build a timeline
- Carve deleted data & recover filesystem artefacts
- Close the incident (report / lessons learned)
In this blog post, we are going to look at some of the differences you need to consider when you run this workflow on a Linux host.
Confirming the incident
Incident response, on any OS, is a costly & resource-intensive activity. It is important that you limit the number of times you trigger full DFIR by thorough confirmation. When you respond, the first thing you want to do is find out what the attackers might have changed. In Linux, this includes, but isn’t limited to, unusual processes; hidden files & directories; altered system files; modified log entries and strange ports listening or with established connections.
Lots of this is easiest to find on the live file system. If you have an EDR tool which can give you access, this will help, but as part of your preparation phase you may need to plan for SSHing in and running commands directly. Where possible, taking a snapshot and mounting it is a better option.
Assuming you have access to the running system, start by looking at the running processes with ps.
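A quick way to do this is to sort by start time and show parentage, so that orphaned or recently started daemons stand out. The column selection here is a matter of preference, not a fixed recipe:

```shell
# Show every process with its parent, owner and start time, most
# recent last - unfamiliar or recently started daemons stand out.
ps -eo pid,ppid,user,stime,tty,cmd --sort=start_time

# The forest view makes odd parent/child relationships obvious
# (e.g. a shell spawned by a web server process):
ps auxf
```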
You can also use lsof as a way to find backdoors. Remember, both ps and lsof are noisy, so consider piping to less or using grep to find specifics. This can be very effective at finding subverted code, such as fork()ed processes which have been renamed.
COMMAND   PID USER FD TYPE DEVICE NODE NAME
smbd      871 root 6u  IPv4 9001   TCP  *:2003 (LISTEN)
smbd      871 root 6u  IPv4 9001   TCP  *:443 (LISTEN)
initd   11201 root 3u  IPv4 10112  TCP  10.10.10.1:64213->184.108.40.206:1111
initd   11201 root 9u  IPv4 10112  TCP  10.10.10.1:1111->220.127.116.11:4444
In the example above, the Samba daemon (smbd) is listening on strange ports – 2003 and 443. Also, the initd process has active TCP connections to two external IP addresses, again on suspicious ports.
Combining ps and lsof gives an incident responder the ability to drill deeply into what is running on the suspect system. In turn, this helps confirm that something is amiss.
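If lsof isn’t installed on the suspect host, the same raw data is available directly under /proc. A minimal sketch, inspecting the current shell purely so the example is self-contained – substitute the PID you are actually investigating:

```shell
# Substitute the PID under investigation; $$ (our own shell) is used
# here only so the sketch runs anywhere.
pid=$$

readlink /proc/$pid/exe                   # the binary actually running
tr '\0' ' ' < /proc/$pid/cmdline; echo    # command line as invoked
ls -l /proc/$pid/cwd                      # working directory
ls -l /proc/$pid/fd 2>/dev/null           # open files and sockets
```

A useful detail: if the binary has been deleted from disk while the process keeps running (a common backdoor trait), the exe link will show “(deleted)”.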
Linux Memory Analysis
Volatile data is crucial for incident responders. In practical terms, this means RAM as getting the actual cache data & registers is often too challenging to be realistic. We’ve talked about how important memory analysis is in the past, so we will assume that you understand the basics.
This post is about Linux and, unfortunately, it can be difficult to capture a useable memory sample and even harder to analyse it. The issue is largely down to having the correct “profile” so that your tools know what structures exist in memory. With Windows, tools like Volatility (2.x or older) rely on pre-built profiles. With Volatility 3 (and Rekall) a profile isn’t needed, but the tool still needs symbol information to read the memory sample. With a Linux image this becomes complex at best.
Capturing the Image
The most important step is capturing the image. If you are running a virtual machine, it might be as simple as taking a snapshot and using the memory file. However, you still need to get the right profile information; an example of this is on the Volatility GitHub pages.
If you need to dump memory from the live system, the most widely used tools are LiME (which loads a kernel module to dump physical memory) and Microsoft’s AVML (a standalone userland binary).
It is worth noting that lots of commercial forensic platforms struggle with Linux memory, so it is worth practising the manual methods to make sure you can respond in a timely fashion.
There is a very useful tool which automates a lot of the capture & profile creation steps: LMG by Hal Pomeranz, who is widely regarded as the leading expert on Linux IR.
Capturing Disk Images
If you have an EDR platform or Linux-friendly forensics tool, then capturing a disk image should be reasonably simple. If your suspect device is a virtual machine, then you can use the VMDK files (or equivalent). However, if you find yourself needing to respond manually, there are some useful tools you can use.
You can create a bit-for-bit copy of the disk for analysis in pretty much any tool. This retains deleted data, so you can recover lost files. You can use dd for this, but a better option is dc3dd, which creates a checksum at the same time. The syntax is pretty simple:
dc3dd if=/dev/sda of=/mnt/usb/diskimage.raw hash=sha512 hlog=/mnt/usb/hash.txt
This will take a copy of sda and put it on a device at /mnt/usb. However, there are other tricks you can use.
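Because dd and dc3dd treat any file as a device, the whole workflow can be rehearsed safely before an incident. A sketch using plain dd and sha512sum, since dc3dd may not be installed on a practice box:

```shell
# Create a small test "disk" - dd does not care that it is a file.
dd if=/dev/urandom of=/tmp/test-disk bs=1M count=4 2>/dev/null

# Image it bit-for-bit, then hash source and image together; the
# two digests must be identical.
dd if=/tmp/test-disk of=/tmp/test-image bs=1M 2>/dev/null
sha512sum /tmp/test-disk /tmp/test-image
```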
Another example is if you want to take a disk image and send it over the network to your evidence machine. You can use netcat on both ends for this (or cryptcat if you want to use an encrypted tunnel).
dc3dd if=/dev/sda hash=sha512 | nc 10.10.10.10 5555
This will send the data to a listener which can, in turn, simply store the data to a local file. On the evidence machine, the listener can be as simple as:
nc -l -p 5555 > diskimage.raw
Note that flag syntax varies between netcat implementations, so test yours in advance.
Timelines – Linux Variations
The general process is the same as on Windows and the analysis of inode data is very valuable. The main point here is that there is a difference in how timestamps work. Windows has four timestamps in $STANDARD_INFO and four in $FILE_NAME. Most Linux environments have three – MAC – with EXT4 introducing a Born-on time to more closely resemble Windows.
Modification Time, also referred to as mtime. This is the last time data was written to the file.
Access Time, also referred to as atime. This is the last time the file was read.
Change Time, also referred to as ctime. This is the last time the inode contents were written.
Born-on time, also referred to as btime. EXT4 file systems also record the time the file was created.
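All of these can be read for a single file with stat; on an EXT4 filesystem with reasonably recent tooling, the Birth line is populated as well:

```shell
# Create a file and display its full timestamp set.
touch /tmp/demo-file
stat /tmp/demo-file   # shows Access, Modify, Change and (on EXT4) Birth
```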
Incident responders can use this to hunt across a file system to find things the attacker may have changed. For example, if you think an attack took place in the last week, you can run:
find / -mtime -7
This will return every file with a modification date in the last 7 days.
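The window can be narrowed further. GNU find accepts -newermt for explicit date ranges, and -ctime catches inode changes (permissions, ownership) that -mtime misses. The dates and paths below are purely illustrative:

```shell
# Files under /etc modified between two specific dates:
find /etc -type f -newermt "2024-01-01" ! -newermt "2024-01-08" 2>/dev/null

# Files whose inode changed in the last 7 days (chmod/chown count):
find /etc -type f -ctime -7 2>/dev/null
```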
Attacker behaviour and profiling
To finish off, we are going to look at some of the more common files you should check as you profile a suspect system:
/etc/hosts: this shows any static IP assignments and can identify attackers trying to create routes in plain sight.
/etc/passwd: Look for unexpected accounts, especially UID 0 accounts.
/etc/shadow: Look for any unexpected modification which may indicate attackers have changed a legitimate password.
/etc/sudoers: shows users with the ability to run commands with elevated privileges.
/etc/group: check for changes to group memberships. GID 27 is traditionally the sudo group on Debian-based systems, so special attention needs to be paid here.
(user path)/.ssh/authorized_keys: Check to see if anything has been added. Attackers add keys to maintain access.
/etc/inittab: Attackers can add entries here to have code executed when init (re)spawns processes; on systemd-based distributions, check unit files for the same trick.
Directory names starting with .: This is a technique to try and hide entire directories where the attacker can store tools/data.
Regular files in /dev: The /dev folder should hold devices. If you find any regular files in there, it’s worth a closer look.
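Several of these checks are easy to script. A quick triage sketch – these are greps, not a substitute for proper analysis:

```shell
# Accounts with UID 0 - normally only root should appear:
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Hidden directories in unusual places (tune -maxdepth to taste):
find / -maxdepth 3 -type d -name ".*" 2>/dev/null

# Regular files lurking in /dev:
find /dev -type f 2>/dev/null
```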
It is also worth looking at the modification times of binaries – anything changed recently is interesting, largely because Linux patching tends to be a lot less frequent than Windows patching. When you build your timeline, you should also check whether files have timestamps that are out of place for their inode numbers, as this is often a sign of timestomping.
If your system uses a package management tool, you can verify installed files against the “official” versions – for example rpm -Va on RPM-based systems, or debsums / dpkg --verify on Debian-based ones. Any differences should be considered for investigation.
Lastly, you should always check for SUID/SGID binaries to see if anything unusual has been created.
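A find one-liner covers this; diffing the output against a baseline taken during the preparation phase makes new additions obvious:

```shell
# List all SUID/SGID files on local filesystems with full details:
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -exec ls -l {} + 2>/dev/null
```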
Linux Forensics – Summary
To summarise, your overall DFIR approach should be largely unchanged. You still need to have a plan and when you respond you still need to follow a suitable methodology. The biggest difference is that responders tend to have less direct exposure to Linux and, as a result, are less comfortable with the files and folders you need to analyse.
You should address this during the preparation phase of your IR cycle. Build response plans and checklists, train your team, and practise. It will all be useful at some point.