Linux DFIR may feel like a complicated and arcane process, but it doesn’t need to be. Yes, there are challenges around memory collection, and many modern EDR tools perform badly on Linux, but this shouldn’t stop a well-prepared incident responder. The biggest issue tends to be that IR in this environment is rare, so you are less likely to have a “go-to” mental list of commands and steps to follow.
This guide will help you solve that.
However, there are some important points to bear in mind. Most importantly, Linux is a very configurable platform so you may find that the system you are responding to is very, very different from other machines even if they run the same base OS.
This means that while you should go in with a plan, you also need to be open-minded enough to adapt things on the fly – especially if you discover your tools are producing unexpected output. If at all possible, test your tools and workflow on your Linux machines during the preparation phase, while there isn’t an incident; this will allow you to fine-tune things and maximise the chance of success.
With that out of the way, let’s look at some key parts of the process.
Linux DFIR Workflow
As with everything in IR, it is really important to have an idea of the workflow you want to follow. The more you can document this, the better the quality of your evidence. Even if you never intend to set foot in a court, a good evidence process means you can be more confident about findings and can trust that you haven’t overlooked anything important.
NOTE: This guide is based on a responder who has direct access to the live system and will be working locally. This is not a guide for dead box/image-based forensics. The activity here will change the state of the target system and will generate log entries/history records. This reinforces the need to keep detailed notes so that the investigator’s activity can be eliminated from the evidence.
Example Collection Workflow
Set up documentation. You can keep notes by hand, but Linux also includes the `script` command, which records your terminal session (each command and its output) and saves the data to a file when you exit. You can pass a file path as an argument, and the `-a` option appends to an existing log, so you can write the session record to a separate device if you want to retain off-disk evidence. You can find out more about this often-overlooked command on the script man page.
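For example, a non-interactive capture might look like this; the log path is an assumption, and in practice you would point it at mounted external media so the evidence stays off the target disk:

```shell
# Record a single command and its output into an evidence log.
# /tmp/case001_session.log is an assumed path -- in a real response,
# use a separate mounted device (e.g. a USB evidence drive).
script -a -c 'uname -a' /tmp/case001_session.log
```

Running `script /path/to/log` with no `-c` starts a full interactive session that logs everything until you type `exit`.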
Ensure you have trusted tools. Remember that on a compromised system the attackers can modify binaries, and you then have no way to trust the output. Even simple tools like `ls` can be replaced with trojaned versions. Ideally, bring your own tools, either from bootable media or as statically linked binaries. If you have to use the commands already on the operating system, bear the risk in mind and try to find multiple ways to validate the output.
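One simple cross-check, sketched below, is to hash the binaries you intend to rely on and compare the values against a clean reference machine; the file list and output path are illustrative:

```shell
# Hash a few critical binaries so they can be compared against
# known-good values taken from an uncompromised reference system.
# The paths listed here are examples -- hash whatever you plan to run.
sha256sum /bin/ls /bin/cat > /tmp/binary_hashes.txt
cat /tmp/binary_hashes.txt
```

On package-managed systems you can also verify binaries against the package database (for example `rpm -Va` on Red Hat-derived systems or `debsums` on Debian-derived ones), though the database itself can be tampered with on a compromised host, so an off-host comparison is stronger.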
Gather data. Your organisation may have specific requirements here but we would recommend something along the following lines:
- Document the system name and the start date/time of the review.
- Dump memory for analysis at a later date. On Linux this can be complex, but it is outside the scope of this post.
- Get OS details. `uname -a` and `lsb_release -a` are useful commands here.
- Confirm who is logged in. You can use `w` for this.
- Review bash history. It's worth capturing a copy of this early on so you can read each user's `.bash_history` file later; this is especially important if your own activity may be adding commands to it.
- Get the system environment details. Run `env` and save the output to a text file.
- Get networking information. This is where `ifconfig` and `arp -a` are useful (on newer systems without net-tools, `ip addr` and `ip neigh` provide the same information).
- Check network connections. You can use `netstat` or `lsof` here, or both. It is worth saving this to a text file as it can be verbose.
- Log running processes. It is worth starting with `ps aux` here.
- Log loaded modules. Start with `lsmod`, but consider using `modinfo` if more detail is needed.
- Check scheduled tasks. Look in the `crontab` files and associated folders.
- Check auditing. For example, review the `auditd` configuration and logs on CentOS and other Red Hat-derived distributions.
- Check for binaries with the SUID bit set. You can capture this with `find / -perm -4000 2>/dev/null`, and if anything unexpected appears it is worth investigating.
- Validate group memberships. You can `cat /etc/group` and `cat /etc/passwd` to ensure there are no unexpected accounts or memberships. The sudo group is often targeted by attackers.
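As a sketch of the account-validation step, the one-liner below flags any entry in `/etc/passwd` that has UID 0 but is not root – a classic persistence trick. The field layout is standard, but treat the exact check as illustrative:

```shell
# Field 3 of /etc/passwd is the numeric UID; anything other than
# root with UID 0 deserves immediate investigation.
awk -F: '$3 == 0 && $1 != "root" {print "unexpected UID 0 account: " $1}' /etc/passwd
```

A similar `awk` pass over `/etc/group` will list the members of sensitive groups such as sudo or wheel.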
Combined with the memory image, you should now have enough information to get a good understanding of what has happened on the system and to run triage as part of the Linux DFIR workflow. But to reiterate: it is important to tailor this to your environment.
If you need greater detail, you should consider taking a disk image and importing it into a forensic tool, or deploying tools from The Sleuth Kit (TSK) to do a more detailed file system analysis.
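As a sketch of the imaging step – shown here against a small test file, because imaging a real block device (e.g. `dd if=/dev/sda`) needs root and a destination with enough space, and all paths here are assumptions:

```shell
# Create a small demo "disk" to image (stand-in for a real device).
dd if=/dev/zero of=/tmp/demo_disk.bin bs=1024 count=16 2>/dev/null

# Image it, padding rather than aborting on unreadable blocks, then
# record a hash so the copy can be verified before analysis.
dd if=/tmp/demo_disk.bin of=/tmp/case001_disk.img bs=1024 conv=sync,noerror 2>/dev/null
sha256sum /tmp/case001_disk.img > /tmp/case001_disk.img.sha256
```

Always write the image and its hash to separate media, never back to the disk being imaged.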
Finally, as we said, for all forms of DFIR (including Linux DFIR) it is good practice to keep notes. As well as running `script`, it's worth making sure the output of all your tools is saved to text files with a common naming convention. For example, you might run `lsb_release -a >> lsb_release_YYYYMMDD.txt` or `lsb_release -a >> CASENUMBER_lsb_release.txt`. This will allow you to recheck output without having to rerun tools, and will help when you review the evidence at a later date.
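The collection steps above can be pulled together into a small wrapper that applies a consistent naming convention automatically. This is a minimal sketch: the case identifier and output directory are assumptions you would set per incident, and it silently skips any tool that isn't installed rather than failing mid-collection:

```shell
#!/bin/sh
# Minimal triage collection sketch. CASE and OUT are assumptions --
# set them to your case number and an (ideally off-disk) evidence path.
CASE="CASE001"
OUT="/tmp/${CASE}"
mkdir -p "$OUT"

# Run a tool if it exists, saving output as CASENUMBER_name.txt.
collect() {
    name="$1"; shift
    if command -v "$1" >/dev/null 2>&1; then
        "$@" > "${OUT}/${CASE}_${name}.txt" 2>&1
    fi
}

collect uname    uname -a
collect release  lsb_release -a
collect who      w
collect env      env
collect ps       ps aux
collect lsmod    lsmod
collect netstat  netstat -antup
collect suid     find / -xdev -perm -4000
date > "${OUT}/${CASE}_collected_at.txt"
```

Because each file carries the case number and tool name, the evidence directory can be archived as-is and cross-referenced against your `script` session log later.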