
LINUX FORENSICS

Hal Pomeranz

Copyright © Hal Pomeranz


This material is distributed under the terms of the
Creative Commons Attribution-ShareAlike 4.0 International License
http://creativecommons.org/licenses/by-sa/4.0/

Download updates from https://tinyurl.com/HalLinuxForensics

Please support continued development of this material by taking one of my training
classes or donating (US$50 is suggested) via PayPal (paypal.me/halpomeranz) or
Patreon (patreon.com/halpomeranz).

Hal Pomeranz
hrpomeranz@gmail.com
@hal_pomeranz

1
WHO IS HAL POMERANZ?
Started as a Unix Sys Admin in the 1980s
Independent consultant since 1997
Digital forensics, incident response, expert witness
Have done some interesting Linux/Unix investigations

hrpomeranz@gmail.com
@hal_pomeranz

From the "official bio":

Hal Pomeranz is an independent digital forensic investigator who has consulted on
cases ranging from intellectual property theft, to employee sabotage, to organized
cybercrime and malicious software infrastructures. He has worked with law
enforcement agencies in the United States and Europe, and with global corporations.

While perfectly at home in the Windows and Mac forensics world, Hal is a recognized
expert in the analysis of Linux and Unix systems, and has made key contributions to the field. His
EXT3 file recovery tools are used by investigators worldwide. His research on EXT4 file
system forensics provided a basis for the development of open source forensic
support for this file system. Hal has also contributed a popular tool for automating
Linux memory acquisition and analysis. But Hal is fundamentally a practitioner, and
that's what drives his research. His EXT3 file recovery tools were the direct result of
an investigation, recovering data that led to multiple indictments and successful
prosecutions.

Raised in the Open Source tradition, Hal shares his most productive tools and
techniques with the community via his GitHub and blogging activity.

2
LINUX IS EVERYWHERE
Cloud instances
Embedded devices (“IoT”)
Android
ChromeOS

Whether they realize it or not, people interact with Linux systems every day. The
Internet runs on Linux, whether it's core DNS services, popular web sites, or file and
video sharing. The embedded devices in their homes—DVRs, network equipment,
smart appliances—are often running Linux. And of course Android devices make
Linux the dominant OS platform in terms of installed devices by a huge margin.

Because the owners of the Linux devices may not fully understand the operating
system and how to secure it, many of these devices are easily compromised. As this
equipment becomes more powerful and more connected, it presents an opportunity
for attackers. We have already seen powerful botnets like Mirai, and extensive
cryptocurrency mining operations running on compromised Linux systems.
Ransomware is increasingly targeting Linux infrastructures.

The goal of this course is to provide an introduction to Linux system forensics, with a
primary focus on Linux servers. We will cover memory and file system analysis, and
key Linux artifacts that are useful in many sorts of investigations. The course uses
Open Source forensic tools, but the investigative techniques are applicable to any
forensic tool chain.

3
WHAT’S DIFFERENT ABOUT LINUX?
No registry
Have to gather system info from scattered sources

Different file system


Important metadata zeroed when files deleted
Access time updates are intermittent
Older file systems lack file creation dates

Files/data are mostly plain text


Good for string searching & interpreting data

While Windows tends to concentrate configuration information in the registry, things
are much less centralized in Linux. Every application and Linux subsystem tends to
have separate configuration files and installation directories. So part of Linux
forensics is knowing where the most important artifacts are.

Linux has its own file systems. EXT4 is most common, but Red Hat is now using XFS as
its default file system. ZFS is another option with a decent installed base. While the
EXT family of file systems tends to have decent forensic tool support, support for XFS
and ZFS is much less available. Linux file systems have different timestamp rules from
Windows NTFS, and recovery of deleted data is more challenging because Linux file
systems zero out file metadata upon deletion.

On the plus side, most of the data in Linux is plain ASCII text. So searching for and
correlating data tends to be easier than in other OSes.

4
MEMORY FORENSICS

Memory analysis is a powerful forensic technique. But there are some unique
complications when it comes to doing memory forensics on Linux.

5
WHY MEMORY FORENSICS?
Size matters – faster acquisition/analysis, less storage
See more –
Cached file information
Volatile process, network data
Rootkit indicators
Encryption keys

Memory analysis is a key forensic technique for all types of investigations. Disks
continue to get larger and larger, making traditional "dead box" analysis less and less
practical. It is much easier to collect, analyze, and store 32GB of RAM compared to 2
Terabytes of disk.

Key artifacts for your investigation can be found in memory. Process executables,
shared libraries, program data are all stored in a structured fashion in memory.
Network connections are tracked. Even encryption keys are available. Aggressive
caching and "memory mapping" of files allows the investigator to find many file
system artifacts.

Rootkit hiding techniques, process injection and hollowing, and other types of
malicious activity become obvious with the right memory analysis tools.

6
QUICK OVERVIEW
Acquisition tools:
AVML – Free, file output only
LiME – Free, kernel driver, output to file or network
F-Response – Costs money, agent for disk/memory access

Analysis tools:
Volatility – Free

Historically, accessing system RAM on Linux has required loading a kernel driver. This
is the approach taken by LiME, which is a loadable kernel module that can output
memory to a file on disk or over a network connection. F-Response is deployed as-
needed and runs as an agent that can be accessed over the network. But it uses its
own kernel driver to access RAM.

AVML leverages the /proc/kcore interface that exists on many Linux
distributions. This is convenient because it reduces the amount of administrative
hassle and modification of the current memory image caused by loading a kernel
driver. However, if /proc/kcore is unavailable, you will have to revert to using
LiME or some other driver-based solution. Also, AVML writes its memory image into
the local file system, so unless you write to a network share or removable media you
are potentially overwriting evidence. We will explore work-arounds later in this
course.

LinPmem is another alternative to AVML, which actually pioneered the
/proc/kcore technique that AVML uses. However, LinPmem has libc and other
dependency issues that make it less easy to deploy. Also it prefers AFF4 format and
requires extra command line flags to make a raw memory dump suitable for analysis
with Volatility. AVML writes in Volatility’s preferred format (LiME) by default.

7
Linux analysis capability was added in Volatility 2.2 (Oct 2012) and has continued to
improve.

AVML - https://github.com/microsoft/avml
LiME - https://github.com/504ensicsLabs/LiME
F-Response - https://www.f-response.com/
LinPmem - https://github.com/Velocidex/c-aff4/releases

Volatility - https://github.com/volatilityfoundation/volatility

8
TOO MANY KERNELS!

May need to load kernel version specific driver

Volatility needs kernel version/distro specific "profile"

If you need to load a driver to access RAM, that driver has to match the specific
kernel version it is being loaded into. Usually this means building the driver on a Linux
system with a kernel identical to the system you will be investigating. This is one
reason why AVML is preferred for memory capture– it can use the existing
/proc/kcore device and avoid having to load a new kernel driver.

Volatility needs a kernel-specific profile in order to properly parse and analyze the
captured memory. While the Volatility project is collecting profiles for standard Linux
operating systems, if the target system does not already have a precompiled profile,
you may need to build one manually on an identical system to the target machine.
And note that the same Linux kernel version number across two different Linux
distributions– say Red Hat vs Ubuntu– is a completely different kernel, often requiring
a different Volatility profile.

Unlike Windows or MacOS, Linux kernel versions vary widely across even a single
organization. It may be the system that you are investigating is the only system with
that particular kernel version. This means you may have to build your kernel module
and/or Volatility profile on the system you are investigating, with all of the obvious
forensic consequences of that activity.

Collected Volatility profiles for various Linux distros and versions are found at:
https://github.com/volatilityfoundation/profiles/tree/master/Linux
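
Whether you download a precompiled profile or build your own, the first step is
pinning down exactly which kernel and distribution you are dealing with. These are
standard Linux commands run on the target (or check the corresponding files in a
mounted image):

$ uname -r            # running kernel version, e.g. 3.10.0-862.3.2.el7.x86_64
$ cat /proc/version   # kernel version plus compiler and build information
$ cat /etc/*-release  # distribution name and release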

9
WORST CASE SCENARIO
Obtain driver source code
Build driver for target system (Where?)
Obtain administrative access on target system
Determine RAM capture destination:
Portable device: attach and mount OR
Network: configure remote destination
Load driver
Initiate capture

Assume you have a Linux distribution that doesn't support /proc/kcore so AVML
is unable to easily access RAM. That means you're going to have to compile and
install a driver like LiME.

So first you get the driver source code. You need to compile the driver on a system
with the same kernel as the target machine. If the target machine is the only available
system with that kernel version, then you are going to be compiling the driver on the
target. But does the target system have the right kernel build environment to compile
the driver? If not, do you try and build a look-alike machine with the right kernel
version?

Assuming you can get the driver built, you need root (administrator) privileges to
install the driver and capture RAM. And the organization that owns the system has to
be willing to let you install the driver on their (possibly critical production) system.

Plus you need someplace to write the resulting memory image. You must not write it
into the local file system of the target system, because you're possibly overwriting
evidence in unallocated clusters of the file system. So you need a USB device or other
system to receive the memory dump over the network.

"It's complicated."

10
MANUAL PROFILE CREATION

Dependencies: Volatility, dwarfdump, appropriate kernel build environment…

Dump locations of kernel data structures


Obtain symbol table for target kernel
Create profile archive (ZIP file)
Determine appropriate profile name/location

Creating a Volatility profile manually is even more complicated– compiling a program
(again on a system similar or identical to the target system), dumping symbols with
dwarfdump, gathering files from the target system, and bundling everything up into
a ZIP file. And then there's a little bit of command-line magic to get Volatility to find
and use the resulting profile file.
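
A minimal sketch of the procedure, assuming a stock Volatility 2.x source checkout
and that you are working on a system with the same kernel as the target:

$ cd volatility/tools/linux
$ make      # uses dwarfdump to generate module.dwarf for the running kernel
$ zip CentOS.zip module.dwarf /boot/System.map-$(uname -r)

The resulting ZIP is the profile. Drop it in a directory you will reference with
--plugins, and Volatility will know it as Linux<name><arch> (more on naming shortly).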

11
LEVERAGE NEEDED!

"Smart people could handle these steps!"

Smart people should be doing analysis

Smart people may not be available

Smart, technically-oriented people can learn to capture Linux RAM and build profiles
for Volatility. But maybe these are the people who should be doing analysis, not
imaging systems. The Helpdesk or Law Enforcement personnel doing the imaging on-
site may not have the expertise to follow a complicated procedure.

In order to scale our collection efforts, we need a simple, repeatable process.

12
LEVERAGE

Contains 3rd-party dependencies:


AVML
LiME kernel module source
dwarfdump
Volatility

Hal's "lmg" script:


Runs AVML or builds LiME
Captures RAM to USB device
Creates Volatility profile

There is a memory collection tool for Windows called DumpIt, developed by Matthieu
Suiche. Put DumpIt on a USB drive, plug the drive into a target system, and DumpIt
collects RAM to the device it was run from. I wanted the Linux version, and the Linux
Memory Grabber (LMG) was the result.

LMG is just a shell script that automates the process of:


1. Using AVML if /proc/kcore (or /dev/crash) exists
2. Building LiME and installing the kernel module if necessary
3. Dumping RAM
4. Creating a Volatility profile
When installed with the appropriate dependencies on a USB drive, it's an easy-to-deploy
way for investigators to grab RAM.

LMG is available from https://github.com/halpomeranz/lmg

13
ISSUES OF PURITY
Attaching writable media to target
Development environment required on target
Executing programs from target OS
Creates memory artifacts of its own

The LMG README file goes into more detail, but if you are a stickler for forensic
purity, this may not be the tool for you:

• You are plugging writable media into an infected system. Not only does this change
the state of the system, the contents of the media can be manipulated by malware
running on the target.

• LMG potentially compiles LiME and other programs required to build a Volatility
profile on the target system. This obviously changes the state of the memory on
the target. It also requires that the target system have a working kernel build
environment.

• LMG uses programs from the target system's OS. These programs could be
malicious and cause problems for LMG.

These are issues you need to consider regardless of your memory capture solution.

14
LAB – MEMORY ACQUISITION
Do it the hard way first– suffering is good for the soul!

Time to get some practical experience collecting Linux memory and building Volatility
profiles. We'll use both AVML and LiME by hand, and then let you try LMG.

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

15
VOLATILITY

Now we're going to look at some of the more useful Volatility plugins for Linux
analysis.

16
VOLATILITY BASICS
Profile location
Profile name

$ vol.py --plugins=. --profile=LinuxCentOSx64 \
      -f memory.raw linux_banner
Linux version 3.10.0-862.3.2.el7.x86_64 (builder…

Memory image file Volatility plugin choice

Never forget vol.py --help and vol.py --info

Volatility needs to be able to find your profile. If you've downloaded one of the pre-
built profiles or manually created your own, chances are it's in some non-standard
directory. Use the --plugins option to specify the directory path. Here we're
using "." which means the directory we are currently in.

The name of the profile is also non-obvious. Suppose you create a profile as a file
named CentOS.zip. On the command-line, Volatility refers to this profile as
LinuxCentOSx64. Basically, you take the file name without the .zip extension,
put Linux on the front and the processor architecture (x64 or x86) on the back
end.

You can actually see this in the output of vol.py --info:

[sans@LAB memory]$ ls
CentOS.zip memory.raw
[sans@LAB memory]$ vol.py --plugins=. --info | grep Linux
LinuxCentOSx64 - A Profile for Linux CentOS x64
LinuxAMD64PagedMemory - Linux-specific AMD 64-bit addr…
linux_aslr_shift - Automatically detect ASLR shift
[… snip …]

You can see the profile name in the first line of output, which I have highlighted.

17
After the profile you specify the memory image file with -f.

Then you pick the plugin to run– vol.py --info gives a list of plugins. All of the
Linux plugins start with linux_, so you can filter on that pattern:

[sans@LAB memory]$ vol.py --info | grep linux_


linux_apihooks - Checks for userland apihooks
linux_arp - Print the ARP table
linux_aslr_shift - Automatically detect Linux ASLR shift
linux_banner - Prints the Linux banner information
linux_bash - Recover bash history from process mem…
[… snip …]

vol.py --help gives a summary of the command-line syntax for the tool. In
some cases there is also plugin-specific help available– for example vol.py
linux_bash --help.

The Volatility Project publishes a reference page for Linux plugins:


https://github.com/volatilityfoundation/volatility/wiki/Linux-Command-Reference

18
HATE TYPING?

$ export VOLATILITY_PLUGINS=.
$ export VOLATILITY_PROFILE=LinuxCentOSx64
$ export VOLATILITY_LOCATION=file://memory.raw
$ vol.py linux_cpuinfo
Processor Vendor Model
------------ ---------------- -----
0 GenuineIntel Intel(R) i7-4650U…

unset VOLATILITY_<blah> when you are finished

Typing all of those command-line options gets tiring. Volatility will look at
environment variables for default values. Once you set the appropriate defaults in
your environment, your command line becomes much simpler.

When you are done working with a particular memory image, you can clear
environment variables with unset.

The environment variable settings are just defaults and can always be overridden on
the command line. For example vol.py -f memory.lime linux_banner
would use the default plugin directory and profile but operate on the file
memory.lime instead of the default we see set on the slide.

19
TRACK BOOT/HARDWARE INFO
$ vol.py linux_dmesg

[0.0] Linux version 3.10.0-862.3.2.el7.x86_64 …
[0.0] Command line: BOOT_IMAGE=/vmlinuz-3.10.0-… root=…

[163494710.0] RTC time: 20:11:31, date: 07/10/18

[2170532118.2] usb 2-1: Product: VMware Virtual USB Mouse

[4903934091145.4903] lime: loading out-of-tree module…
[4903934159339.4903] lime: module verification failed:…

You may have used the dmesg command in Linux to look at kernel messages.
Volatility can dump the contents of this message buffer with linux_dmesg.
Unfortunately the plugin is not as functional as the command-line tool– you can't
output messages by priority or produce human-readable timestamps.

Nevertheless the output is still useful. You can see the boot image and full boot
command that launched the system. You should be able to find a human-readable
time for when the system booted. You will see USB devices being inserted and
(possibly malicious) kernel modules being added.
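
A quick way to triage the buffer is to dump it once and grep for a few high-value
strings; the patterns below are only examples (boot command line, RTC/boot time,
USB insertions, tainting modules):

$ vol.py linux_dmesg > dmesg.txt
$ grep -i -e 'command line' -e 'rtc time' -e 'usb' -e 'taint' dmesg.txt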

20
PROCESS INFO

linux_psaux – full command lines

linux_pslist – PPID, start time, image offsets

linux_psenv – environment variables

linux_pstree – process hierarchies

There are multiple volatility plugins for looking at process information:

• linux_psaux gives process ID and process user ID along with the full
command-line.

• linux_pslist gives abbreviated command info but includes the process start
time and parent process ID. This plugin also gives memory offset information for
the process data, which is used by other Volatility plugins.

• linux_psenv shows environment variable settings associated with each
process.

• linux_pstree shows a hierarchical view of processes. While this isn't as
useful on Linux as it is on Windows, it can help you determine if the process was
started by a user logged in over the network or on the system console.
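
One convenient pattern is to dump these plugins to text files once and grep from
there. The search strings below are arbitrary examples of things that often merit a
second look (payloads staged in /tmp or /dev/shm are common):

$ vol.py linux_pslist > pslist.txt
$ vol.py linux_psaux > psaux.txt
$ grep -e '/tmp/' -e '/dev/shm' psaux.txt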

21
LSOF FTW!
$ vol.py linux_psaux | grep avml
46838 0 0 avml centos-memory.lime
$ vol.py linux_lsof -p 46838
Offset Name Pid FD Path
------------------ ----------------------- -------- ------ ----
0xffff8d393b2d17c0 avml 46838 0 /dev/pts
0xffff8d393b2d17c0 avml 46838 1 /dev/pts
0xffff8d393b2d17c0 avml 46838 2 /dev/pts
0xffff8d393b2d17c0 avml 46838 3 /proc/kcore
0xffff8d393b2d17c0 avml 46838 4 /home/lab/…

lsof is one of my favorite Linux command-line tools. The command name stands for
"LiSt Open Files" and it shows, per-process, every open file. But since network
sockets and devices are treated like files in Linux, you also get this information as
well.

The Volatility plugin is not as functional as the command-line tool, but it's still useful.
Here I'm using linux_psaux to get the PID of the AVML process I used to dump
the memory image. Then I use linux_lsof and specify the PID with -p (if you
leave off -p you get the output for all processes).

File descriptor 0 is the standard input, and 1 and 2 are the standard output and
standard error respectively. For interactive commands, these are usually associated
with the PTY device assigned to the user's terminal, as we see here. Other file
descriptors are associated with files, sockets, and devices the program is reading and
writing from. With the Volatility plugin we can see the file names, but not whether
the file is being used for reading or writing.

22
BASIC NETWORK INFO
$ vol.py linux_netstat | grep -v UNIX
UDP 0.0.0.0:68 0.0.0.0:0 dhclient/25981
UDP 0.0.0.0:67 0.0.0.0:0 dnsmasq/1802
UDP 192.168.122.1:53 0.0.0.0:0 dnsmasq/1802
TCP 192.168.122.1:53 0.0.0.0:0 LISTEN dnsmasq/1802
TCP 127.0.0.1:631 0.0.0.0:0 LISTEN cupsd/2374
TCP 0.0.0.0:22 0.0.0.0:0 LISTEN sshd/1130
TCP 192.168.46.149:22 192.168.46.1:49907 ESTABLISHED sshd/22334
TCP 192.168.46.149:22 192.168.46.1:49907 ESTABLISHED sshd/22338
TCP 192.168.46.150:22 192.168.46.1:52591 ESTABLISHED sshd/26023
TCP 192.168.46.150:22 192.168.46.1:52591 ESTABLISHED sshd/26027

linux_lsof will show that a process has a network socket open, but not where
that socket is connected. Fortunately, Volatility also has a linux_netstat plugin
to associate sockets with processes. linux_netstat includes IP address and port
information as we see above.

linux_netstat also outputs information about Unix domain sockets, which are
used for interprocess communication within the system. If you only want to see
network sockets, use grep -v as shown on the slide to suppress the information
for the Unix domain sockets.

23
INTERFACES AND ARP CACHE
$ vol.py linux_ifconfig
Interface IP Address MAC Address Promiscous Mode
---------------- ----------------- ------------------ ---------------
lo 127.0.0.1 00:00:00:00:00:00 False
ens33 192.168.46.145 00:0c:29:a0:98:fd False
virbr0 192.168.122.1 52:54:00:68:d8:5b False
$ vol.py linux_arp
[127.0.0.1 ] at 00:00:00:00:00:00 on lo Not reliable!
[192.168.46.254 ] at 00:50:56:ed:d4:8f on ens33
[192.168.46.1 ] at 00:50:56:c0:00:01 on ens33

linux_ifconfig provides IP and MAC address information for the interfaces on
the system. There is a column indicating whether the interface is in Promiscuous
Mode (packet sniffing). However, this column is not reliable and will show False even
when the interface is in Promiscuous Mode and actively capturing packets.

linux_arp dumps the system ARP cache. This can be interesting when ARP
spoofing is being used to intercept traffic and act as a man-in-the-middle.

24
COMMAND HISTORY
$ gdb /bin/bash
(gdb) disassemble history_list
Dump of assembler code for function history_list:
0x00000000004a2490 <+0>: mov 0x248f81(%rip),%rax # 0x6eb418
0x00000000004a2497 <+7>: retq
End of assembler dump.
(gdb) quit
$ vol.py linux_bash -p 14257 -H 0x6eb418
Pid Name Command Time Command
-------- ----------- ------------------------------ -------

14257 bash 2020-01-30 20:43:09 UTC+0000 mkdir ~lab/memory
14257 bash 2020-01-30 20:44:24 UTC+0000 cd ~lab/memory
14257 bash 2020-01-30 20:44:56 UTC+0000 avml centos-mem…

The linux_bash plugin lets you extract command history from active shell
processes in the memory image.

When a bash shell is started, it reads the saved history from the bash_history
file in the user's home directory. You can see these commands in the first part of the
output– there may be 500 lines (the default bash_history length) or more of
output all with the same timestamp. This is the time that the shell was started and
read the bash_history. The commands that come after, with differing
timestamps, are the commands that were typed in this shell session.

linux_bash has a heuristic for finding the bash history entries in the bash process
memory. But it's much more reliable to give the plugin the offset to the history data
structure. The slide shows how you can use gdb to determine this value and pass it
to the linux_bash plugin with the -H option. However, you need to have a copy
of the bash executable from the target system (which is why LMG grabs a copy of this
binary along with the memory image and profile).

We will discuss bash_history forensics (and anti-forensics) in more detail later
in this course.

25
WHY IS THIS BETTER?
Command history only written to disk when shell exits
In-memory history has all commands for session

bash_history commands are not normally timestamped


Full timestamp information visible with linux_bash

linux_bash output is more valuable than looking at the bash_history on
disk because:

1. Command history is only saved to disk when the shell exits. So the command
history in memory contains commands for the current session that have not yet
been written to the bash_history file on disk.

2. linux_bash shows the timestamp for each command. By default
bash_history does not contain timestamps.

26
CACHED FILE SYSTEM INFO

linux_mount – mounted file systems and mount options

linux_enumerate_files – inode/file name mappings

linux_find_file – list files, possibly recover contents

linux_recover_filesystem – recover cached filesystems

Linux aggressively caches file information. There are Volatility modules for accessing
this information, but they have been a little brittle and unstable as new Linux kernel
versions come out.

linux_mount is very stable and useful for seeing how disk devices are mounted in
the OS.

linux_enumerate_files and linux_find_file can both give lists of file
paths and their associated inode numbers. In some cases, linux_find_file can
extract the cached file content from memory.
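
Extraction with linux_find_file is a two-step process: locate the file's inode, then
dump it by inode. This is a hedged sketch– check vol.py linux_find_file --help for
the exact option names in your Volatility version, and note that the inode address
below is made up for illustration:

$ vol.py linux_find_file -F "/etc/passwd"
$ vol.py linux_find_file -i 0xffff8d3937d0e800 -O passwd.extracted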

linux_recover_filesystem is designed to recover entire in-memory file
systems. Personally, I haven't had good luck with this plugin.

27
LAB - VOLATILITY
It's more fun if you do it yourself!

Experiment with Volatility plugins yourself. There are a lot of interesting artifacts to
look at!

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

28
FINDING EVIL

Volatility has always had a focus on finding malicious software and rootkits in
memory. The Linux plugins are no different.

29
LINUX ROOTKITS
Modern Linux rootkits are Loadable Kernel Modules (LKM)
Volatility can help locate them!

Modern Linux rootkits are generally installed as loadable kernel modules– LKM
rootkit is the popular term for this.

For some examples of using Volatility to discover Linux rootkits and malware, see:

http://volatility-labs.blogspot.com/2012/09/movp-15-kbeast-rootkit-detecting-hidden.html
http://volatility-labs.blogspot.com/2012/09/movp-24-analyzing-jynx-rootkit-and.html
http://volatility-labs.blogspot.com/2012/09/movp-25-investigating-in-memory-network.html
http://volatility-labs.blogspot.com/2012/09/movp-35-analyzing-2008-dfrws-challenge.html
http://volatility-labs.blogspot.com/2012/10/phalanx-2-revealed-using-volatility-to.html

30
SPOT THAT MALICIOUS MODULE
$ vol.py linux_check_modules
Module Address Core Address Init Addr Module Name
------------------ ------------------ --------- -------------------
0xffffffffa0747000 0xffffffffa0745000 0x0 diamorphine
$ vol.py linux_hidden_modules
Offset (V) Name
------------------ ----
0xffffffffa0747000 diamorphine
$ vol.py linux_moddump -b 0xffffffffa0747000 -D .
ERROR: volatility.debug: ... Unable to properly re-create ELF file.

LKM rootkits often attempt to hide their modules. Volatility has two plugins–
linux_check_modules and linux_hidden_modules– which take
different approaches to finding hidden modules. In both cases they detect the
Diamorphine LKM that I added to the memory image (for more information on
Diamorphine see https://github.com/m0nad/Diamorphine).

Using the module offset address, we should be able to extract the module from the
memory image with linux_moddump. It seems to be unhappy about this module,
however.

Note that sometimes rootkits will not hide their LKM, but instead try to camouflage
themselves using an innocuous name. linux_lsmod can be used to dump all
(non-hidden) module names. You might compare the output of linux_lsmod
from a "known good" system with the module list from the suspect machine to try
and locate the evil module.
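
Something as simple as the following works for that comparison. The known-good
list is assumed to come from a trusted system with the same distribution and kernel,
and the awk field numbers assume the module name is the first column– adjust them
to match the actual output:

$ vol.py linux_lsmod | awk '{print $1}' | sort > suspect-modules.txt
$ lsmod | awk 'NR>1 {print $1}' | sort > known-good-modules.txt   # on the trusted host
$ diff suspect-modules.txt known-good-modules.txt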

31
LOOK FOR HOOKS

$ vol.py linux_check_idt | grep HOOKED


$ vol.py linux_check_syscall | grep HOOKED
64bit 62 0xffffffffa0745540 HOOKED: diamorphine/hacked_kill
64bit 78 0xffffffffa0745080 HOOKED: diamorphine/hacked_getdents
64bit 217 0xffffffffa0745230 HOOKED: diamorphine/hacked_getdents64

Once the malicious kernel module is loaded, it needs to intercept legitimate system
calls– a process typically referred to as hooking. There are different types of kernel
hooks and different Volatility modules for detecting them, but IDT and syscall hooks
are typical.

Both linux_check_idt and linux_check_syscall output HOOKED when
they detect a hook. So the quickest way to find evil is to filter for this keyword in the
output of each plugin.

Understanding which functions are hooked can help you understand the rootkit's
functionality. The getdents interface is used for getting directory information.
Hooking this function allows the rootkit to hide files and directories. Since Linux
makes process information available through the /proc file system, this hook can
also be used to hide processes.

Diamorphine hooks kill because the kill command is used as the administrative
interface for the rootkit. Sending different numeric signals to processes via the kill
command can hide/unhide processes and elevate a process' privilege level.

32
GOT IOCS?
$ vol.py linux_yarascan -Y 'diamorphine' -s 64
Task: systemd-journal pid 478 rule r1 addr 0x7f6ac909c778
0x7f6ac909c778 64 69 61 6d 6f 72 70 68 69 6e 65 3a 20 6c 6f 61 diamorphine:.loa
0x7f6ac909c788 64 69 6e 67 20 6f 75 74 2d 6f 66 2d 74 72 65 65 ding.out-of-tree
0x7f6ac909c798 20 6d 6f 64 75 6c 65 20 74 61 69 6e 74 73 20 6b .module.taints.k
0x7f6ac909c7a8 65 72 6e 65 6c 2e 00 00 03 00 00 00 00 00 00 00 ernel...........
Task: systemd-journal pid 478 rule r1 addr 0x7f6ac909c968
0x7f6ac909c968 64 69 61 6d 6f 72 70 68 69 6e 65 3a 20 6d 6f 64 diamorphine:.mod
0x7f6ac909c978 75 6c 65 20 76 65 72 69 66 69 63 61 74 69 6f 6e ule.verification
0x7f6ac909c988 20 66 61 69 6c 65 64 3a 20 73 69 67 6e 61 74 75 .failed:.signatu
0x7f6ac909c998 72 65 20 61 6e 64 2f 6f 72 20 72 65 71 75 69 72 re.and/or.requir
Task: systemd-journal pid 478 rule r1 addr 0x7f6ac909d732 …

Volatility has integrated Yara into a plugin. This allows you to integrate existing
Indicators of Compromise (IoCs) written as Yara signatures into your memory
analysis.

The slide shows a very simple use of this functionality. I am searching the memory
image for the pattern "diamorphine" and asking for 64 bytes of context (-s 64)
around each hit. This is better than simply string searching because the
linux_yarascan plugin associates each hit with a specific process.

If you have a directory of Yara signatures, use the -y option to tell
linux_yarascan where to find them.
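
If your IoCs live in Yara rule files, point -y at them. The rule below is a trivial
illustration (it simply matches the module name), not a real Diamorphine signature:

$ cat diamorphine.yar
rule diamorphine_string
{
    strings:
        $name = "diamorphine"
    condition:
        $name
}
$ vol.py linux_yarascan -y diamorphine.yar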

33
OTHER MODULES

linux_check_afinfo – manipulating network structs to hide

linux_check_fop – look for files opened by hidden modules

linux_check_tty – look for a particular keylogging method

linux_keyboard_notifiers – another keylogging method

linux_check_creds – looks for process credential stealing

Volatility includes several other plugins that can be used to detect various rootkit
behaviors. For more details see the Volatility command reference at:

https://github.com/volatilityfoundation/volatility/wiki/Linux-Command-Reference

In some cases, the command reference contains links to blog posts and other
external documents showing how to integrate these plugins into your investigative
workflow.

34
LAB – ROOTKIT!
Always be pivoting!

How might you investigate a system if you suspect a LKM style rootkit? We'll follow a
chain of artifacts to shed some light on the situation.

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

35
DISK ACQUISITION & ACCESS

Because memory capture and analysis can be difficult on Linux, it’s good to have a
solid foundation in disk-based analysis.

Several of the disk images used in this class were created by Ali Hadi (@binaryz0ne)
and his team at Champlain College for a workshop at OSDFCon19. They were gracious
enough to allow me to use them in this course as well. The images and their
OSDFCon presentation are on the course USB and also available from
https://github.com/ashemery/LinuxForensics

36
DISK ACQUISITION SCENARIOS
Public Cloud
Follow vendor procedures

Private Cloud
Snapshot and copy (qemu-img to translate)

Local Device
ewfacquire
dc3dd

The best advice I can give about disk acquisition is to stay flexible and give yourself as
many options as possible. No two cases are going to be alike and no solution is going
to work for every case.

In the public cloud, each provider generally publishes guidelines for how to get a disk
image of your instance. For example, here is some guidance from Amazon:
https://aws.amazon.com/mp/scenarios/security/forensics/

If you are running your own hypervisor, the easiest solution is to snapshot the guest
you wish to forensicate. This should also get you a memory dump to analyze, in
addition to the disk. However, in order to analyze the disk image from the snapshot
with your forensic software you may have to convert it into a raw disk image. The
qemu-img program is an excellent tool for converting various virtual disk formats
to raw.
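
For example, converting a copied snapshot disk to raw might look like this (the file
names are placeholders):

$ qemu-img info webserver-snap.qcow2      # confirm source format and virtual size
$ qemu-img convert -O raw webserver-snap.qcow2 webserver.raw

qemu-img autodetects qcow2, VMDK, VHD/VHDX, and several other common virtual
disk formats.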

If you are trying to image a physical device, free capture tools include ewfacquire
(writes compressed E01s) and dc3dd (raw images). Access Data also makes a free
command-line tool available for acquisition on Linux systems
(https://accessdata.com/product-download).

37
DIFFICULT DISK GEOMETRIES
Linux wants raw, not E01/AFF/split raw/VMDK…

Layers of configuration confuse commercial forensic tools


Encrypted volumes
Software RAID
Logical volume management

File systems often “dirty” (unreplayed journal)

If you are analyzing disk images with the Linux and Open Source forensic toolchain,
then the images generally need to be in raw form. Common forensic formats such as
E01, AFF, and even split raw are not directly usable by many Linux commands.
Conversion utilities like libewf (https://github.com/libyal/libewf) and afflib
(https://github.com/sshock/AFFLIBv3 -- supports AFF and split raw) can help, as we
will see in a moment.

Once you have a raw disk image, however, the fun is only just beginning. Linux file
systems are often encapsulated within additional layers of complexity, including
Linux’s built-in disk encryption system (dm-crypt and LUKS) and software RAID
capabilities. Linux Logical Volume Management (LVM) is a very common “soft
partitioning” scheme that allows filesystems to be resized on the fly.

Forensic disk imaging rarely involves gracefully shutting down the system to be
acquired (shutdowns change the state of the machine). This often results in file
systems that are “dirty” (with an unreplayed journal)—meaning they have consistency issues that
must be resolved when the file system is mounted. Mounting such file systems in
read-only mode for forensics can be challenging, but there are work-arounds.

38
LAYERS OF COMPLICATION

[Diagram] E01 image files --(assemble)--> raw image (physical disk), which contains an
unencrypted /boot partition plus a software RAID set or AES dm-crypt/LUKS volume
--(activate)--> LVM2 volume --(map logical devices)--> unencrypted disk volumes:
/ (swap) /usr /var /home

Consider a typical scenario:

1. You are given E01 files that somehow need to become a raw disk image that you
can analyze. We will use libewf for this.

2. The raw disk image contains a small unencrypted /boot file system, but the
majority of the disk is an encrypted volume or part of a multi-disk software RAID
set that you need to get through (using Linux command-line tools or specialized
forensic software). Or it’s possible that none of this is in play—proceed to the
next layer.

3. The next layer is typically multiple volumes being managed via Linux LVM
(although again this is optional). Linux command-line tools can help here.

4. Each volume in the LVM configuration is typically a mountable Linux file system.
Or it could be a raw Linux swap partition.

Complicated, right? Let’s walk through it.

39
DEALING WITH E01
# ls
case1-webserver_meta.sqlite Webserver.E01 Webserver.E01.txt
case1-webserver_meta.xml Webserver.E01.csv
# mkdir –p /mnt/test/img
# ewfmount Webserver.E01 /mnt/test/img
ewfmount 20140608

# ls -lh /mnt/test/img
total 0
-r--r--r-- 1 root root 32G Feb 16 18:21 ewf1

The first step is to get your E01 image into something that looks like a raw file system.
libewf includes a virtual file system driver (via the Linux File System in User Space or
“FUSE” subsystem) that can create what appears to be a raw disk image from a
collection of E01s.

First change directories to where your E01 file(s) are located. You will need a
directory to mount your virtual file onto– here I’m making a target directory called
/mnt/test/img. Give the ewfmount command the name of the first E01 file in
your collection (it automatically finds any additional segments) and the path to your
target directory.

After the ewfmount command runs, the target directory should appear to contain a
raw disk image file which is the same size as the original disk. The file name is always
“ewf1” and it is strictly read-only.

What is actually happening here is that the ewfmount command is running in the
background, pulling data out of the E01 files as you read from the virtual “ewf1” file.
Yes, there is some overhead for doing things this way, and that will affect the speed at
which data can be read. But it’s easier than manually converting all your E01s to raw
disk images and wasting all the disk space required to hold the raw format.

40
WHAT’S IN THE IMAGE?
# mmls /mnt/test/img/ewf1
DOS Partition Table
Offset Sector: 0 Probably /boot
Units are in 512-byte sectors

Slot Start End Length Description


00: Meta 0000000000 0000000000 0000000001 Primary Table (#0)
01: ----- 0000000000 0000002047 0000002048 Unallocated
02: 00:00 0000002048 0000499711 0000497664 Linux (0x83)
03: ----- 0000499712 0000501759 0000002048 Unallocated
04: Meta 0000501758 0066064383 0065562626 DOS Extended (0x05)
05: Meta 0000501758 0000501758 0000000001 Extended Table (#1)
06: 01:00 0000501760 0066064383 0065562624 Linux LVM (0x8e)
07: ----- 0066064384 0066064607 0000000224 Unallocated

Now that ewfmount has given us a raw disk image, let’s see what’s inside!

Here I am using mmls from the Sleuthkit (sleuthkit.org) to dump the partition table.
Although this image uses an old DOS-style partition table, mmls can also decode GPT
and a variety of other formats automatically.

mmls shows a small Linux file system at the front of the disk and a larger Linux LVM
partition in a DOS-style extended partition (there are no signs of full disk encryption
or software RAID—hooray!). This is very typical for Linux. The small file system is
/boot, which contains everything necessary to bootstrap the OS kernel and get
things running. Once the OS is up and running, Linux automatically decodes the LVM
configuration. Unfortunately, we’re going to have to do that step manually.

41
MORE DETAIL
# fsstat -o 2048 /mnt/test/img/ewf1
FILE SYSTEM INFORMATION
--------------------------------------------
File System Type: Ext2
Volume Name:
Volume ID: 1e860db5dd43e2934d499ba1013b8832

Last Written at: 2019-10-05 05:41:51 (EDT)
Last Checked at: 2016-04-03 12:05:47 (EDT)
Last Mounted at: 2019-10-05 05:41:51 (EDT)
Unmounted Improperly
Last mounted on: /boot

We can get more detail on the small Linux file system at the front of the disk using
the Sleuthkit’s fsstat tool. Like all Sleuthkit commands, fsstat accepts the “-o”
flag to specify a sector offset in the disk image where the file system begins. The
sector offset is the “Start” column data in the mmls output on the previous slide.

fsstat tells us the type of file system we are dealing with– EXT2 in this case. We
also can see where the file system was last mounted. As we suspected, this is /boot.
The fsstat output also shows things like the last mounted time and whether the file
system is clean or dirty. “Unmounted Improperly” means the file system is
dirty.

42
SETTING UP A LOOPBACK DEVICE
-r is “read only” Need byte offset
-f is first available (sector data from mmls)

# losetup -rf -o $((501760*512)) /mnt/test/img/ewf1


# losetup -a
/dev/loop0: [0020]:2 (/mnt/test/img/ewf1), offset 256901120
# file -s /dev/loop0
/dev/loop0: LVM2 PV (Linux Logical Volume Manager), UUID: SA3YAl-
91Rk-W5FA-cQGz-TnXl-J4yN-awbQjd, size: 33568063488

Now we have to deal with the Linux LVM configuration. The Linux command-line tools
for this want to operate on a disk device, not a disk image file. We can fake them out
by using a virtual “loopback” device. The losetup command associates a loopback
device with a raw disk image file.

We need to point the loopback device at the start of the LVM partition by specifying
an offset in bytes. The “$((…))” syntax lets us do math on the command-line. Here
we multiply the starting sector offset from the mmls output by our 512 byte sector
size (also shown in the mmls output). We tell losetup to just grab the first
available loopback device name (“-f”) and make the device read-only (“-r”). The
read-only switch is actually redundant, since ewfmount only permits read-only
access to the ewf1 file. But it’s good to develop careful habits.

But how do we know which loopback device losetup used? “losetup -a“
displays all currently configured loopback devices and where they are pointing. Ours
is the first loopback device, /dev/loop0 (that’s a zero not an oh).

The file command tells us that the loopback device is pointing to a Linux LVM v2
Physical Volume (“LVM2 PV”). So we are on the right track!

43
ACTIVATE LVM
# pvdisplay /dev/loop0
--- Physical volume ---
PV Name /dev/loop0
VG Name VulnOSv2-vg
PV Size 31.26 GiB / not usable 0

# vgscan
Reading all physical volumes. This may take a while...
Found volume group "RD" using metadata type lvm2
Found volume group "VulnOSv2-vg" using metadata type lvm2
# lvchange -a y VulnOSv2-vg
# lvscan | grep VulnOSv2-vg
ACTIVE '/dev/VulnOSv2-vg/root' [30.51 GiB] inherit
ACTIVE '/dev/VulnOSv2-vg/swap_1' [768.00 MiB] inherit

pvdisplay gives more detail about the LVM physical volume. Of particular interest
is the volume group’s name– we’re going to need this for later commands. In the
example on the slide, the volume group name is “VulnOSv2-vg”.

vgscan automatically scans disk and loopback devices for LVM metadata. The
command finds the “RD” volume group from my local Linux analysis workstation as
well as the “VulnOSv2-vg” volume group from our forensic image.

Activate an LVM volume group with “lvchange -a y”. Activation assigns each of
the different volumes within the LVM configuration to a Linux device node. We can
see the various node names in the output of lvscan. By default, the device node
path will always contain the volume group name.

The device nodes you see on the slide are the actual Linux file systems. If you wanted
to acquire an image of the raw file system, then use ewfacquire or dc3dd on
/dev/VulnOSv2-vg/root. But I’m more interested in mounting this file system
so that I can find and extract artifacts with standard Linux command-line tools.

44
CHECK THE FILE SYSTEM
# fsstat /dev/VulnOSv2-vg/root
FILE SYSTEM INFORMATION
--------------------------------------------
File System Type: Ext4
Volume Name:
Volume ID: 46c34db340bee5aa35423fd055183259

Last Written at: 2019-10-05 05:41:50 (EDT)
Last Checked at: 2016-04-03 12:05:48 (EDT)
Last Mounted at: 2019-10-05 05:41:50 (EDT)
Unmounted properly
Last mounted on: /

Here I’m using fsstat to confirm the device nodes were set up properly. Looks like
an EXT4 file system, last mounted as “/” (the root file system).

Note that the fsstat output says the file system was “Unmounted properly”.
So mounting it should be easy. Unfortunately, this turns out not to be the case, as we
will see on the next slide.

45
DIRTY, DIRTY FILE SYSTEMS
# mkdir /mnt/test/data
# mount -o ro,noexec /dev/VulnOSv2-vg/root /mnt/test/data
mount: wrong fs type, bad option, bad superblock on /dev/mapper/Vuln…
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
# dmesg | tail
[13458…] EXT4-fs (dm-6): INFO: recovery required on readonly filesystem
[13458…] EXT4-fs (dm-6): write access will be enabled during recovery
[13458…] Buffer I/O error on device dm-6, logical block 0
[13458…] lost page write due to I/O error on dm-6

[13458…] JBD2: recovery failed
[13458…] EXT4-fs (dm-6): error loading journal

When I attempt to mount the file system, I use the “ro” (“read-only”) switch even
though the loopback device and the underlying “ewf1” file are also set to read-only.
Practice good forensic habits!

Another good habit when analyzing Linux disk images from Linux systems is to use
the “noexec” flag, which is a software switch that prevents executing programs
from the mounted disk image. You wouldn’t want to inadvertently run malware from
the image you were investigating!

Unfortunately, the mount command fails. Digging into the matter with dmesg, it
appears that the file system is dirty after all, with an unreplayed journal (despite what
fsstat told us). Note that the EXT4 driver is trying to make the file system writable in order to clean up the file
system– despite our “ro” option! Happily both the loopback device and ewfmount
are blocking any changes, so our mount command just errors out.

As bad as this looks, there is a work-around which we can use to get the file system
mounted. More on that on the next slide.

46
THE DIRTY SECRET
# mount -o ro,noexec,noload /dev/VulnOSv2-vg/root /mnt/test/data
# ls /mnt/test/data
bin dev home lib media opt root sbin sys usr
boot etc initrd.img lost+found mnt proc run srv tmp var
#
# mount -o ro,noexec,loop,offset=$((2048*512)) \
    /mnt/test/img/ewf1 /mnt/test/data/boot
# ls /mnt/test/data/boot
abi-3.13.0-24-generic memtest86+.bin
config-3.13.0-24-generic memtest86+.elf
grub memtest86+_multiboot.bin
initrd.img-3.13.0-24-generic System.map-3.13.0-24-generic
lost+found vmlinuz-3.13.0-24-generic

The trick is to also use the “noload” option, which tells the file system driver to
ignore any incomplete transactions in the file system journal. Usually the file system
is in good enough shape to mount, even ignoring the unfinished changes in the
journal.

The first mount command mounts the root file system on our target directory using
the LVM device node name we set up earlier via the lvchange command. The
mount command is silent if everything works, but we can use “ls” to get a directory
listing of the top-level directory.

We can also mount the /boot partition directly. We need to set up a loopback
device for this, but the mount command will accept “loop” and “offset” options
and set up the loopback device for us. If you recall, /boot is an EXT2 file system, and
EXT2 does not have a file system journal. So the “noload” option is not necessary
here.

47
TEARDOWN
# umount /mnt/test/data/boot
# umount /mnt/test/data
#
# vgchange -a n VulnOSv2-vg
0 logical volume(s) in volume group "VulnOSv2-vg" now active
#
# losetup -d /dev/loop0
#
# umount /mnt/test/img

Once you are done investigating, you will want to unmount and discard all of the
various file systems and devices that you have created during this process.

Essentially we do everything in reverse order, using slightly modified commands:

1. Unmount any mounted file systems. We have to umount …/boot before the
OS will let us umount the root file system that /boot is mounted on top of.

2. “vgchange -a n” to deactivate the VulnOSv2-vg volume group and
discard the device nodes associated with the file systems.

3. “losetup -d“ deletes our loopback device which was pointing at the
beginning of the LVM2 volume.

4. Finally, we umount the virtual ewf1 file that ewfmount created under
/mnt/test/img

48
COMMANDS BY LAYER

[Chart: setup and teardown commands by layer]

Setup (outer layer inward):
  E01 image files: ewfmount
  Raw image (physical disk; /boot unencrypted): mmls, fsstat, file
  Software RAID or encrypted volume: losetup -rf, [ mdadm -IRs ], cat /proc/mdstat
  LVM2 volume: pvdisplay, vgscan, vgchange -a y, lvscan
  File systems (/ (swap) /usr /var /home): fsstat, dd (to image), mount

Teardown (innermost layer outward):
  File systems: umount
  LVM2 volume: vgchange -a n
  Software RAID / loopback device: mdadm -S, losetup -d
  E01 image files: umount

Here’s a summary of the commands used to set up and tear down each layer of a
Linux disk configuration. Note that I’ve included commands for interacting with a
Linux software RAID configuration. You’re going to get some practice with that in the
lab exercise!

If you’re looking for a similar chart for dealing with a disk image that includes a Linux
encrypted volume, please see this presentation:

http://deer-run.com/~hal/CEIC-dm-crypt-LVM2.pdf

49
LAB – DISK MOUNTING
Let’s try something a little more challenging…

Let’s try this with multiple disks in a software RAID configuration!

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

50
“QUICK HIT” DISK ARTIFACTS

Now that we have our file systems mounted, let's do some quick triage and perhaps
find some evil!

51
IMPORTANT DIRECTORIES
/etc [%SystemRoot%/System32/config]
Primary system configuration directory
Separate configuration files/dirs for each app

/var/log [Windows event logs]


Security logs, application logs, etc
Logs normally kept for about 4-5 weeks

/home/$USER [%USERPROFILE%]
/root
User data and user configuration information

There are potentially interesting artifacts all over the Linux file system, but the most
important items tend to cluster in a few directories. Although things are not exactly
the same, I’m also trying to give you the closest Windows equivalents to some of
these directories.

/etc is where system and application configuration data tends to live. Applications
will typically put their configuration files in directories under /etc. For example,
/etc/apache2 or /etc/httpd for the web server configuration.

Critical system logs live under /var/log. We will have a lot more to say about logs
later on in this course.

User home directories are generally found under /home. The exception is that the
home directory for the “root” (administrative) user is /root.

Also look out for what’s happening in /tmp and /var/tmp. Exploits that do not
gain system-level privileges will often write payloads into these directories. You’ll be
finding a lot of cryptocurrency miners running out of /tmp!

52
BASIC SYSTEM INFO
Linux distro name/version number:
/etc/*-release

Computer name:
/etc/hostname
Also log entries under /var/log

IP address(es):
/etc/hosts (static assignments)
/var/lib/NetworkManager (DHCP)
/var/lib/dhclient or …/dhcp

It’s often important to know which version of Linux the system is running. Not only do
some artifacts change location depending on the version of Linux, knowing the Linux
version can also inform you as to which vulnerabilities your adversary might be
exploiting. Linux systems generally have a file called /etc/<something>-release
that contains version information. It’s /etc/redhat-release on RedHat
Enterprise Linux, Fedora, and CentOS. It’s /etc/lsb-release on Debian and
Ubuntu.

The system hostname is usually found in /etc/hostname. Standard Linux log
messages also include the hostname. Older logs (possibly recovered from
unallocated) might show if the system's name has been changed.

If the system uses a statically assigned IP address, it is usually found in the
/etc/hosts file. DHCP lease information is typically found in
/var/lib/NetworkManager on recent Linux systems, and
/var/lib/dhclient or /var/lib/dhcp on older versions of Linux. Note that
Linux systems will often keep a long history of DHCP lease information–
possibly as far back as the initial system install! This is great for putting the system at
a particular place at a particular time.
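
Assuming the image is mounted read-only under /mnt/test/data as in the earlier
examples, pulling these artifacts is straightforward:

$ cat /mnt/test/data/etc/*-release        # distro name and version
$ cat /mnt/test/data/etc/hostname         # system name
$ ls -l /mnt/test/data/var/lib/NetworkManager/   # DHCP lease files (older systems:
                                                 # var/lib/dhclient or var/lib/dhcp)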

53
INSTALLATION DATE/TIME
Linux OS does not generally track installation date/time

Create time of /lost+found is good proxy for system install

Timestamps on SSH host keys typically indicate first boot


Key files are /etc/ssh/ssh_host_*_key

Unlike Windows, which tracks system installation date/time in the registry, Linux
systems generally do not save information regarding the system installation date. So
we are left with using proxies to infer the installation date.

The “lost+found” directory at the top of each file system is created when the file
system is made– generally during the system install. Linux file systems did not have
creation dates until EXT4, but since the lost+found directory is generally
untouched once it is created, the last modified (mtime) on the directory is usually
sufficient. On EXT4, you can see the creation dates using debugfs. Here’s an
example that uses the file systems we mounted in the last section:

# debugfs -R 'stat /lost+found' /dev/VulnOSv2-vg/root
[…]
ctime: 0x57013f5c:00000000 -- Sun Apr 3 12:05:48 2016
atime: 0x57013f5c:00000000 -- Sun Apr 3 12:05:48 2016
mtime: 0x57013f5c:00000000 -- Sun Apr 3 12:05:48 2016
crtime: 0x57013f5c:00000000 -- Sun Apr 3 12:05:48 2016
[…]

54
The SSH host keys found under /etc/ssh are usually created the first time the
system boots. So timestamps on these files are another way to assess the age of the
system:

# debugfs -R 'stat /etc/ssh/ssh_host_rsa_key' /dev/VulnOSv2-vg/root
[…]
ctime: 0x571239be:dadad5d0 -- Sat Apr 16 09:10:22 2016
atime: 0x5d986563:2c09dc10 -- Sat Oct 5 05:41:55 2019
mtime: 0x571239be:dadad5d0 -- Sat Apr 16 09:10:22 2016
crtime: 0x571239be:dadad5d0 -- Sat Apr 16 09:10:22 2016
[…]

So we have a system image that was installed on April 3, 2016 but apparently not
booted until April 16. This was a virtual machine image that may have been cloned
and booted multiple times from a common baseline image.

Note that while Linux file systems store timestamps internally in UTC, Linux
command-line programs default to displaying times in whatever the default time zone
for your analysis workstation might be. But you can have commands display in
whatever time zone you feel like by using the TZ environment variable:

# date
Wed Feb 26 13:33:17 EST 2020
# ls -l /etc/passwd
-rw-r--r-- 1 root root 2095 Jan 29 16:12 /etc/passwd
# export TZ=UTC
# date
Wed Feb 26 18:33:34 UTC 2020
# ls -l /etc/passwd
-rw-r--r-- 1 root root 2095 Jan 29 21:12 /etc/passwd

55
DEFAULT TIME ZONE
System logs written in default time zone for machine

/etc/localtime stores default time zone data

Binary file format:


Use "zdump" on Linux
“strings -a /etc/localtime” often works
Look for matching file under /usr/share/zoneinfo

And speaking of time zones, it is important you know the default time zone for the
system you are investigating. Linux log files and other important artifacts contain
timestamps written in the local time zone for the machine.

The system default time zone is stored in the /etc/localtime file. This file is in a
binary format. While running “strings” on the file will often give you clues, the
easiest thing to do if you are running from a Linux analysis host is to use the zdump
command:

# zdump /mnt/test/data/etc/localtime
/mnt/test/data/etc/localtime Wed Feb 26 19:40:14 2020 CET

It looks like our sample image was set to Central European Time (CET).
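
If zdump leaves any doubt, you can match the file against your analysis workstation's
zoneinfo database by checksum. A minimal sketch (several zone files are aliases of
one another, so expect multiple matches):

$ sum=$(md5sum /mnt/test/data/etc/localtime | awk '{print $1}')
$ find /usr/share/zoneinfo -type f -exec md5sum {} + | grep "$sum"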

56
POST-EXPLOITATION GOALS
Back doors

Persistent malware

Now that we have a good idea of the basic configuration of the system, let’s go
hunting for evil.

In general, attackers will want some sort of back-door access into the compromised
system and a way for their malware to be started automatically after the system
boots. Note that neither one of these is necessarily a given—I’ve seen cryptocurrency
miners dropped onto systems opportunistically with no particular care given to
persistence. I suppose the attackers feel that they could just re-compromise the
system and drop another miner.

57
COMMON BACK DOORS
Custom malware installs
New or replacement binaries
Web shells

Account modification
New (admin) accounts added
Application role accounts unlocked
Enhanced “sudo” access privileges
$HOME/.ssh/authorized_keys entries added

Back doors could take the form of custom malware implants. A web shell is often the
easiest route, particularly if the attacker is exploiting a web app vulnerability to gain
access. Another common back door in the Linux universe is a replacement SSH
service with a hard-coded username/password for gaining admin access.

Another back-door approach is leveraging existing accounts– particularly application
accounts like www or mysql that are normally locked. If the attacker sets a re-usable
password on these accounts, they could use them to access the system remotely.
Creating an authorized_keys entry in the user’s home directory is another way
of opening up access to the account.

Any account with user ID zero has admin-level access. Normally there should be only
a single “root” account with UID 0 in the password file, but multiple UID 0 accounts
are allowed. “sort -t : -k 3 -n /etc/passwd” will sort the passwd file
numerically by UID, so it will be easy to see UID 0 accounts, even if your attacker adds
them in the middle of the file.

Note that the sudo command also gives admin privileges. Look for modifications to
/etc/sudoers or groups this file refers to in /etc/group such as “admin” or
“wheel” group entries.
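
A few quick checks along these lines, assuming the image is mounted read-only at
/mnt/test/data as in earlier sections (paths are illustrative):

# sort -t: -k3 -n /mnt/test/data/etc/passwd | head
# awk -F: '$2 ~ /^\$/ {print $1}' /mnt/test/data/etc/shadow
# find /mnt/test/data -name authorized_keys

The first command floats any extra UID 0 accounts to the top of the output, the
second prints accounts whose shadow entry contains a usable password hash (locked
accounts typically show "!" or "*" instead), and the third locates authorized_keys
files that may hold attacker keys.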

58
PERSISTING MALWARE
Service start-up scripts
/etc/systemd/system, (systemd)
/usr/lib/systemd/system
/etc/init* (traditional and Upstart)

Scheduled tasks
/etc/cron*
/var/spool/cron/crontabs
/var/spool/cron/atjobs

Attackers may use the normal service start-up mechanisms to restart their malware.
On modern Linux systems that use Systemd, service startup configuration is found
under /usr/lib/systemd/system and /etc/systemd/system. Older
systems use configuration files under directories named /etc/init*.

Look for recent changes to files under these directories. Note that in some cases
these files may invoke other scripts that might have been modified by the attacker.
This is much less obvious than the attacker modifying the start-up configuration files
themselves.

Scheduled tasks can also be used to start persistent malware. There are multiple
places to look because Linux systems operate multiple task-scheduling systems in
parallel. Again, attackers may modify scripts invoked by legitimate scheduled tasks
rather than creating or modifying the scheduled tasks directly.
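
As a starting point, a few commands along these lines can surface recent changes in
the usual start-up and scheduling locations. This is a sketch assuming the image is
mounted at /mnt/test/data; adjust the paths to what actually exists on the system
you are examining:

# ls -lArt /mnt/test/data/etc/systemd/system /mnt/test/data/usr/lib/systemd/system
# ls -lArt /mnt/test/data/etc/cron* /mnt/test/data/var/spool/cron/crontabs
# cat /mnt/test/data/var/spool/cron/crontabs/*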

59
RECENT MODIFICATIONS
find /mnt/test/data -mtime -7
Display files modified in the past week

find /mnt/test/data -newer /mnt/test/data/etc/passwd


Display files modified after target file

ls -lArt /mnt/test/etc
Directory listing sorted by mtime, oldest first

So it’s a good idea to look for any recent modifications to the system. Yes, an attacker
with admin-level access can reset file timestamps, but it’s amazing how often they
don’t bother.

The first find command will walk downwards from the given directory and display
all files and directories modified (“-mtime”) less than seven days (“-7”, “more than
seven” would be “+7”) ago.

While find options like “-mtime” only work on one day granularity, the “-newer”
option lets you discover files that were modified after some other file. This is helpful
if you can pinpoint early modifications by the attacker in the file system. You can also
create your own timestamped file using “touch -t YYYYMMDDhhmm.ss
<filename>” and then use that as an argument to “-newer”. That enables you to
establish an exact base time you want to search forward from with more granularity
than “-mtime”.
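
For example, a sketch of that technique, assuming GNU find on the analysis host and
an arbitrary pivot time of 05:00 on Oct 5 2019:

# touch -t 201910050500.00 /tmp/pivot
# find /mnt/test/data -newer /tmp/pivot -printf '%T+ %p\n' | sort

The -printf format prints each file's mtime followed by its path, so sorting the
output gives you a quick mini-timeline of everything changed after the pivot time.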

It is often convenient to list a directory in order of last modification rather than
alphabetically. “ls -rt” is a reverse sort by time (oldest to newest), “-l” gives file
details (that’s an el for “long listing”), and “-A” shows “hidden” files whose name
starts with a period.

60
LAB – DISK TRIAGE
How quickly can you find evil?

Get some practice profiling systems and quickly finding artifacts of compromise.

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

61
TIMELINE ANALYSIS

Timeline analysis is a fast way to find intrusion artifacts during an investigation.

62
ALL HAIL TIMELINE ANALYSIS!
Attackers leave breadcrumbs all over:
Program installation and execution
File modification
User account usage

A timeline puts the breadcrumbs in chronological order


Helps tell the story of your compromise
Directs you to important evidence

Many attacker activities during an intrusion leave tracks behind in the file system. For
example:

• An exploit may drop a web shell onto your system. The creation date on the file
containing the web shell helps date the start of the incident.

• The attacker may then use the web shell to download additional malware, which
will have its own set of timestamps.

• Next the attacker succeeds at privilege escalation and suddenly root-owned files
on the system begin being updated.

• The attacker modifies configuration files, leaves behind back-doors, etc.

A timeline shows you these changes in chronological order and helps tell the story of
what happened. It directs you to files that were modified or added by the attacker
that you may not have seen yet.

63
STANDARD TIMESTAMPS
Last modified time (M)
Last time the file contents were changed

Last access time (A)


Last time the file was viewed/executed*

Metadata change time (C)


Last inode update (chown, chmod, …)

Creation time (B)


Date/time of file creation (EXT4 only)

Timestamps are created using the four standard file timestamp types:

Last modified (mtime) – The last time the content of the file was changed. For
example, when a new file is created or you use an editor to make changes to a file.

Last access time (atime) – The last time the contents of a file have been read. If the
file is a program or script, atime usually represents the last time the program was
executed. However, Linux systems generally do not update atimes every time the file
is read, as we will discuss below.

Metadata change time (ctime) – The last time the metadata about the file was updated. For
example, changing the file owner with chown or the file permissions with chmod.

Creation time (btime) – The date the file was created. Creation time is generally
referred to as the “btime” (born-on date) to distinguish it from the metadata change
time (ctime). However, some Linux commands (like debugfs) refer to this
timestamp as “crtime”. btime was only added to EXT file systems with EXT4 (it is also
found in modern versions of XFS).
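
On a mounted image, the stat command is a quick way to see all four timestamps for
a single file. This assumes a reasonably recent GNU coreutils; older versions cannot
display the Birth time and show “-” instead:

# stat /mnt/test/data/etc/passwd

The output includes separate Access, Modify, Change, and Birth lines for the file.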

64
File system developers have realized that updating atime on every single file access is
inefficient, because it means you have to write the update into the file system even
when the file is just being read (or executed) over and over again. Windows NTFS
stopped updating atimes back in Windows 7.

Linux systems typically use a file system option known as “relatime”. With this option,
atimes are updated on file access if either:

1. The atime is older than the mtime or ctime (hence a “relative atime update” or
“relatime”)—this is designed for programs like mail readers that want to know if
the file has been accessed since it was last updated.

2. The atime was last updated more than 24 hours ago.

So atimes in Linux are only updated on an occasional basis, but are still sometimes
useful. For example, atime updates on programs that are not commonly used or on
malware dropped by the attacker can still be important artifacts of execution in your
timeline.

65
TIMELINE CAVEATS
Timestamps are ephemeral
You only get the last modified time, change time, etc
Normal system usage will update timestamps
Admin users may change timestamps at will

Analyst needs understanding of typical system behaviors

Timelines are a guide to evidence, not evidence themselves

It’s important to understand the limitations of timelines. Remember that you only get
the last modified or access time on a file. It’s possible that the attacker modifies
/etc/shadow to set a password on an account like the “postgres” database user.
But then a regular user might come along and change their password, updating the
mtime on the file. You’ve lost a potentially useful piece of information—when the
attacker updated /etc/shadow– and you now have a “hole” or “gap” in your
timeline.

Also, timestamps on files can be updated arbitrarily by the superuser. The touch
command allows root to set the atime or mtime to any time desired. debugfs gives
the ability to update any timestamp (for examples see
http://blog.commandlinekungfu.com/2010/02/episode-80-time-bandits.html).
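
As an illustration of how little effort this takes (the target file name here is
hypothetical):

# touch -m -t 201604161200.00 /tmp/dropped_malware
# touch -a -r /etc/hostname /tmp/dropped_malware

The first command back-dates the mtime to an arbitrary value; the second copies the
atime from an unrelated reference file.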

Even without attacker anti-forensics, reading a timeline requires an understanding
of typical system behaviors. You see an atime update on debugfs that was probably
due to enemy activity, but what does that command do for the attacker in the context
of the incident? What does it mean if the “set-UID” bit is turned on for an
executable?

So it takes an experienced technical analyst to understand what the timeline is saying.


It’s unlikely that you’ll be using your timeline as direct evidence. But it’s a great guide
to help you find evidence!

66
HOW TO TIMELINE

1. Collect raw data into a body file

2. Create chronological output (usually as CSV)

3. Jump to key pivot points for analysis

To create a timeline you first need to extract the raw timestamp data into a file. These
files are often referred to as body files. The name comes from an early Open Source
forensic toolkit called the Coroner’s Toolkit (TCT). TCT contained a program called
graverobber for extracting timeline information. And what do grave robbers
steal? They steal bodies of course! The name body file has stuck even though we
don’t use graverobber anymore.

Once you have your body file(s), we need a tool to create the sorted timeline.
Timelines are often created as CSV files, which are easier to search, filter, and
annotate. Some analysts use MS Excel to read these CSV timelines, but a better option is
Eric Zimmerman’s free Timeline Explorer tool (TLE). TLE is much faster, especially with
large timelines, and has powerful sorting, filtering, and tagging capabilities. For all of
Eric’s great tools, visit https://ericzimmerman.github.io/

But where do you start looking? Hopefully your earlier triage will give you some
places to start. For example, in a previous lab exercise we investigated attacker
changes to the /etc/passwd and /etc/shadow files. So jump to the last mtime
update on these files and look at what else was happening around that same time. Or
look for the creation time of malware the attacker might have left behind. We call
these kinds of markers pivot points– they are the starting points for your analysis.

67
STEP 1 – COLLECT DATA
# mkdir /cases/timeline
# cd /cases/timeline
# fls -r -m / /dev/mapper/VulnOSv2--vg-root | gzip >bodyfile-root.gz
# fls -o 2048 -r -m /boot /mnt/test/img/ewf1 | gzip >bodyfile-boot.gz

-r   Recursive: process all files/directories
-m   mactime format & mount prefix
-o   Sector offset (from mmls)
Last argument is a file system of a recognized type

Body files are quickly generated with a Sleuthkit tool call fls. Standard arguments
include:

• “-r” to recursively read through the entire file system (rather than just dumping
information from the top-level directory, which is the default). You want to be sure
to collect evidence from all files and directories.

• “-m <mntpt>” to specify the output format of fls should be in mactime format
(which is simply a pipe-delimited text file). We will be using mactime in the next
step to make our timeline. The <mntpt> argument to -m is the path the file system
is normally mounted on—see the second example on the slide where we are
dumping data from /boot. The mount pathname will be added to the front of the
file paths in the fls output so that the path names are consistent with the way
the file system was used on the live machine.

• “-o” lets you specify a sector offset into a full disk image to find the start of the
file system

68
You must also specify a raw file system of a type TSK tools can recognize. The EXT4
/boot file system can be accessed directly from the raw disk image created by
ewfmount (and if TSK is compiled with libewf support, it can read the E01 files
directly). But TSK doesn’t understand Linux LVM, so we must first associate the logical
volumes with disk devices that fls can read.
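
One common approach to making those logical volumes visible, sketched here with a
hypothetical loop device name, is to attach the raw image read-only as a loop device,
map its partitions, and then activate the volume group with the standard LVM tools:

# losetup -r -f --show /mnt/test/img/ewf1
# kpartx -a -v /dev/loop0
# vgscan
# vgchange -a y VulnOSv2-vg

losetup prints the loop device it allocated (/dev/loop0 is assumed above). After
vgchange, device nodes such as /dev/mapper/VulnOSv2--vg-root become available
for fls to read.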

Note that because mactime format body files are just plain ASCII text, they
compress very well. So we are gzip-ing them to save space.

While some analysts will concatenate all of their body file data into a single large file,
I prefer to dump each file system as a separate body file. That way, if I mess up one
command, I only have to rebuild that one body file. Otherwise the bad data from my
one wrong command might pollute the file with all of my other good data.

69
STEP 2 – BUILD TIMELINE

# zcat bodyfile-* | mactime -d -y \
    -p /mnt/test/data/etc/passwd \
    -g /mnt/test/data/etc/group \
    2019-10-01 >timeline.csv

-d              CSV (delimited) output
-y              ISO 8601 dates in UTC
-p / -g         Location of passwd/group files, to see names not UIDs and GIDs
2019-10-01      Build timeline from this date onwards
>timeline.csv   Save output to file

Once we have all of our body file data collected, we feed it into the mactime tool to
produce our timeline. Here I’m using zcat to uncompress the body files I made in
the previous step and piping the uncompressed output into mactime.

Useful arguments to mactime include:

• “-d” to produce delimited (CSV) output


• “-y” for ISO 8601 date output in UTC (2019-10-05T11:31:37Z)
• “-p” and “-g” to specify the location of the passwd and group files from the
image you are analyzing so you see the right user and group names in the output

You may optionally specify a single date as we are doing in the example on the slide
or a date range (2019-10-01..2019-11-01). Single date means create the timeline
using only timestamps from that date forwards. Range of dates means only output
times within the dates specified.

Output normally goes to the standard output (the terminal). Here we are using
output redirection to save the output in a file.

70
STEP 3 – ANALYZE!
Questions to answer:
How/when did the attacker breach the system?
How/when did they gain root access?

What are you looking for?


Suspicious file/directory creation or modification
Evidence of program installation and/or execution

Use your pivot points to begin your analysis

Once you have your timeline, the rest of the work is analysis. Ultimately, intrusion
analysis tries to answer at least two important questions– how did they break in and
how did they get admin privileges? “What did they take?” is another question that is
often asked. The kind of evidence you can see in the timeline is changes to the file
system– attackers adding files or directories, modifying or replacing existing files,
making permissions changes, etc.

To find the evidence, think about possible pivot points in the timeline based on what
you already know from your triage:

• If the attacker is running custom malware, look for the btime of the malicious
executable and possibly its installation directory.

• Our attacker added accounts, modifying /etc/passwd and /etc/shadow–
start at the mtime updates on these files.

• Maybe you have an IDS alert or information from your logs that indicate attacker
activity. Jump to these times in your timeline and see what was happening in the
file system.

71
LAB – TIMELINE ANALYSIS
Jump in and swim!

The best way to learn timeline analysis is to try it yourself… with a little expert
guidance!

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

72
CORE LOG ANALYSIS

Logs are an essential part of the forensic analysis of any operating system.

73
LINUX LOGS
Generally found under /var/log

Logs are primarily text


Easy to modify and manipulate

Logging is discretionary
Amount and format of logs left to developers

Linux logs are generally found under /var/log. This is largely convention,
however– they could be written anywhere in the file system and you will find them
other places on other Unix-like operating systems.

Unix logs are usually simple text files. It is very easy for attackers who have obtained
admin access to edit or simply remove log files. Attackers have even created tools to
modify the common binary log formats on Linux which we will be discussing shortly.
So it is a good idea to ship copies of your system logs to some other protected storage
area.

Linux logging is discretionary– the software developers decide what they are going to
log and the format they are going to log it in. This can make automated log analysis
frustrating, because the logs are so free-form. And of course attacker tools are not
going to provide helpful logging information because they don’t have to.

74
LAST LOGIN HISTORY
wtmp – User logins and system reboots [read with last]
File may be truncated weekly or monthly

btmp – Failed logins [read with lastb]


Often not kept due to risk of password disclosure

lastlog – Last login for each user [read with lastlog]


Varying formats make decoding tricky

The /var/log/wtmp file stores a record of login sessions and reboots. It is in a
special binary format, so you have to use the last command to dump out
information:

# last -if /mnt/test/data/var/log/wtmp


mail pts/1 192.168.210.131 Sat Oct 5 07:23 - 07:24 (00:00)
mail pts/1 192.168.210.131 Sat Oct 5 07:21 - 07:21 (00:00)
mail pts/1 192.168.210.131 Sat Oct 5 07:18 - 07:19 (00:00)
mail pts/1 192.168.210.131 Sat Oct 5 07:13 - 07:18 (00:04)
reboot system boot 0.0.0.0 Sat Oct 5 05:41 - 13:42 (155+…)
root tty1 0.0.0.0 Wed May 4 13:36 - down (00:01)
vulnosad pts/0 192.168.56.101 Wed May 4 13:35 - 13:36 (00:00)
root tty1 0.0.0.0 Wed May 4 13:34 - 13:34 (00:00)
reboot system boot 0.0.0.0 Wed May 4 13:33 - 13:37 (00:03)
root pts/0 192.168.56.101 Wed May 4 13:01 - down (00:06)
vulnosad pts/0 192.168.56.101 Wed May 4 12:57 - 13:00 (00:03)
reboot system boot 0.0.0.0 Wed May 4 12:56 - 13:07 (00:10)
[…]

The “-i” flag shows IP addresses rather than hostnames, and “-f” allows you to
specify a file path that is not the default /var/log/wtmp file. last shows the
newest logins first.

75
The first part of the output shows remote logins by the “mail” account from IP
address 192.168.210.131. Then we see a system reboot in the log. The next line is a
login by “root” on the local text-mode console of the system– “tty1” (if the login had
occurred on the graphical console you would see “:0” in the IP address column).

The btmp file stores information about failed logins, but it does not exist by default.
Many administrators choose not to enable btmp logging because it can sometimes
disclose user passwords– how many times have you accidentally typed your password
into the username field? If you have a btmp file, you can read it with the lastb
command:

# lastb -if /mnt/test/data/var/log/btmp


mail ssh:notty 192.168.210.131 Sat Oct 5 07:20 - 07:20 (00:00)
root ssh:notty 192.168.210.131 Sat Oct 5 06:52 - 06:52 (00:00)
root ssh:notty 192.168.210.131 Sat Oct 5 06:52 - 06:52 (00:00)
root ssh:notty 192.168.210.131 Sat Oct 5 06:52 - 06:52 (00:00)
root ssh:notty 192.168.210.131 Sat Oct 5 06:52 - 06:52 (00:00)
[…]

We can see a failed login for user “mail” and multiple failed “root” logins, all
originating from IP address 192.168.210.131.

The lastlog file stores last login information for each user on the system. The file
can appear to be huge, but it is actually a sparse file– the offset to any user record is
their UID times the size of the lastlog record. You read the file with the lastlog
command, which simply goes line by line through the password file and dumps the
lastlog record for each UID it finds there. That means if you are not using the
password file from the system the /var/log/lastlog file was taken from, or if
you are but there were existing user accounts that have been deleted from that
password file, then you may not be seeing all the data in the file.

The biggest problem, however, is that the format of the lastlog file is highly
variable. The version of Linux you are running as well as the processor architecture
that the lastlog file was written on can affect the size of the lastlog records
and impact your ability to read the file. If the lastlog program on your analysis
workstation fails to read the lastlog file, you can use the lastlog program in
the image you are analyzing, but only after temporarily disabling the “noexec” flag
we set earlier:

76
# mount | grep vg-root
/dev/mapper/VulnOSv2--vg-root on /mnt/test/data type ext4 (ro,noexec,noload)
# mount -o remount,ro,noload /dev/mapper/VulnOSv2--vg-root /mnt/test/data
# mount | grep vg-root
/dev/mapper/VulnOSv2--vg-root on /mnt/test/data type ext4 (ro,noload)
# /mnt/test/data/usr/bin/lastlog -R /mnt/test/data
Username Port From Latest
root tty1 Wed May 4 19:36:39 +0200 2016
daemon **Never logged in**
[…]
mail pts/1 192.168.210.131 Sat Oct 5 13:23:34 +0200 2019
news **Never logged in**
[…]
vulnosadmin pts/0 192.168.56.101 Wed May 4 19:35:16 +0200 2016
mysql **Never logged in**
webmin tty1 Wed May 4 10:41:07 +0200 2016
sshd **Never logged in**
postfix **Never logged in**
postgres **Never logged in**
# mount -o remount,ro,noexec,noload /dev/mapper/VulnOSv2--vg-root
/mnt/test/data
# mount | grep vg-root
/dev/mapper/VulnOSv2--vg-root on /mnt/test/data type ext4 (ro,noexec,noload)

We can use the “remount” option to the mount command to change mount options on the
fly, without actually mounting the file system again. Once we allow execution from the file
system, we can invoke the lastlog command from our forensic image. The “-R” flag tells
the command to chroot() (“change root”)—in other words, treat the /mnt/test/data
directory as if it was the root of the file system. This allows the lastlog command we are
running to find both the passwd file and the /var/log/lastlog in the expected
locations.

77
SYSLOG
Syslog is the background service that receives/routes logs

Destination is usually local log files


Default is restart logs weekly, keep four previous weeks

Can also route logs to other hosts over the network


Always a good idea to aggregate longer term log history

The primary logging service on Linux systems is Syslog. It runs as a background


service and receives log messages from various processes on the system (and the OS
kernel) and then routes the logs messages to different destinations.

Typically, log messages are stored in text files on the local system. There is an external
“log rotation” process that is responsible for making sure the log files don’t grow
forever and fill up the disk. Log rotation usually happens weekly– the old log file is
renamed and sometimes compressed, and a new log file is started. Traditionally,
Linux systems will keep four weeks of old log files in addition to the log file that is
currently being written to by Syslog. So you’ll find about a month worth of logs under
/var/log. If /var/log/secure is the primary file name, you’ll find the older
logs in files named secure.1 through secure.4, with secure.4 holding the
oldest log messages.

However, there is also a Syslog network protocol that allows routing logs over the
network to a Syslog service on another host. This is useful for aggregating your logs
together into a SIEM tool or other log analysis platform—collected logs have huge
value during an investigation. Having a copy of your logs on a different system also
helps protect them from attackers destroying the logs on the local machine.

78
SYSLOG CONFIGURATION
Type of log messages by
“facility” and “priority”
Local file destinations
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
#daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
#lpr.* -/var/log/lpr.log
mail.* -/var/log/mail.log

auth,authpriv.*                @loghost    Send logs to
*.notice;auth,authpriv.none    @loghost    remote host

Here is part of a typical Syslog configuration file (look for the config files as
/etc/rsyslog* or /etc/syslog-ng*, or /etc/syslog.conf on older
Unix systems). The left column describes what the administrator wants to log and the
right column is the destination where the log messages should be sent.

The left column uses a combination of “facility.priority” to select log messages. The
facility tells something about where the log message came from. For example,
“authpriv” messages are authentication or security messages that are supposed to be
kept private (for administrators only). Messages like these are used to track user
logins, logouts, and privilege escalations and therefore are very interesting to us.
Priority ranges from debug (lowest) all the way up to emergency (highest). The “*” is
a wildcard that means match any facility or priority.

The destination for the log messages can be a file path or a remote hostname (or IP
address) given as “@<hostname>”. When writing to a log file, Syslog normally tries to
flush the log messages immediately to disk. The “-” sign in front of a log file name
means that the logs are less critical and can be buffered before writing to disk. This is
much more efficient in terms of file system performance.

79
SAMPLE LOG MESSAGES
Oct 5 13:13:53 VulnOSv2 sshd[2624]: Accepted password for mail from
192.168.210.131 port 57686 ssh2
Oct 5 13:13:53 VulnOSv2 sshd[2624]: pam_unix(sshd:session): session
opened for user mail by (uid=0)
Oct 5 13:14:04 VulnOSv2 sudo: mail : TTY=pts/1 ; PWD=/var/mail ;
USER=root ; COMMAND=/bin/su -
Oct 5 13:14:04 VulnOSv2 sudo: pam_unix(sudo:session): session opened
for user root by mail(uid=0)
Oct 5 13:14:04 VulnOSv2 su[2721]: pam_unix(su:session): session
opened for user root by mail(uid=0)
Oct 5 13:18:48 VulnOSv2 sshd[2624]: pam_unix(sshd:session): session
closed for user mail

Process [and PID]


Host where log originated
Timestamp in local time zone

Here are some typical log messages from a Linux log file. Each log message starts with
a date/time stamp in the system’s default time zone, the name of the host where the
log message was generated, and the process name and usually the process ID
number of the software that generated the log. The rest of the log message is left up
to the developers, and you can see that the format of the log messages varies widely.

The first two lines of log messages above show user “mail” logging in via SSH from IP
address 192.168.210.131 (port 57686 is the source port on the remote system) at
13:13:53 on Oct 5. The next three lines show user mail doing “sudo /bin/su -”,
which will give them an administrative shell. The last line shows the SSH session
closing at 13:18:48 (use the process ID of the SSH process to match the session
openings and closings in a busy log file).

Notice that the standard date/time stamps do not include the year. I’m assuming that
the original Unix developers believed you wouldn’t keep log messages around longer
than a month, so the year was unimportant. But if you do keep logs for a long time
(or if you are recovering old log messages from unallocated), figuring out the year
data becomes a factor. Sometimes you will see log messages (particularly kernel logs
during the system boot) which contain the year in the text of the log message.

80
The format of the date/time stamp is very regular and can be searched for. This is a
useful trick when trying to recover deleted log messages from unallocated. You can
use the Unix regular expression '[A-Z][a-z]* *[0-9]* *[0-9]*:[0-9]*:[0-9]* *’ to search
for the standard date/time stamp (uppercase letter, lowercase letters, spaces,
number, spaces, number, colon, number, colon, number, spaces).
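
A sketch of that technique, assuming The Sleuth Kit is installed on the analysis
host: extract the unallocated blocks with blkls and grep the printable strings for
the timestamp pattern (the output file name is arbitrary).

# blkls /dev/mapper/VulnOSv2--vg-root | strings |
    grep '[A-Z][a-z]* *[0-9]* *[0-9]*:[0-9]*:[0-9]* *' > recovered-syslog.txt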

Another overlooked part of the log messages is the host name. I’ve had cases where
suspects changed the host name of their machine. I was able to determine the old
host name for the system from older log messages.

Notice that the Syslog facility and priority of each log message is not logged. This
information is only associated with the log message while it is being transmitted.

81
USEFUL LOGS
auth,authpriv.* – All things security-related

kern.* – USB and other device info, firewall logs

cron.* – Scheduled task execution

daemon.* – Other applications and services

Important security messages go to authpriv (auth on older Unix systems). Look for
these messages first.

kern messages will contain information about devices on the system, including USB
devices as they are plugged in (for more details about USB forensics in Linux see
http://blog.commandlinekungfu.com/2010/01/episode-77-usb-history.html).

kern messages also can contain logs from the Linux netfilter firewall, aka IP Tables:

Mar 3 20:54:10 caribou kernel: [559355.051255] [UFW BLOCK]


IN=eth0 OUT= MAC=44:8a:5b:6d:c6:f2:34:db:9c:67:00:8a:08:00
SRC=35.241.56.184 DST=192.168.1.13 LEN=40 TOS=0x00 PREC=0x00
TTL=106 ID=0 DF PROTO=TCP SPT=443 DPT=37871 WINDOW=0
RES=0x00 RST URGP=0

These logs are dense and difficult to read, but they are very regular and can easily be
parsed into a more readable format. A little Google-ing will turn up many tools that
can parse these logs.
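
Even without a dedicated parser, the key=value layout makes these messages easy to
slice with standard tools. A small sketch, assuming the UFW/netfilter messages landed
in kern.log on the mounted image; this counts blocked connections per source address:

# grep 'UFW BLOCK' /mnt/test/data/var/log/kern.log |
    grep -o 'SRC=[^ ]*' | sort | uniq -c | sort -rn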

Scheduled task logs (cron.*) and logs from various system services (daemon.*)
can also be useful.

82
LAB – LOG ANALYSIS
It may not be glamorous, but…

There is all sorts of useful information waiting for you in your logs!

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

83
ADDITIONAL LOGS

Syslog style logs and wtmp/btmp/lastlog are common, but there are other types
of logs you may run into on Linux systems that can be very useful in investigations.

84
OTHER USEFUL LOGS
Web server logs
Often document the initial compromise

Kernel audit logs


Optional mandatory logging, very detailed

Other application logs


Databases, web proxies, …

Given the number of web server exploits, you will be spending a lot of your life
looking at web server logs. Your web server logs can document when the breach
occurred and where the attackers originated from. They may also possibly supply
some details about the nature of the exploit.

Linux systems may have kernel-level auditing enabled. This is similar to Windows
Sysmon. The information is incredibly detailed but can be difficult to understand. Plus
it needs specialized configuration in order to provide the most useful information. If
you are the administrator of a Linux system, you might want to look into enabling this
logging.

Although we won’t have time to cover them here, logs from other services like
databases and proxy servers can also be useful. Proxy servers tend to write logs by
default. Database logging often needs to be increased to be useful—for example
logging individual database queries is not normally enabled by default, but is
incredibly useful after an incident. Even proxy logs can be enhanced by adding
information such as browser user-agent and query string information.

85
WEB LOGS
192.168.210.131 - - [05/Oct/2019:13:17:48 +0200]
"GET /jabc/scripts/update.php HTTP/1.1" 200 223
"http://192.168.210.135/jabc/scripts/"
"Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"

Source of request
Remote user and authenticated user (both usually “-”)
World’s most annoying time and date stamp
Request method, path, and protocol
Returned result code
Bytes sent
HTTP Referer and User Agent (optional)

Linux web servers– whether Apache or Nginx or something else– tend to use a
standard log format developed for the NCSA server in the early days of the web.
There are a lot of things I don’t like about this log format, but it’s what we get by
default. Happily there are a lot of tools that can parse these logs and do useful things
with them.

Web logs are typically found in directories under /var/log like
/var/log/httpd or …/apache* or …/nginx.

Here’s a breakdown of the fields:

• IP address or hostname of the remote host– If you are in control of the web server,
try to turn off DNS lookups and always log the raw IP address.
• Remote user and authenticated user– The remote user was supposed to be
determined using the old “ident” protocol, which nobody supports anymore. The
authenticated user is only known if the user is using HTTP Basic or Digest auth or
some other built-in authentication strategy in your web server (hint: this never
happens in modern web apps). So these fields are almost always “-”, indicating no
information.

86
• Date/time stamp– This has got to be one of the most unhelpful date/time stamp
formats ever. It’s not sortable—day of month first and month abbreviations instead
of numbers? Who does that? It’s written in system local time, but at least the time
zone offset is provided (+0200 here means two hours ahead of UTC).
• Request method, URI path, protocol version
• Result code– 200 is success, 3xx is a redirect to another URL, 4xx is client error
(like “404 Not found”—the client asked for something that didn’t exist), 5xx is
server error (can sometimes indicate an exploit that causes the server to blow up).
• Bytes sent— Note that this is bytes sent not the actual size of the requested
object. For example, a large file transfer may have been interrupted in the middle
and the client is coming back to get the rest of the object they are missing.
• HTTP Referer (optional)– HTTP referer is the web page that contained the link we
clicked on to get to this web page. In the case of embedded elements like images,
style sheets, and javascript, the referer is the web page those elements are used
in. HTTP referer information may not be present in the default log format, but if it’s
your web server, you should definitely make sure referer logging is enabled.
• User-agent string (optional)– The user-agent string from the software making the
web request. Useful for tracking malware that uses unique user-agent strings. Like
referer, this field is optional and may need to be enabled in your logs.

There is one more important thing to note about timestamps in web logs. The
timestamp is set at the time of the web request, but the log message is only put into
the log file when the web server finishes processing the request. That means that it is
possible to see web log messages with timestamps out of chronological order when
you have web requests that take a long time to complete.
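
Because the format is so regular, simple awk and sort pipelines go a long way. A
couple of sketches, assuming an Apache-style access log at the path shown and the
default combined format with no extra fields (so the status code is field 9):

# awk '{print $1}' /mnt/test/data/var/log/apache2/access.log | sort | uniq -c | sort -rn | head
# awk '$9 >= 500' /mnt/test/data/var/log/apache2/access.log

The first pipeline lists the busiest client IP addresses; the second pulls out
requests that produced server errors, which sometimes mark exploit attempts.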

87
DON’T FORGET ERROR LOGS!
[…]
PHP Notice: Use of undefined constant
aygiTmxlbiIsICRsZW4pOyAkbGVuID0gJGFbJ2xlbiddOyAkYiA9ICcnOyB3aGlsZS
Aoc3RybGVuKCRiKSA8ICRsZW4pIHsgc3dpdGNoICgkc190eXBlKSB7IGNhc2UgJ3N0
cmVhbSc6ICRiIC49IGZyZWFkKCRzLCAkbGVuLXN0cmxlbigkYikpOyBicmVhazsgY2
FzZSAnc29ja2V0JzogJGIgLj0gc29ja2V0X3JlYWQoJHMsICRsZW4tc3RybGVuKC…
[Sat Oct 05 13:17:48.483593 2019] [:error] [pid 1789]
[client 192.168.210.131:41888] PHP Warning: system():
Cannot execute a blank command in
/var/www/html/jabc/scripts/update.php on line 2,
referer: http://192.168.210.135/jabc/scripts/
[…]

Also check your web server error logs. These logs tend to have no regular format, as
you can see from the sample messages above. But they will collect information about
some web exploits launched at your server.

In the first log message on the slide you see some base64 encoded exploit code. The
second line shows the attackers trying to exploit the update.php script.

88
LINUX KERNEL AUDITING
Kernel-level activity monitor can see everything
System booting
User logins and privilege change/escalation
Scheduled task execution
SELINUX security policy violations

With additional configuration can log


File access, modification, execution
Any specific system call(s) across all processes
User keystrokes
Locally defined tags or keywords for later searching

Linux kernel auditing is an optional type of logging that may be enabled on some
servers. I recommend enabling it on servers you control. If enabled, you will find the
audit logs under /var/log/audit by default. Kernel auditing is mandatory– it
happens independent of the application and is not left up to the software developer,
but instead is configured by the system admin.

With no special configuration, kernel auditing will log user login/logout activity and
privilege escalations, as well as scheduled tasks taking on various user roles. If you
are running SELinux (even in “permissive” or non-blocking mode), the SELinux logs end
up in your audit logs as well. Note that attackers may forget to edit user login history
in the audit logs when they are trashing your Syslog-style logs—comparing the two
logs is sometimes enlightening.

However, you can also enhance logging levels to log file access, process execution,
track specific system calls (the “ausyscall --dump“ command will give you a list
of system calls you can trace), and even perform keystroke logging (look for
documentation on the pam_tty_audit module). Sample configurations can be
found in the CIS Benchmark Guide for Red Hat systems (cisecurity.org) and
https://github.com/bfuzzy/auditd-attack

Another useful note is that you can add your own keywords for individual rules in
your audit configuration. A good set of unique keywords can make searching your
audit logs much easier during an incident or a hunt.
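
For reference, rules of this sort normally live in /etc/audit/rules.d/ (or
/etc/audit/audit.rules), and the keyword is attached with -k. A minimal sketch of
watch rules similar to the “auth-files” key used in the example that follows:

-w /etc/passwd -p wa -k auth-files
-w /etc/shadow -p wa -k auth-files
-w /etc/sudoers -p wa -k auth-files

Each -w rule watches a file, -p wa triggers on writes and attribute changes, and the
-k keyword is what you later search for with ausearch.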

89
ALL HAIL AUSEARCH!
# ausearch -if /mnt/evidence/var/log/audit -c useradd
----
time->Thu Feb 20 13:26:44 2020
type=PROCTITLE msg=audit(1582223204.906:342):
proctitle=2F7573722F7362696E2F75736572616464002D64002F7573722F706870002D6D0
02D2D73797374656D002D2D7368656C6C002F62696E2F62617368002D2D736B656C002F6574
632F736B656C002D4700776865656C00706870
type=PATH msg=audit(1582223204.906:342): item=0 name="/etc/passwd"
inode=135568 dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:passwd_file_t:s0 objtype=NORMAL
cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=CWD msg=audit(1582223204.906:342): cwd="/var/mail"
type=SYSCALL msg=audit(1582223204.906:342): arch=c000003e syscall=2
success=yes exit=5 a0=55d79f171ce0 a1=20902 a2=0 a3=8 items=1 ppid=9425
pid=9428 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0
tty=pts1 ses=3 comm="useradd" exe="/usr/sbin/useradd"
subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="auth-files"

Audit logs are just raw text files, but the best way to search them is with the
ausearch command. This is because ausearch converts the Unix epoch style
timestamps in the audit log messages into a human-readable timestamp (you see the
raw epoch timestamp in each log message too
“…msg=audit(<epoch>:<auditID>)…”). Also ausearch shows you all messages
associated with the events you are looking for—like in the example on the slide
where there are multiple audit records associated with a single useradd command.

The “type=PROCTITLE” message contains a hex-encoded copy of the user’s
command line. You can read it out as follows:

$ echo 2F7573722F736269… | xxd -r -p | tr \\000 ' '; echo


/usr/sbin/useradd -d /usr/php -m --system --shell
/bin/bash --skel /etc/skel -G wheel php

Use echo to pipe the hex encoded data into xxd, which will convert it back into
ASCII character data. The various command line arguments are null-delimited, so use
tr to convert the nulls to spaces. The last echo command adds a newline to the end
so you can read the command-line more easily.

90
The “type=PATH” message shows the file the useradd command is changing.
Here it’s the /etc/passwd file, but there’s another similar set of audit messages
showing changes to /etc/shadow and so on.

The “type=CWD” message shows that the user was in the /var/mail directory
when they ran the useradd command (“CWD” is short for “current working
directory”).

The “type=SYSCALL” message shows the program being executed, the user’s
actual user ID that they logged in with (auid=1000– figure out the user name by
looking in the passwd file for UID 1000) and the UID the command ran as (remember
uid=0 is root or admin level privileges). We also see here the “auth-files”
keyword that the admin chose to use for changes to files like /etc/passwd.

ausearch -c <cmd> searches for a particular command name
ausearch -f <file> searches for messages related to a file (e.g. “ausearch -f /etc/passwd”)
ausearch -ua <uid> searches for messages related to a specific user ID
ausearch -k <keyword> lets you search for a specific keyword tag configured by the system admin

If you are looking through audit logs in a non-standard directory path (like a mounted
forensic image), use the “-if” option to specify the file or directory of files you wish
to search rather than the default /var/log/audit.

Here is a quick listing of the more useful “type=…” messages found in audit logs:

• USER_AUTH, USER_LOGIN, USER_START, USER_END, USER_LOGOUT – user
interactive logins (SSH sessions also use CRYPTO_KEY_USER, CRYPTO_SESSION)
• USER_CMD, PROCTITLE, PATH, CWD, SYSCALL – process execution and user activity
• ADD_USER, ADD_GROUP – account admin activity
• AVC – SELinux messages
• TTY, USER_TTY – keystroke logs (if enabled)
• LOGIN, USER_ACCT, USER_START, USER_END, CRED_ACQ, CRED_DISP, CRED_REFR
– related to scheduled task start/stop
• SYSTEM_BOOT, SYSTEM_RUNLEVEL, KERN_MODULE, NETFILTER_CFG
• DAEMON_START, SERVICE_START, CFG_CHANGE – system boot and startup
messages

91
OTHER TOOLS
aureport
Generate summary reports for different event types
Get detailed breakdowns with ausearch -a

aulast
aulastlog
Produce output like last and lastlog using audit logs

In addition to ausearch, there is also the aureport command which produces a
summary report of different sorts of activity. For example we could get a summary of
all “type=SYSCALL” messages with “aureport -s” and then get more detail
with ausearch:

# aureport -s -if /mnt/evidence/var/log/audit

Syscall Report
=======================================
# date time syscall pid comm auid event
=======================================

121. 02/20/2020 13:26:44 9428 1544 useradd 1000 342

# ausearch -if /mnt/evidence/var/log/audit -a 342
----
time->Thu Feb 20 13:26:44 2020
type=PROCTITLE msg=audit(1582223204.906:342):…

aureport shows the audit ID number as the last field of each line item.
“ausearch -a” lets you search by audit ID number.

92
If keystroke logging is enabled, you can dump the keystroke logs with
“aureport --tty“.

aulast and aulastlog are like the last and lastlog commands we covered
earlier. But instead of using the wtmp and lastlog files, aulast and
aulastlog use audit log entries. This may be useful when attackers trash your
wtmp file but forget about the audit logs.

93
LAB – MORE LOG ANALYSIS
Adding more context

Adding web logs increases visibility into our intrusion!

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

94
USER ARTIFACTS

Tracking interactive user behavior is important during an investigation. There are
multiple artifacts that can help.

95
COMMAND HISTORY
$HOME/.bash_history

Simple text file


Can easily be deleted or modified

Commands only written to file when shell exits


History in currently running shells is only in memory
bash_history may not be in chronological order

New file dropped on each shell exit


Search for previous versions in unallocated

The standard Linux command shell is bash and command history from the shell is
saved in the file .bash_history in the user’s home directory. The history file is
just a simple text file and can be easily deleted. More worryingly, I can edit
bash_history with a text editor and my modifications will be preserved even
when the history gets updated with commands from later shells.

Note, however, that when a shell exits a brand new bash_history gets created
and the old file rarely gets overwritten immediately. That means you can find plenty
of older versions of bash_history floating around in unallocated (search for
common command strings like “cd /” or “rm -f”). Use the diff command to
compare the old bash_history you recovered against the current version and
look at what changed.

Commands typed into the current shell are only saved when that shell exits. So the
only way to get the command history from currently running shells is with the
linux_bash plugin in Volatility. Also, commands may end up in bash_history
out of chronological order. If I ran commands three days ago but never ended my
bash session, those commands will go into bash_history after the commands I
ran yesterday in a different shell that was closed down.

More information on bash_history forensics (and anti-forensics) at
http://www.deer-run.com/~hal/DontKnowJack-bash_history.pdf
A video of this presentation is https://www.youtube.com/watch?v=wv1xqOV2RyE

96
NO TIMESTAMPS!
When did a command happen?
Can’t tell from bash_history!

Use other artifacts to pin down when commands occurred:


Log entries
File timestamps
Process start times

The real difficulty is that there are no timestamps by default in bash_history (you
can enable timestamps by setting the HISTTIMEFORMAT environment variable).
Just looking at bash_history, you have no idea when the commands were
executed. This is another huge point in favor of Volatility’s linux_bash plugin
because it always shows timestamps.
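
On systems you administer, enabling history timestamps ahead of time is cheap
insurance. A minimal sketch for a user's ~/.bashrc (the format string is just an
example):

export HISTTIMEFORMAT='%F %T '

With this set, bash writes a “#<epoch seconds>” comment line before each entry in
.bash_history, and the history built-in displays the formatted times.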

However, you can often associate commands in bash_history with other artifacts
on the system. You might see a useradd command in /root/.bash_history.
Audit log entries could tell you when that useradd command ran, as might the last
modified time on /etc/passwd and /etc/shadow. If the user runs commands
via sudo, you have those logs as well. You won’t be able to pinpoint every command
execution in bash_history, but if you can figure out several execution times, you
can use these to “bracket in” chronologically the commands executed in between.

Once you have approximate time information, you can go back to /var/log/wtmp
or your SSH or audit logs and figure out who was logged in at those times and from
what remote IP address. This lets you attribute blocks of commands to particular user
sessions.

97
SSH ARTIFACTS (1)
$HOME/.ssh/authorized_keys INBOUND

Public keys that can be used to log into this system

Good place for attacker back doors

Key “comment” may give clues to source of key

SSH is the standard remote login protocol for Linux and Unix systems, and there are
multiple SSH artifacts of interest.

The authorized_keys file holds public keys used for user authentication. One
possible back-door for attackers is to add their own public key to
authorized_keys—particularly the authorized_keys file for the root
account. This gives them a legitimate login path into the system, in a place some
admins wouldn’t think to look.

Here’s a sample authorized_keys entry:

ssh-rsa AAAAB3NzaC1y…qz3K0KvgmVbQ== hal@caribou

You have the key type (in this case an RSA key), the base64 encoded public key, and a
comment. By default the comment contains the username of the user and the
hostname of the machine where the key was made. In some cases, this could be
useful attribution data. Note, however, the comment text could be easily changed
with a simple text editor.

98
SSH ARTIFACTS (2)
$HOME/.ssh/known_hosts OUTBOUND

Public keys of system user has connected to from this host

Does not necessarily imply successful remote login

The known_hosts file tracks the public keys of systems a user has connected to
from the local machine. Note that this does not necessarily mean a successful login
to these remote systems– just that an SSH session was connected. Use the logs on
the remote system (if available) to determine if the user successfully logged in.

By default the IP address of the remote system is clearly visible in each
known_hosts entry. However, SSH does have a client option HashKnownHosts
which hides the remote IP information using a one-way hash function. There is a
brute-forcing tool that can be used to try and guess the hashed information:
https://github.com/halpomeranz/known_hosts_bruteforcer

99
SSH ARTIFACTS (3)
$HOME/.ssh/config OUTBOUND

Can contain details about how to connect to other systems

$HOME/.ssh/id_* OUTBOUND

Public/private key pairs for connecting to other systems


Correlate with authorized_keys entries from remote systems

A user’s SSH config file can give details about usernames, keys, port numbers, and
other settings necessary to connect to remote systems:

Host jumpbox
Hostname jumpbox.sysiphus.com
ServerAliveInterval 120
Port 443
IdentityFile /home/hal/.ssh/id_rsa-jumpbox

If I typed “ssh jumpbox” on my command line, I would be connecting to a
machine called jumpbox.sysiphus.com via port 443/tcp rather than the default
22/tcp that SSH normally uses. Authentication would try to use the key in the
id_rsa-jumpbox file rather than my default key (typically
$HOME/.ssh/id_rsa).

Note that in addition to the encrypted private key in id_*, you will also find public
keys in id_*.pub files. You should be able to match these public keys to
authorized_keys entries on remote systems. Use the known_hosts entries to
figure out which remote systems the user is connecting to.
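
Comparing key fingerprints makes that matching easier. A sketch, with an illustrative
home directory path on the mounted image:

# ssh-keygen -l -f /mnt/test/data/home/vulnosadmin/.ssh/id_rsa.pub

ssh-keygen -l prints the key's fingerprint, and running the same command against an
authorized_keys file collected from a remote system prints one fingerprint per key,
so identical fingerprints tie the two ends of the connection together.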

100
FILE ACCESS/EDITING
$HOME/.lesshst less is a “one screen at a time”
text viewing application
Search terms
Shell escape commands

vim is a standard
$HOME/.viminfo Linux text editor

Recently accessed files (with position in file)


Command history
Search terms

less is a program like more, showing you one screenful of output at a time (there’s
an obscure joke here that “less is greater than more” because the less program has
more functionality than more). The less program has its own history file that tracks
search terms the user has typed in and commands the user ran via shell escapes from
the less program. However, lesshst does not track which files the user is looking
at.

vim is a standard Linux text editor. The viminfo file contains many useful artifacts,
including recently edited file names along with the last position where the user was
in the file. viminfo contains a history of search terms like lesshst, along with a
history of vim commands the user typed at the “:” prompt.

Linux text editors will also create backup copies of edited files. vim makes files with a
*.swp extension, but other editors use a trailing tilde (“passwd~”). Running diff
on the backup file vs the current version will quickly show changes made between
the two versions.
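
When reviewing these artifacts from an image, simple greps are often enough. A
sketch, assuming root's home directory on the mounted image (the “>” lines in
viminfo are the per-file marks recording recently edited file names):

# grep '^>' /mnt/test/data/root/.viminfo
# cat /mnt/test/data/root/.lesshst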

101
DESKTOP ARTIFACTS
$HOME/.local/share/recently-used.xbel

Timestamped history of files opened with GUI applications

$HOME/.local/share/Trash/files
$HOME/.local/share/Trash/info

Files deleted with GUI are placed in “files”


“info/*” files store original paths of deleted files

If users are using the Linux desktop, then they may be using the Linux file browser,
called Nautilus or Nemo. The recently-used.xbel file is an XML formatted file
that tracks files opened recently through the file browser, including the app used to
open the file.

Files moved to the Trash folder via the GUI end up in …/Trash/files. The
corresponding files under …/Trash/info say where the file originally came from.

102
WEB BROWSER ARTIFACTS
Firefox and Chromium are common browsers

Browser artifact formats don’t change from Windows/Mac

Files under user home directories


Firefox: $HOME/.mozilla/firefox/*.default*
Chrome: $HOME/.config/chromium/Default

Desktop users may be using web browsers– Firefox and Chrome/Chromium are
common on Linux. The good news here is that these web browsers create exactly the
same history and cookie artifacts as the Windows and Mac versions. You could use
any of the popular web browser forensic tools to extract and view this information.

103
LAB – USER (MIS)BEHAVIOR
Users’ sordid histories

User artifacts can be a gold mine…or just an empty hole.

You'll find the exercises as HTML files under /home/lab in your Virtual machine:
1. Launch the Firefox web browser
2. Use Ctrl-O to open a file
3. Navigate to /home/lab/Exercises and open index.html
4. Click on the link to go to the appropriate Exercise

Exercise HTML files are also in the Exercises directory on the course USB. Some
people prefer to open the Exercise in a browser on their host operating system rather
than in the virtual machine.

104
THANK YOU!
Any final questions?
Thanks for listening!

hrpomeranz@gmail.com
@hal_pomeranz

I hope you learned a lot from this material and had some fun along the way.

If you have questions or feedback in the future, please don't hesitate to contact me:

Hal Pomeranz
hrpomeranz@gmail.com
@hal_Pomeranz

Download updates from https://tinyurl.com/HalLinuxForensics

Please support continued development of this material by taking one of my training


classes or donating (US$50 is suggested) via PayPal (paypal.me/halpomeranz) or
Patreon (patreon.com/halpomeranz).

105
