Digital Forensics and Incident Response#

Digital Forensics and Incident Response (DFIR) is the scope of “what happens ~~if~~ when things go wrong”. To put it formally: post-incident response. The two sub-categories, Digital Forensics (DF) and Incident Response (IR), can broadly be described as “finding out what happened” and “having a plan to fix it” respectively.


Digital Forensics#

Digital Forensics is the process of collecting, preserving & investigating digital artifacts in order to find the root causes of incidents. The results can also be used in legal proceedings, so there are formal processes & tools required to prove a case. I’m not a lawyer, and I have no formal law training. In this book I share my own experiences and thoughts, but don’t take them as legal advice under any circumstances. Take my comments as hearsay and we’ll both be happier…

When working on Digital Forensics, there are standard stages to follow when investigating an incident. The below is based on the EC-Council Digital Forensics framework.
In this chapter, we will be focusing on the Windows environment, simply because that’s where my knowledge is. I’ll expand it if I ever get to try out Linux DFIR.

Identification#

First, you must know what you want to retrieve & store. This is more than ‘grab everything and look at it later’. Depending on the type of incident, some artifacts are far more valuable than others, and these need to be prioritised. Complementary to this, the volatility of data is also key in understanding what you need to retrieve first. In general, you want to retrieve the most volatile artifacts first, since they are the first to disappear. For example, you may follow the order of CPU cache, RAM, network capture, then HDD dump. Note, at this stage we are just determining what needs to be captured.
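
To make that ordering concrete, here’s a toy sketch. The artifact names and rankings simply mirror the example order above; nothing here is an official standard:

```python
# Toy illustration: rank artifact sources by volatility (lower = more volatile)
# so that the capture plan grabs the shortest-lived evidence first.
# The names and ranks mirror the example above; they are not an official standard.
VOLATILITY_RANK = {
    "CPU cache": 1,
    "RAM": 2,
    "Network capture": 3,
    "HDD dump": 4,
}

def capture_plan(artifacts):
    """Return the requested artifacts ordered most-volatile-first."""
    # Unknown artifacts sort last rather than crashing the plan.
    return sorted(artifacts, key=lambda a: VOLATILITY_RANK.get(a, 99))

print(capture_plan(["HDD dump", "Network capture", "RAM"]))
# -> ['RAM', 'Network capture', 'HDD dump']
```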

Preservation#

This is where we copy the data using tools that do not damage the underlying artifacts. If the investigation may end up in legal proceedings, there are specific tools that must be used. For example, you may require a ‘write blocker’ when copying files from an HDD. The purpose of this stage is to store data for later investigation WITHOUT modifying anything. We will then work on copies of the data in later stages in order to preserve the original capture. This stage is also called ‘acquisition’ by other frameworks.
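
A habit worth building in at this stage: hash the original capture, then verify every working copy against that hash before you touch it. A minimal sketch using Python’s standard library (the file names are hypothetical):

```python
import hashlib

# Hash a file in chunks so large images don't need to fit in memory.
def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names; compare the working copy against the original capture.
original = sha256_of("evidence_original.dd")
working = sha256_of("evidence_copy.dd")
assert original == working, "Working copy does not match the original capture!"
print(f"Verified copy, SHA-256: {original}")
```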

Some useful tools here are:

KAPE - Kroll Artifact Parser and Extractor. KAPE has GUI and command-line versions, and only operates as a ‘live’ capture tool
Autopsy - Autopsy is a GUI-only tool that can run on a live device or against an image
FTK Imager - FTK Imager is another GUI-only tool that runs in either a live or an image-processing mode

For memory preservation, there are specific tools used for memory extraction on physical devices.

FTK Imager (as above, it can do both)
Redline
DumpIt.exe
win32dd.exe / win64dd.exe (32-bit / 64-bit variants)
fastdump

If running in a virtual environment, you can grab the hypervisor’s memory file directly instead (a small helper sketch follows this list). These files are:

VMware -> .vmem
Hyper-V -> .bin
Parallels -> .mem
VirtualBox -> .sav (this is partial only)
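
If a host has a pile of VM directories, a quick sweep by extension will surface candidate memory files. A rough sketch (the extension map mirrors the list above; the search root is hypothetical):

```python
from pathlib import Path

# Map of hypervisor memory-file extensions, mirroring the list above.
MEMORY_EXTENSIONS = {
    ".vmem": "VMware",
    ".bin": "Hyper-V",
    ".mem": "Parallels",
    ".sav": "VirtualBox (partial only)",
}

# Hypothetical search root; point this at the host's actual VM storage.
for path in Path("D:/VMs").rglob("*"):
    hypervisor = MEMORY_EXTENSIONS.get(path.suffix.lower())
    if hypervisor:
        print(f"{path} -> possible {hypervisor} memory file")
```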

Analysis#

This is where the ‘meat’ of the forensics comes in. Here a copy of the ‘original copy’ is used for investigation of the artifacts. This is where we try to understand what occurred and gather evidence for later documentation. Why a ‘copy of the original copy’? Simply because we don’t work on the original data, in order to preserve its integrity.
As we would expect from a ‘meaty’ topic, there are plenty of tools to assist with this stage. Don’t rely on the tools completely though; they will only present the data that has been found. It is up to the analyst to determine what is relevant, what is unexpected, and what can be ignored. This can only be done efficiently if you also understand how the system works. As such, I’ve included topics on some ‘living off the land’ (LotL) tools, if you want to solidify this understanding.

Living off the Land Tools

Windows Event Viewer
Windows Registry (see the sketch after this list)
Process Explorer (part of the Sysinternals suite)
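
As promised, a registry sketch: Python’s built-in winreg module is enough to poke at common persistence locations, no third-party tooling required (Windows only; the Run key is a classic spot for malware to register itself):

```python
import winreg

# Enumerate the HKLM Run key, a common persistence location, using only
# Python's standard library. Anything unexpected here deserves a closer look.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
    _, value_count, _ = winreg.QueryInfoKey(key)
    for i in range(value_count):
        name, data, _type = winreg.EnumValue(key, i)
        print(f"{name}: {data}")
```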

Memory Analysis

Volatility
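
Volatility 3 is driven from the command line, so the simplest way to script it is to shell out. A hedged sketch (assumes Volatility 3 is installed and on the PATH as vol, and the image path is hypothetical):

```python
import subprocess

# Assumes Volatility 3 is on PATH as `vol` and that memdump.vmem
# (hypothetical path) is the image captured during preservation.
# windows.pslist.PsList lists the processes running at capture time.
result = subprocess.run(
    ["vol", "-f", "memdump.vmem", "windows.pslist.PsList"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```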

Documentation & Presentation#

Once the investigation is concluded, the analysis needs to be recorded and documented. I generally don’t like splitting this out as a separate stage, as I believe documentation should be a part of all the previous steps. Record what is done, when it is done and by whom. Keep as much detail as possible so any steps can be replicated if needed. You’ll need the paperwork to back yourself up.
For the presentation side, this is dependent on the reason for your investigation. I can’t really comment beyond this.
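
A minimal sketch of the kind of timestamped, append-only note-keeping I mean (the file name and entry fields are just my choices, not any formal chain-of-custody standard):

```python
import json
import getpass
from datetime import datetime, timezone

# Append-only action log: each entry records what was done, when,
# and by whom, so any step can be replicated later if needed.
def log_action(action: str, logfile: str = "case_notes.jsonl") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "analyst": getpass.getuser(),
        "action": action,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_action("Hashed evidence_copy.dd and verified it against the original image")
```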


Incident Response#

Prepare#

EDR / SIEM / IDS / IPS / playbooks, etc. Have the tooling and the plans in place before anything goes wrong.

Detect#

Detect and analyze. This is mostly SIEM & EDR territory (IDS / IPS feed into the SIEM).

Web Page Defacing in Splunk#

Click “Data Summary” on the “Home” search page (under “How to Search”). This will give you the hosts, sources & source types
Search the index with the URL of the defaced site:
index=botsv1 [url]
Check the “Selected Fields” -> Sources to see what data we are working with
As we are investigating a page defacement, start with the http source data
src_ip may be useful; a high volume of traffic from one source is odd
See the http_* fields (user agent, etc.). uri and uri_path may help too. These will help narrow down suspicious source addresses
From here, you can pivot to the IDS / IPS / firewall logs
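
For example, a search along these lines puts the noisiest source addresses at the top (hedged: the stream:http source type and src_ip field are assumptions based on typical Splunk stream data, so adjust to whatever the Sources list actually shows):

index=botsv1 [url] sourcetype=stream:http | stats count by src_ip | sort - count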

Respond#

Contain, eradicate, recover.

Learn#

Lessons learned, post-incident activity & improvements.