Plaso Super Timelines with Docker

Kevin Stokes
Jul 20, 2024


Introduction

Plaso is a powerful utility for digital forensic investigations, allowing the analysis and correlation of timestamped events in a timeline format. Using the Plaso Docker image simplifies setup and deployment, ensures a consistent environment across different systems, and avoids many of the challenges of a manual installation.

In this tutorial, we’ll explore how to set up and utilize the Plaso log2timeline tool within a Docker container on Windows, using PowerShell to craft and execute commands. We’ll not only cover the basics but also delve into some enhanced functionalities that make Plaso a practical tool for forensic investigators.

Prerequisites

Before we get started, make sure you have the following ready:

  • Windows Operating System: Plaso and Docker run smoothly on Windows, which is a common platform for digital forensics work.
  • Docker Desktop: Install Docker Desktop to manage your containers through a user-friendly interface. Docker will serve as our isolated environment for running Plaso.
  • Basic Understanding of Docker and PowerShell: Familiarity with Docker concepts and basic PowerShell commands will help you follow the setup and examples more easily.
  • Sufficient Disk Space and Permissions: Ensure you have enough disk space for Docker images, containers, Plaso output, and the necessary permissions to install and run Docker applications.

Pulling the Plaso log2timeline Docker Image

Step 1 — Open PowerShell: Start by opening your PowerShell interface. You can do this by searching for “PowerShell” in your Windows search bar and selecting “Run as Administrator” for elevated permissions, which is often necessary when working with Docker.

Step 2 — Pull the Docker Image: To download the Plaso log2timeline Docker image, use the following command at your PowerShell prompt. This command pulls the latest version of the Plaso Docker image from Docker Hub, which contains all the necessary components pre-configured for Plaso to run efficiently.

docker pull log2timeline/plaso

Depending on your internet speed, this might take a few minutes. Re-running this command occasionally will keep the Docker image up to date with the latest version.

Step 3 — Verify the Installation: After the image has been downloaded, it’s a good practice to ensure that it was pulled correctly and is working as expected. You can verify this by checking the version of the Plaso log2timeline tool installed in the Docker image. Use the following command to check the version:

docker run --rm log2timeline/plaso log2timeline --version

Here’s what each part of the command does:

  • docker run --rm: This command tells Docker to run a container and remove it once the task is complete. It's a great way to run single commands without leaving unused containers afterward.
  • log2timeline/plaso: Specifies which Docker image to use.
  • log2timeline --version: This instructs the container to execute the log2timeline command with the --version option, which outputs the version of the log2timeline tool.
Log2timeline version output

Plaso Brief

Let’s stop here and take an overview of the Plaso tools. There are several that make up Plaso:

  1. image_export — Tool to export data from images and devices.
  2. log2timeline (L2T) — The parser that will extract the events from your data and place it in a Plaso storage file.
  3. pinfo — Provides info about the Plaso storage file.
  4. psort — Post-processor to handle filter, sorting, and export of data from the Plaso storage file.
  5. psteal — Essentially, a stand-alone tool that runs both log2timeline and psort.
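Since psteal wraps log2timeline and psort into a single pass, a one-shot run can be convenient for quick triage. Below is a sketch of building that invocation from Python; the case paths are assumptions borrowed from the examples later in this post, and the execution line is left commented out so you can review the command first.

```python
# Sketch: building a one-shot psteal run of the Plaso Docker image.
# Paths under D:/Cases/... are assumptions; adjust to your own case layout.
cmd = [
    "docker", "run", "--rm",
    "-v", "D:/Cases/BSidesAmman21/:/data",
    "log2timeline/plaso", "psteal",
    "--source", "/data/Evidence/BSidesAmman21.E01",  # evidence to parse
    "-o", "dynamic",                                 # output format
    "-w", "/data/L2T/BSidesAmman21-timeline.csv",    # timeline output file
]

print(" ".join(cmd))
# To execute: import subprocess; subprocess.run(cmd, check=True)
```

This trades flexibility for convenience: you skip the intermediate .plaso file, but you also lose the chance to re-run psort with different filters later.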

Focus of This Blog

For the purposes of this blog series, our focus will be primarily on using log2timeline for data extraction and psort for data exporting. These tools form the backbone of most investigative workflows using Plaso.


Command line setup

We’ll be using PowerShell to organize and explain commands. These commands can sometimes get long, so multi-line PowerShell examples will be provided along the way for easier reading.

Here is a sample command for log2timeline itself, written in multi-line PowerShell format using the backtick (`) character:

# Sample log2timeline command (PowerShell)
docker run --rm `
-v /data/:/data `
log2timeline/plaso log2timeline `
--storage_file /data/evidence.plaso `
/data/evidence

  • log2timeline: Calls the log2timeline tool.
  • --storage_file /data/evidence.plaso: Specifies the Plaso file name and location for the parsed data.
  • /data/evidence: Specifies the location of the evidence files to extract and parse data from.

Using Plaso

Now that we have gone over examples of how to use the Plaso Docker image, I’ll show how this looks in practice. For this demonstration, I’ve downloaded the BSides Amman 2021 Challenge #5 image from Ali Hadi. If you’re interested, you can find Ali’s datasets here.

Processing with log2timeline

Let’s discuss organization for a moment, to better understand how we will need to map the local folders to the Docker container. Here is the current data structure on my local drive:

D:\CASES
└───BSidesAmman21
    ├───Evidence
    │       BSidesAmman21.E01
    │
    └───L2T
I have the E01 evidence file inside an Evidence folder for the BSidesAmman21 case. I’d like to have any output from log2timeline placed in the L2T folder.

Now, let’s put together the command for log2timeline to parse the BSidesAmman21.E01 file. To organize this correctly, we will map the full path to the host’s BSidesAmman21 folder to the container’s data folder. This will make sure the container has access to anything within the host’s BSidesAmman21 folder.

Here is what that mapping will look like on the command line:

# Command format -v host-path:container-path
docker run -v D:/Cases/BSidesAmman21/:/data

# Host and container structure of the mapping
D:/Cases/BSidesAmman21/ --> /data
D:/Cases/BSidesAmman21/Evidence --> /data/Evidence
D:/Cases/BSidesAmman21/L2T --> /data/L2T

# Full command in PowerShell
docker run --rm `
-v D:/Cases/BSidesAmman21/:/data `
log2timeline/plaso log2timeline `
--storage_file /data/L2T/BSidesAmman21-l2t.plaso `
/data/Evidence/BSidesAmman21.E01

When running, the terminal window shows the command correctly accessing the BSidesAmman21.E01 file.

Log2timeline processing the E01 file
D:\CASES
└───BSidesAmman21
    ├───Evidence
    │       BSidesAmman21.E01
    │
    └───L2T
            BSidesAmman21-l2t.plaso

Now, we can see the new tree command output shows the BSidesAmman21-l2t.plaso file correctly placed in the L2T folder.

Custom log2timeline processing

What we did above was fairly basic processing, which took extra time and parsed data that we may not have been interested in, depending on the use case. Let’s look at some other features of L2T to see what we can customize.

The command line help output for Plaso tools is quite long, though very organized. I’ll let you run these commands yourself, to see what’s available:

# Display log2timeline help
docker run --rm -v /data/:/data log2timeline/plaso log2timeline --help

You may also be interested in the supported plugins and parsers, available with the info argument:

# Display log2timeline plugins and parsers
docker run --rm -v /data/:/data log2timeline/plaso log2timeline --info

From the L2T help list, here are some arguments we’ll look at:

# extraction arguments:
-f FILE_FILTER
List of files to include for targeted collection of files
to parse. Plaso comes with a few of these; we will use
filter_windows.yaml

--hashers HASHER_LIST
Define a list of hashers to use, "md5,sha256,all,none"

-z TIME_ZONE
preferred time zone of extracted date and time values that
are stored without a time zone indicator. The time zone is
determined based on the source data where possible otherwise
it will default to UTC. Use "list" to see a list of available
time zones.

--vss_stores VSS_STORES
Define Volume Shadow Snapshot (VSS) stores to process,
e.g. "all" or "none".

# processing arguments:
--workers WORKERS
Number of worker processes. The default is the number of
available system CPUs minus one, for the main process.

# storage arguments:
--storage_file PATH
The path of the storage file. If not specified, one
will be made in the form <timestamp>-<source>.plaso

Let’s now expand on the prior PowerShell for some more custom adjustments to our L2T processing.

# Custom L2T PowerShell command
docker run --rm -v D:/Cases/BSidesAmman21/:/data `
log2timeline/plaso log2timeline `
-f filter_windows.yaml `
--hashers none `
-z UTC `
--vss_stores none `
--workers 10 `
--storage_file /data/L2T/BSidesAmman21-l2t-custom.plaso `
/data/Evidence/BSidesAmman21.E01

In the terminal output, we can see the filter_windows.yaml filter file and 10 workers (00–09) in use. Additionally, this processing run took 3 minutes and 32 seconds. Comparatively, the full basic processing we first ran took well over an hour.

Pinfo

Running the pinfo command will provide us the stats about the completed processing and the Plaso storage file. Here is a sample:

# Pinfo command (PowerShell)
docker run --rm `
-v D:/Cases/BSidesAmman21/:/data `
log2timeline/plaso pinfo `
/data/L2T/BSidesAmman21-l2t-custom.plaso

# Sample Output
******************** Events generated per parser ********************
Parser (plugin) name : Number of events
--------------------------------------------------------------
amcache : 889
appcompatcache : 370
bagmru : 45
bam : 37
explorer_mountpoints2 : 13
explorer_programscache : 2
filestat : 1828
lnk : 400
mrulist_string : 8
mrulistex_shell_item_list : 2
mrulistex_string : 10
mrulistex_string_and_shell_item : 11
mrulistex_string_and_shell_item_list : 1
msie_webcache : 649
msie_zone : 48
network_drives : 3
networks : 2
olecf_automatic_destinations : 174
olecf_default : 10
prefetch : 1008
setupapi : 92
shell_items : 345
userassist : 70
usnjrnl : 179061
windows_boot_execute : 2
windows_run : 8
windows_sam_users : 17
windows_services : 591
windows_shutdown : 2
windows_task_cache : 500
windows_timezone : 1
windows_typed_urls : 6
windows_usb_devices : 4
windows_version : 3
winevtx : 64672
winlogon : 4
winreg_default : 288375
Total : 539263
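The per-parser table is also easy to consume programmatically, which is handy when comparing runs. As a quick sanity check, the sketch below parses a trimmed excerpt of the output above and sums the event counts; the parsing assumes pinfo's simple "name : count" layout shown here.

```python
# Parse pinfo's "events generated per parser" table and sum the counts.
# The excerpt below is trimmed from the output above.
pinfo_excerpt = """\
amcache : 889
appcompatcache : 370
prefetch : 1008
winevtx : 64672
winreg_default : 288375
"""

counts = {}
for line in pinfo_excerpt.splitlines():
    name, _, value = line.partition(":")  # split "parser : count" rows
    counts[name.strip()] = int(value)

print(sum(counts.values()))  # → 355314
```

Running the same parse over two pinfo outputs makes it easy to diff which parsers fired between a basic and a filtered run.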

Post-processing with psort

After generating a Plaso storage file with log2timeline, the next step is to use psort to refine and extract useful timeline data into a format that can be easily read and analyzed. Here’s how you can perform these tasks using PowerShell.

The basic command for psort involves specifying the Plaso storage file and the desired output format. Here's that basic command in PowerShell:

# Sample psort command (PowerShell)
docker run --rm -v /data/:/data `
log2timeline/plaso psort `
-o dynamic -w /data/timeline.csv `
/data/evidence.plaso

  • psort: Calls the psort tool.
  • -o dynamic: Specifies the output format (dynamic is a flexible format for CSV output). Check what is available with -o list.
  • -w /data/timeline.csv: Writes the output to timeline.csv in the mounted /data directory.
  • /data/evidence.plaso: The Plaso storage file from the log2timeline output to process.
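The dynamic output is plain CSV, so it can be loaded with any CSV reader for quick review. The sketch below uses Python's csv module against a tiny two-row stand-in for timeline.csv; the field names (datetime, source, message) are assumptions here, so check the header row of your own output, which varies with psort options.

```python
# Sketch: reading a psort "dynamic" CSV timeline with Python's csv module.
import csv
import io

# Two-row stand-in for /data/timeline.csv; field names are assumptions.
sample = io.StringIO(
    "datetime,source,message\n"
    "2019-02-15T11:45:00+00:00,REG,Run key updated\n"
    "2019-02-15T12:05:00+00:00,EVT,Logon event\n"
)

rows = list(csv.DictReader(sample))  # one dict per event row

for row in rows:
    print(row["datetime"], row["message"])
```

For a real case, replace the stand-in with `open("timeline.csv", newline="")` and the same loop applies.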

Custom post-processing with psort

Now that we’ve learned some of the basics, let’s take a look at additional options for post-processing. The psort help command will list all the command line arguments available:

# Display psort help
docker run --rm -v /data/:/data log2timeline/plaso psort --help

From the psort help list, here are some arguments we’ll look at:

# Filter Arguments:
--slice DATE_TIME
Date and time to create a time slice around. Controlled by
the --slice_size option. The date and time must be specified
in ISO 8601 format including time zone offset,
for example: 20200619T20:09:23+02:00.
--slice_size SLICE_SIZE
Defines the slice(r) size, in minutes (or events).
The default value is 5.

# Output Arguments:
--additional_fields ADDITIONAL_FIELDS
Defines additional fields to be included in the output besides
the default fields. Output formats that support additional
fields are: dynamic, opensearch and xlsx.

--output_time_zone TIME_ZONE
time zone of date and time values written to the output.
Use "list" to see a list of available time zones. For
output formats: dynamic and l2t_csv.

# Output Format Arguments:
-o FORMAT
The output format. Use "-o list" to see available formats.
-w OUTPUT_FILE
Output filename.
--fields FIELDS
Defines which fields should be included in the output.

Let’s now expand on the prior PowerShell for some more custom adjustments to our psort processing.

# Custom psort PowerShell command
docker run --rm `
-v D:/Cases/BSidesAmman21/:/data `
log2timeline/plaso psort `
--slice 2019-02-15T12:00:00+00:00 `
--slice_size 720 `
--output_time_zone UTC `
-o json_line `
-w /data/L2T/BSidesAmman21-l2t.jsonl `
/data/L2T/BSidesAmman21-l2t-custom.plaso

The slice argument here will filter for data around Feb 15, 2019 at 12:00 UTC, using the ISO 8601 value 2019-02-15T12:00:00+00:00. With slice_size set to 720 minutes, psort pulls data from 12 hours before and after the slice time. The output format chosen here was json_line.
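The same ± window logic is easy to reproduce when reviewing the .jsonl output. The sketch below filters stand-in json_line events around the slice time; the "datetime" field name is an assumption, so inspect one line of your own output to confirm which timestamp field it carries.

```python
# Sketch: reproducing the --slice / --slice_size window over json_line output.
import json
from datetime import datetime, timedelta, timezone

# Stand-in for a few lines of BSidesAmman21-l2t.jsonl; field names assumed.
jsonl_lines = [
    '{"datetime": "2019-02-14T23:00:00+00:00", "message": "too early"}',
    '{"datetime": "2019-02-15T13:10:00+00:00", "message": "in window"}',
    '{"datetime": "2019-02-16T02:00:00+00:00", "message": "too late"}',
]

slice_time = datetime(2019, 2, 15, 12, 0, tzinfo=timezone.utc)
window = timedelta(minutes=720)  # 12 hours either side, like --slice_size 720

hits = [
    event for event in map(json.loads, jsonl_lines)
    if abs(datetime.fromisoformat(event["datetime"]) - slice_time) <= window
]
print([e["message"] for e in hits])  # → ['in window']
```

This kind of spot check confirms the slice captured the window you intended before you commit to deeper analysis.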

psort processing

Showing our final organization, we now have the BSidesAmman21-l2t.jsonl file in our L2T output folder.

D:\CASES
└───BSidesAmman21
    ├───Evidence
    │       BSidesAmman21.E01
    │
    └───L2T
            BSidesAmman21-l2t-custom.plaso
            BSidesAmman21-l2t.jsonl

We should now verify our output and, if needed, make some adjustments to our command, such as a different slice date for any other dates of interest.

Conclusion

Using the Plaso log2timeline Docker image streamlines the setup process and provides a consistent environment for forensic analysis. By following the steps outlined in this blog post, you can quickly get started with Plaso and begin analyzing your data effectively.

Thanks for reading! If you found this guide helpful, please connect with me on LinkedIn.
