Welcome to Forensic Mania 2019 – Episode 1. If you’re new to #ForensicMania, catch the full lowdown here.
To recap, we are testing the latest versions of four of the big commercial forensic tools against the MUS2018 CTF image.
Side note: Following my intro post, promises were made by certain Magnet folk (you can run but you can’t Hyde). So I reprocessed with the newly released version of Axiom, v2.10. If said promises aren’t kept, we might need to roll back to version 1.0.9 just for fun.
Today we’ll be running through processing the MUS forensic image with the four tools.
Analysis Workstation Details
For these tests, we will be using a Dell workstation with the following specs:
- Intel Xeon Gold 6136 CPU.
- 128GB RAM.
- Windows 10 Pro.
- OS Drive: 2.5″ AData SSD.
- Case directories and the MUS2018 image file were located on separate Samsung M.2 SSDs.
How does the scoring work?
The scoring for this section kept the adjudication committee deadlocked in meetings for weeks, grappling with the question: “How do you score forensic tools on processing, in a fair manner?” After a few heated arguments, the committee realised that this was not the NIST Computer Forensics Tool Testing Program, but a blog. With that pressure off, they created a very simple scoring metric.
First, to get everyone on the same page, consider the following: Say MasterChef Australia is having a pressure test, where each of the Top 25 need to bake a lemon meringue tart. Best tart wins an immunity pin.
Being the first contestant to separate your egg yolks from the whites is pretty cool, might even get some applause from the gantry. But, the proof will always be in the pudding, which is when you start whisking your whites for the meringue. If you did a messy job during the separation, you ain’t going to see firm glossy peaks forming, no matter how hard you whisk.
This then is typically where Maggie Beer and George come walking over to your bench and drop a comment like “a good meringue is hard to beat”.
You get the point.
The Scoring System
In this round, the tools will be judged in two categories, each with 5 points up for grabs. These two categories are:
1. Processing Progress Indication. We’ll be looking at how well the tool does at providing accurate and useful feedback during processing. “Does it matter?” you may ask… Well, it is the year of our Lord 2019. I can track the Uber Eats guy on my phone until he gets to my door. Similarly, I expect a forensic tool to at least provide some progress indication, other than just “go away, I’m still busy”.
2. Time to Completion. Yes, the big one. Pretty straightforward: how long did it take to complete the processing task?
Points will be awarded in the form of limited edition (and much coveted across the industry) #ForensicMania challenge coins:
Side note: I initially planned on putting a bunch more categories in for adjudicating the processing phase (things like how customizable the processing options are, ease of use, can it make waffles, etc.) but it got a bit too complex and subjective. These tools have fairly different approaches to processing data, so let’s leave the nitpicking for next week when we start analyzing data.
This means there is a total of 10 points up for grabs in Episode 1.
Setting up processing
In order to keep these posts within a reasonably readable length, I’m not going to delve into each granular step that was followed. For each tool, I’ve provided the main points of what was selected for processing, as well as accompanying screenshots.
Axiom
- Full Searches on partitions, Unpartitioned space search on the unpartitioned space of the drive.
- Keyword Search Types: Artifacts. Note: Axiom does not have the functionality to do a full text index of the entire drive’s contents, but only indexes known artifacts.
- Searching of archives and mobile backups.
- Hashing (MD5 and SHA1). Limited to files smaller than 500MB.
- Enabled processing of the default custom file types.
- All computer artifacts were selected
BlackLight
- File Signature Analysis
- Picture Analysis
- Video Analysis
- Hashing (MD5 and SHA1)
- File Carving: All available file types were selected
- Advanced Options: All available options were selected (see screenshots)
EnCase
- File Signature Analysis
- Thumbnail Creation
- Hash Analysis (MD5 & SHA1)
- Expand Compound Files
- Find Email
- Find Internet Artifacts
- Index text and Metadata
- System Info Parser (All artifacts)
- File Carver (All predefined file types, Only in Unallocated and Slack)
- Windows Event Log Parser
- Windows Artifact Parser (Including Search Unallocated)
For FTK, I used their built-in ‘Forensics’ processing profile, but tweaked it a bit.
- Hashing (MD5 & SHA1)
- Expand all available compound file types
- Flag Bad Extensions
- Search Text Index
- Thumbnails for Graphics
- Data Carving (Carving for all available file types)
- Process Internet Browser History for Visualization
- Generate System Information
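All four configurations above include MD5 and SHA1 hashing, with Axiom capping it at files smaller than 500MB. As a rough illustration of what that step boils down to (and not how any of these tools actually implement it), here’s a minimal Python sketch that hashes a file with both algorithms and skips anything over a 500MB cap; `hash_file` and `MAX_SIZE` are my own names, not anything from the tools:

```python
import hashlib
import os

MAX_SIZE = 500 * 1024 * 1024  # 500MB cap, mirroring the Axiom setting above


def hash_file(path, max_size=MAX_SIZE):
    """Return (md5, sha1) hex digests for path, or None if the file is too big."""
    if os.path.getsize(path) > max_size:
        return None  # skipped, like Axiom's size-limited hashing
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        # Read in 1MB chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()
```

The chunked read is the important part: forensic images contain plenty of multi-gigabyte files, and you never want to slurp those into memory just to hash them.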
To give each tool a fair chance, the MUS image was processed twice with each.
Results: Processing Progress Indication.
Here are the results for each tool’s ability to provide the user with adequate feedback regarding what is being processed:
Axiom’s processing window is quite easy to make sense of. It shows which evidence source is currently being processed (partition specific), as well as which ‘search definition’ it’s currently on. During the testing, the percentage progress indicators also seemed to be reliable.
In the category of “Processing Progress Indication”, the adjudication committee scored Axiom: 5 out of 5.
BlackLight also has a great, granular processing feedback window. For each partition, it shows what it’s busy processing, along with progress indicators. These were deemed reliable during the tests.
In the category of “Processing Progress Indication”, the adjudication committee scored BlackLight: 5 out of 5.
EnCase’s processing window seems a bit all over the show: more like something you’d look at for diagnostic info than for processing progress. It was a bit difficult to gauge what it was actually busy with. It does have a progress indicator showing a ‘percentage complete’ value; however, this was quite unreliable. When processing the MUS image, it hit 99% complete quite quickly and then continued processing at 99% for another hour before completing. This happened in both tests. I also processed the same image on a different workstation and got similar results.
In the category of “Processing Progress Indication”, the adjudication committee scored EnCase: 3 out of 5.
FTK’s processing window is quite straightforward. Perhaps too much so. It does have an overall progress bar, although not an entirely accurate one, and shows which evidence item (E01) it’s currently processing. However, because you have no idea what it’s actually busy processing, it remains a waiting game to see how many files it discovers, processes and indexes. And once you think it’s done, you get surprised with a couple of hours of “Database Optimization”.
In the category of “Processing Progress Indication”, the adjudication committee scored FTK: 3 out of 5.
Results: Time To Completion.
These results are pretty straightforward: how long did it take to process the MUS image with the processing settings noted above?
Axiom took 52 minutes and 31 seconds to process the MUS image. Following this, the ‘building connections’ process took another 17 minutes and 25 seconds.
This gave Axiom a total of 1 hour, 9 minutes and 56 seconds.
BlackLight took 1 hour flat to process the image. Following this, the option was available to carve the Pagefile for various file types. This added another 14 minutes and 30 seconds.
This gave BlackLight a total of 1 hour, 14 minutes and 30 seconds.
EnCase took 1 hour, 23 minutes and 25 seconds.
No additional processing required, all jobs were completed in one go.
FTK took 59 minutes and 9 seconds to process and index the image. That’s faster than all the others… But, before you celebrate: following the processing, FTK kicked off a “Database Optimization” process. This took another 2 hours and 17 minutes! Although it’s enabled by default, you can switch off this process in FTK’s database settings. However, according to the FTK Knowledge Base, “Database maintenance is required to prevent poor performance and can provide recovery options in case of failures.” Seems like it’s something you’d rather want to run on your case.
This gave FTK a total of 3 hours, 12 minutes and 9 seconds.
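If you want to double-check the phase arithmetic above, a throwaway Python helper does the job; `add_durations` is my own function, not anything from the tools:

```python
def add_durations(*durations):
    """Sum 'H:MM:SS' duration strings and return the total in the same format."""
    total = 0
    for d in durations:
        h, m, s = (int(x) for x in d.split(":"))
        total += h * 3600 + m * 60 + s
    return f"{total // 3600}:{(total % 3600) // 60:02d}:{total % 60:02d}"


# Axiom: processing + 'building connections'
print(add_durations("0:52:31", "0:17:25"))  # 1:09:56
# BlackLight: processing + Pagefile carving
print(add_durations("1:00:00", "0:14:30"))  # 1:14:30
```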
Let’s dish out some coins:
For winning the time challenge, Axiom gets 5/5
Not too much separated BlackBag and EnCase from Axiom; both get 4/5
And, bringing up the rear, taking almost 3 times as long as the others, FTK with 2/5
Before we look at the totals for this week, here is the result of the poll from last week:
Pretty much in line with what we saw this week…
Here’s your scoreboard after S01E01 of #ForensicMania
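Assuming the scoreboard is simply the two category scores summed per tool, a few lines of Python reproduce it (the dict and variable names are my own):

```python
# (progress indication, time to completion) coins from this episode, out of 5 each
scores = {
    "Axiom":      (5, 5),
    "BlackLight": (5, 4),
    "EnCase":     (3, 4),
    "FTK":        (3, 2),
}

# Total per tool, then rank best-first.
totals = {tool: sum(coins) for tool, coins in scores.items()}
leaderboard = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(leaderboard)  # Axiom 10, BlackLight 9, EnCase 7, FTK 5
```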
Tune in next week to see if Axiom can keep its narrow lead, whether BlackLight knows what to do with a Windows image, and if FTK can pick itself up by its dongles. We’ll start analyzing the MUS image, so stay tuned for all the drama, first and only on The Swanepoel Method.
Side note: It is still early days. Don’t go burning (or buying) any dongles after this post alone. The proof will be in the analysis capabilities of these tools, so check back next week.