Just over a year ago (Feb 2020) I started running weekly internal training CTFs @work.
These were aimed at the various levels of analysts in the SOC as well as the folks in Incident Response. It ultimately allowed us to test and train analysts in a question-answer style CTF, validating understanding of the tools and systems used in everyday work. One of the great things about it for me was that we were using actual data and tools from our own environment. I could see how analysts were answering questions, which for me is a great way to identify gaps in either technical knowledge or (mis)understanding of tool output.
Since then, I’ve long wanted to launch something similar in the public domain: a CTF aimed at SOC and DFIR (Digital Forensics and Incident Response) analysts. But just generating a decent amount of data on which you can build a public CTF is a fair amount of work. Since the start of this year I kept coming back to the idea of running a public training CTF, and I’ve now put together an MVP (Minimum Viable Product).
So say hallo to SocVel:
The name SocVel is derived from the well known South African term Stokvel. But more on that at a later time… MVP, right?
What is the aim of all this?
For those new to the field
Most infosec vendors will have some training available to help you understand how to interpret what is on the screen when using their tools. Whether that is an AV solution, EDR, SIEM, SOAR or SNAFU. (The last one is not a real infosec term, although in this day and age, that could be deemed an acceptable way to refer to the industry.)
But, one of the main gaps I often see is the ability to link all the bits of information together. Some analysts may get overwhelmed by the noise in their environment, and struggle to identify the golden needles in a stack of more needles.
For me, it often comes down to asking the right questions about the situation in front of you, and being able to devise plans to answer those.
In addition, you need to be able to formulate these answers you’ve found during an incident to tell the story of what happened. Whether that story needs to be communicated to a colleague, a level up in the SOC, or an overworked CISO who really just wants to know if this is the big incident that finally pushes them over the edge.
For the veterans
If you are a veteran SOC or DFIR analyst, this is a great way to test your abilities as well as your tooling. Challenge yourself by not having the data in the way you are used to getting it from your EDR, SIEM or Triage Scripts.
What makes this different from most DFIR ‘conference’ CTFs?
There is no time pressure. Each SocVel CTF should remain open for a month or so, depending on the number of participants or general interest.
Oftentimes the time zones in which CTFs are presented aren’t ideal. Yeah, I know they can’t cater for the entire globe, but doing a CTF between 01:00 and 07:00 local time on a Saturday morning is not my idea of fun.
Even if the CTF is in a respectable timeslot, the line of work most DFIR or SOC analysts find themselves in doesn’t always guarantee they’ll have the consecutive hours available to complete it.
Barrier To Entry
Sometimes CTFs are just plain whack in their asking (especially general hacking ones). Allow me to quote a post from hatsoffsecurity.com, referring to people who create CTFs:
“The challenge should be hard because the subject is hard, not because you’re being a d***”
My target market with SocVel is both experienced DFIR veterans and entry-level analysts. To that end, most questions in a SocVel CTF will have an unlockable hint available. This should be helpful enough for you to work out how to get to the answer.
You’re not going to learn anything if you get stuck at a point, and there is nothing or no one there to guide you in understanding what needs to be done.
Again, my aim for SocVel is to be a training CTF.
In an online conference CTF which took place last year, there were no limits on the number of incorrect answers you could submit. These were the stats for the winner:
Correct Submissions: 22 (5.49%)
Wrong Submissions: 379 (94.51%)
As a strategy for winning CTFs, that will probably get you there. If the question is “Which browser was used by the attacker?”, you just start submitting browser names until you get it right. However, I don’t want someone working on incidents with a mere 5.49% success rate.
To combat this, SocVel will deduct points for each incorrect submission. You can still try and try again until you get it right, but it will cost you.
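To see why that win rate matters, the numbers above work out as follows. The penalty scheme sketched here is hypothetical (SocVel’s actual deduction values aren’t stated in this post):

```python
# Winner's stats from the conference CTF quoted above
correct, wrong = 22, 379
success_rate = correct / (correct + wrong) * 100
print(f"{success_rate:.2f}%")  # 5.49%

# Hypothetical SocVel-style scoring: each wrong submission costs points,
# floored at zero. The penalty value here is made up for illustration.
def score(base_points, wrong_attempts, penalty_per_wrong):
    return max(0, base_points - wrong_attempts * penalty_per_wrong)

print(score(10, 0, 2))  # clean solve: 10
print(score(10, 3, 2))  # three wrong guesses first: 4
```

Guess-spamming “which browser” answers quickly eats the question’s value under a scheme like this, which is the point.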
And with that, the first investigation (Pooptoria) is live:
The notorious threat actor Fancy Poodle has done it again! This time striking at Strikdaspoort Wastewater Treatment Plant in Pretoria, South Africa…
Do you have what it takes to solve the investigation while only using limited triage data? All before the license-dongle-wielding forensic analysts have checked their write blockers out of storage?
Well, hello and welcome to the second episode for Season 1 of #ForensicMania.
Today we are looking at answering the ‘Misc’ section of questions from the 2018 MUS CTF, putting our four tools head-to-head with some analysis work.
Why are we doing this? To give you, the reader, a view on how the different commercial tools compare when doing digital forensic analysis.
To recap, in Episode 1 – Processing we processed our evidence file with the four tools, after which the scoring looked as follows: Axiom took a narrow lead with 10 coins, while Blacklight was chomping at its heels with 9. In third place was EnCase with 7 coins and bringing up the rear of the pack was FTK with 5.
“How will the scoring work this round” I hear the masses scream from the districts. For the ‘Misc’ section, we have 2 coins up for grabs for each question, that is, if the tool gets to the correct answer with an acceptable amount of effort, 2 coins are awarded. However, if the tool hides the answer under a rock, but you can still get to it, or if the answer is only halfway there, only 1 coin will be awarded. Finally, 0 coins for wrong answers.
This means we have a total of 22 coins up for grabs in this round.
So… will Axiom keep its narrow lead or trip over its connections? Does Blacklight know how to spell Shimcache? Can EnCase parse web histories? And FTK, will it fly, or just try to sell me Quin-C instead?
So many questions, so little time. Let’s dig in!
Timezone: What is the system’s timezone set to?
Correct Answer: Mountain Standard Time
Axiom parses this key as an Operating System artifact:
Under Blacklight’s “System” section, open the registry sections and it shows you the “TimeZoneInformation” registry key:
Following processing, EnCase has a ‘Case Analyzer’ option which provides various reports about artifacts identified. One of these shows the Time Zone:
In FTK, navigate to the System hive in the folder structure, right-click to open it with Registry Viewer, hit the ‘Common Areas’ toggle, and Bob’s your uncle:
However, this is an obvious artifact that could be shown more easily to the investigator in the ‘System Information’ tab, than having to open it with Registry Viewer first.
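All four tools are ultimately reading the same data: the `TimeZoneInformation` key in the SYSTEM hive (under the active ControlSet). As a minimal sketch, assuming the raw bytes of the `TimeZoneKeyName` value have already been extracted from the hive, a REG_SZ value is just null-terminated UTF-16LE:

```python
# Simulated raw REG_SZ bytes for TimeZoneKeyName, as found under
# SYSTEM\ControlSet001\Control\TimeZoneInformation (bytes faked here for illustration)
raw = "Mountain Standard Time".encode("utf-16-le") + b"\x00\x00"

# REG_SZ values decode as null-terminated UTF-16LE strings
tz_name = raw.decode("utf-16-le").rstrip("\x00")
print(tz_name)  # Mountain Standard Time
```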
File Sequence Number: What is the MFT file sequence number for the Python27\python.exe file? [This is not the MFT entry number]
Correct Answer: 1
Axiom does not parse MFT file sequence numbers.
Blacklight shows the correct value in the “Data Structure” view for the file:
EnCase doesn’t parse the $MFT. However, if you’ve attended EnCase training at some stage, you’d probably have received an EnScript (“NTFS Single MFT Record & Attributes”) that will do this for you. Unfortunately, as this isn’t included as stock with EnCase, it doesn’t exist for most users (and it’s also not available in the Guidance App store).
FTK doesn’t parse MFT file sequence numbers.
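For tools that don’t surface it, the sequence number is a single read: it is the 16-bit little-endian value at offset 0x10 of the file’s MFT record header. A sketch against a synthetic record (real records come straight out of the $MFT):

```python
import struct

# Synthetic 1024-byte MFT record: "FILE" signature, sequence number at offset 0x10
record = bytearray(1024)
record[0:4] = b"FILE"
struct.pack_into("<H", record, 0x10, 1)  # sequence number = 1

assert record[0:4] == b"FILE"  # sanity-check the record signature first
seq = struct.unpack_from("<H", record, 0x10)[0]
print(seq)  # 1
```

The sequence number increments each time the record slot is reused, which is why it’s worth checking alongside the entry number.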
FileName Lookup: What is the name of the file that has MFT entry of 86280?
Correct Answer: $USNJrnl.
In Axiom’s ‘File System’ view, you can filter on ‘MFT record number’ to get to the desired file:
Blacklight allows you to filter all files based on “File System ID”, which is the MFT Record Number:
EnCase shows the ‘MFT record number’ in the columns under the label ‘File Identifier’. So just show all files, and sort according to the ‘File Identifier’ to get to the answer:
In FTK, you can get to this quite easily by listing all entries and sorting by MFT Record Number in the columns.
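The lookup the tools offer (record number to file) leans on the same structure: on NTFS 3.1 and later, a record stores its own MFT record number as a 32-bit little-endian value at offset 0x2C of the record header. A synthetic sketch:

```python
import struct

# Synthetic MFT record carrying its own record number at offset 0x2C (NTFS 3.1+)
record = bytearray(1024)
record[0:4] = b"FILE"
struct.pack_into("<I", record, 0x2C, 86280)

record_number = struct.unpack_from("<I", record, 0x2C)[0]
print(record_number)  # 86280
```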
FileTimestamp: What is the Standard Information Attribute’s Access timestamp of the Windows\Prefetch\CMD.EXE-89305D47.pf file? [UTC in YYYY-MM-DD hh:mm:ss format]
Correct Answer: 2018-04-26 15:48:40
The Access timestamp from the Standard Information Attribute is what is displayed by our tools. Check out more info about Standard Information Attributes here:
Axiom shows it nicely in the ‘File System Information’ artifact:
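Those Standard Information timestamps are stored on disk as Windows FILETIME values: 100-nanosecond intervals since 1601-01-01 UTC. Converting one by hand is straightforward:

```python
from datetime import datetime, timedelta, timezone

def filetime_to_datetime(ft):
    # FILETIME counts 100-ns intervals since 1601-01-01 00:00:00 UTC
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

# Well-known sanity check: this FILETIME value is the Unix epoch
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```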
So… Blacklight shows the Volume Serial Number for a specific volume in the “Details” section under “Disk View”. However, it shows the value in Big Endian (which you can then convert to Little Endian with another tool):
So, only halfway there.
When the volume is selected in EnCase’s Tree view, it shows the volume serial for you:
In FTK, head over to the file structure, navigate to the OS volume, and click on Properties:
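On the byte-order point: converting Blacklight’s big-endian display by hand is just a four-byte swap, no extra tool needed. The serial below is made up for illustration:

```python
# Hypothetical volume serial as displayed big-endian by the tool
serial_be = 0x1234ABCD

# Reinterpret the same four bytes in little-endian order
serial_le = int.from_bytes(serial_be.to_bytes(4, "big"), "little")
print(f"{serial_le:08X}")  # CDAB3412
```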
YouTube Search: What term was searched in YouTube on 3/28/2018?
Correct Answer: “simpsons max power”.
Looking at ‘Web Related’ artifacts and applying a date filter for March 28th 2018 gets you the answer:
Hop on over to the “Internet” tab, and you’ll get the answer:
EnCase seems to be the tool that you hope the opposing party used when reviewing your client’s web histories… Cause there’s no way a sane person will enjoy using this for analysing internet artifacts.
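Under the hood this is a SQLite query: Chrome’s history lives in the `History` database in the profile directory, with visit times stored as microseconds since 1601-01-01 UTC. A sketch of the date-filter lookup against an in-memory stand-in (the `urls` table here is a simplified version of Chrome’s real schema):

```python
import sqlite3
from datetime import datetime, timezone

WEBKIT_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def to_webkit(dt):
    # Chrome stores last_visit_time as microseconds since 1601-01-01 UTC
    return int((dt - WEBKIT_EPOCH).total_seconds() * 1_000_000)

con = sqlite3.connect(":memory:")  # stand-in for the Chrome 'History' file
con.execute("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT, title TEXT, last_visit_time INTEGER)")
con.execute(
    "INSERT INTO urls VALUES (1, ?, ?, ?)",
    ("https://www.youtube.com/results?search_query=simpsons+max+power",
     "simpsons max power - YouTube",
     to_webkit(datetime(2018, 3, 28, 12, 0, tzinfo=timezone.utc))),
)

# Date filter for 2018-03-28, the same filter applied in the tools above
start = to_webkit(datetime(2018, 3, 28, tzinfo=timezone.utc))
end = to_webkit(datetime(2018, 3, 29, tzinfo=timezone.utc))
hits = con.execute(
    "SELECT url FROM urls WHERE last_visit_time BETWEEN ? AND ?", (start, end)
).fetchall()
print(hits[0][0])
```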
There are two ways to go about this:
1. Find the “ConsoleHost_history.txt” file, which contains the PowerShell command history, and search in the file for “SRUDB.dat”.
2. Search the entire case for “SRUDB.dat”, which will lead you to the “ConsoleHost_history.txt” file.
For this question we’ll go with door number 2, as I didn’t (and don’t) necessarily know this path or filename off by heart.
Searching for “SRUDB.dat” in Axiom shows the “ConsoleHost_history.txt” log listed as a Document artifact:
Blacklight does not have an index search function, only live searches. I ran a live search for ‘srudb.dat’, which took a few minutes to get to the PowerShell log with the ifind command in it.
In EnCase, searching for “srudb.dat” in the indexed search provided a hit for the “ConsoleHost_history.txt” file, showing the ifind command.
Search for “SRUDB.dat” in FTK’s index search, which will get you a bunch of hits, one of which is the “ConsoleHost_history.txt” file showing the command used.
Administrator Logon Count: How many times did Administrator logon to the system?
Correct Answer: 14.
The ‘User Accounts’ artifact shows this for the Administrator account:
Blacklight’s ‘Actionable Intel’ section gives you the Logon Count for each local user account:
EnCase’s ‘System Info Parser’ artifact does provide info about the local user accounts; however, there’s nothing about logon count:
You can view the SAM hive’s structure from within EnCase, but again, they want you to work for it. In order to get this value in EnCase, you need to go to offset 66-67 of the F value of the user’s subkey:
This then translates to the integer value of 14.
Again, a simple artifact that should be shown to the user in a much simpler way. I’m giving EnCase a 0 for this one, as having to highlight offsets of the F value is just not ideal.
FTK does have a ‘SAM Users’ section in its ‘System Information’ tab, but this only shows you SIDs and User Names. So, find the SAM hive in the tree structure. This will then show the content in a readable way in the ‘Natural’ view pane, without having to open it with Registry Viewer:
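The manual route described for EnCase above boils down to a single struct read: the logon count is the little-endian 16-bit value at offset 66 of the account’s F value in the SAM. A sketch against synthetic bytes (a real F value would be pulled from the hive):

```python
import struct

# Synthetic 80-byte F value; the logon count lives at offset 66 as a
# little-endian uint16 (value 14 planted here for illustration)
f_value = bytes(66) + struct.pack("<H", 14) + bytes(12)

logon_count = struct.unpack_from("<H", f_value, 66)[0]
print(logon_count)  # 14
```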
Install Q: What day was the Go programming language installed on? [Answer format: YYYY-MM-DD]
Correct Answer: 2018-04-11
This is recorded in the ‘Installed Programs’ artifact
Blacklight did not list the Go programming language under its Application artifact:
You could, however, find it under the “Uninstall” registry key with the built-in registry viewer:
EnCase lists installed software under: Artifacts > System Info Parser > Software.
However, Go was not listed by EnCase:
By manually traversing the SOFTWARE hive in EnCase, I got to the Uninstall key for Atom (based on what the other tools showed), but for the life of me I couldn’t figure out how to get actual data to be shown in EnCase for this key:
The System Information tab shows this quite easily:
Who Installed Atom?: Which user installed Atom? [Answer is the complete SID not the username]
For this question, I’m looking for proof that the Atom installer, AtomSetup-x64.exe, was downloaded (Chrome Web History) and that the file was executed by the user (Windows OS artefact).
After searching for “Atom” in Axiom, you can get to the install file “AtomSetup-x64.exe”. In the connections view, it shows the installer being downloaded by the ‘maxpowers’ account in Chrome and then executed by the same account via the Shimcache:
In addition to the above, there is also a SRUM Application Resource Usage entry linking the installer to the profile.
To get the SID for the profile, head over to the ‘User Accounts’ tab which shows the SID for ‘maxpowers’:
Blacklight recorded the installer being downloaded by the profile ‘maxpowers’ in Chrome:
You can then link the SID to the profile via the registry viewer:
However, there was no artifact recording AtomSetup-x64.exe being executed.
In EnCase, I could not get to the download of AtomSetup-x64.exe in its Chrome histories, nor any artefacts showing the execution of AtomSetup-x64.exe by ‘maxpowers’.
The ‘Internet/Chat’ tab in FTK shows the ‘maxpowers’ profile downloading the setup file:
However, FTK did not have any artefacts showing the file was executed by the user profile.
The ‘Sam Users’ section then shows you SIDs mapped to usernames.
Deletion in LogFile: The $LogFile shows at LogFile Sequence Number [LSN] 4433927454 a file is deleted. What is the name of the file that was deleted?
Correct Answer: 7z.dll
Axiom parses the $LogFile entries, so you can search for 4433927454, which will take you to the 7z.dll entry in the ‘$LogFile Analysis’ artifact
Blacklight did ‘parse’ the $LogFile, but not properly:
EnCase also ‘parsed’ the $LogFile, but doesn’t show LSN numbers:
FTK doesn’t parse the $LogFile.
And that’s it!
After a gruelling round, let’s have a look at the scoreboard for Episode 2:
Well, there you have it: Congratulations to Axiom for taking pole position once again. Taking second is BlackLight, with FTK following close behind in third.
[Update 2019-03-10] I’ve added the version numbers of Axiom, Encase and FTK used. Also added details about EnCase Firefox support update coming in next release.
So, last night, after watching the Forensic Dinner (yeah yeah it’s the Forensic Lunch, but hello time zones) I was busy with some testing for #ForensicMania.
Dealing with a simple question ‘What was searched for in Youtube on xx date’, I hit a bit of a speed bump in EnCase. In short, I couldn’t get to the answer in EnCase for YouTube web histories viewed in Firefox. It was late, so I wasn’t sure whether I or EnCase was to blame. With this, I stopped with the #ForensicMania stuff and thought: let’s do some targeted testing.
The next morning (today), I decided to do a quick and simple test:
Conduct a few searches in Chrome and Firefox
Parse the web histories with Axiom, EnCase and FTK
Compare the results
I fired up Chrome and Firefox, and made sure they were up to date:
With last night’s Forensic Lunch still fresh in my mind, I Googled the following between 11:00 and 12:00 on 2019-03-09.
The same searches were done with Chrome first, and then with Firefox.
Google search: “Is lee whitfield brittish?” Result opened: “https://www.sans.org/instructors/lee-whitfield”
Google search: “How do you spell british?” Result opened: “https://en.oxforddictionaries.com/spelling/british-and-spelling”
Google search: “Where did Matt get the cool blue sunglasses?” Result opened: https://www.menshealth.com/style/a26133544/matthew-mcconaughey-blue-colored-sunglasses/
Google search: “Why is no one having lunch on the Forensic Lunch?” Result opened: https://www.youtube.com/user/LearnForensics/videos
Youtube search: “drummer at the wrong gig” Video played: https://www.youtube.com/watch?v=ItZyaOlrb7E
And then played this one from the Up Next bar: https://www.youtube.com/watch?v=RvatDKpc0SU
Google search: “Can you nominate yourself in the Forensic 4Cast awards?” Result opened: https://www.magnetforensics.com/blog/nominate-magnet-forensics-years-forensic-4cast-awards/
Following this, I created a logical image of the Chrome and Firefox histories on my laptop with EnCase. The total size for the histories was 3GB. (Yes, lots of historic stuff included there as well.)
So the testing is pretty straightforward: can I get to the above-listed searches and web histories in Axiom, FTK and EnCase? Let’s see:
Parsing the logical image in Axiom gave us the following for ‘Web related’ artifacts:
Result: Great Success
FTK: same thing. Processed the image and got the following from the ‘Internet’ tab:
Again: Great Success
Now, let’s fire up the ‘2019 SC Magazine Winner‘ for ‘Best Computer Forensic Solution‘…
After processing the image with EnCase, we hobble on over to the ‘Artifact’ tab and open the ‘Internet Records’ section.
First up, Chrome histories:
Great, it works as expected.
Next up, Firefox (The browser with 840,689,200 active users in the past 365 days)
And this is where we ran into trouble: EnCase was able to parse Firefox Cookies and some cache files, but for the life of me I couldn’t get to any actual browsing histories.
I suspect that, as it’s shown on the processing window, EnCase only supports Firefox up until v51.0.0. The current Firefox version is v65.
Firefox version 51.0.0 was released to channel users on January 24th 2017. That is the same month when Ed Sheeran released his single “Shape of You”. (And now you can’t unsee the singing dentist guy covering the song)
What I’m trying to say is that Firefox v51 is old.
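The frustrating part is that Firefox history isn’t exotic: it’s `places.sqlite` in the profile directory, with `last_visit_date` stored as microseconds since the Unix epoch. A sketch against an in-memory stand-in (the `moz_places` table here is a simplified version of Firefox’s real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for places.sqlite
# Simplified moz_places; Firefox stores last_visit_date as microseconds
# since the Unix epoch
con.execute("CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url TEXT, title TEXT, last_visit_date INTEGER)")
con.execute(
    "INSERT INTO moz_places VALUES (1, ?, ?, ?)",
    ("https://www.youtube.com/watch?v=ItZyaOlrb7E",
     "drummer at the wrong gig",
     1552132800000000),  # 2019-03-09 12:00:00 UTC
)

rows = con.execute(
    "SELECT url, datetime(last_visit_date / 1000000, 'unixepoch') FROM moz_places"
).fetchall()
print(rows[0])
```

So even if a suite can’t parse a current Firefox profile, the raw database is a few lines of SQL away.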
I’ve logged a query with OpenText about this and will update this post if and when I get feedback. (Really hoping this is something I’m doing wrong, but we’ll see.)
[Update 2019-03-10: EnCase v8.09, set for release in April, is said to have updated Firefox support]
What’s the point of this post?
Test stuff. If something doesn’t look right, test it.
You don’t need test images to test your tools. If you have a laptop or a mobile phone, then you have test data.
Don’t assume stuff. If my results above are correct, there’s a good chance you could have missed crucial Firefox data if you were only relying on EnCase.
If I’m wrong, then at least I’ll hopefully know pretty soon how to get EnCase to parse Firefox histories correctly… and someone else might learn something too.
Welcome to Forensic Mania 2019 – Episode 1. If you’re new to #ForensicMania, catch the full lowdown here.
To recap, we are testing the latest versions of four of the big commercial forensic tools against the MUS2018 CTF image.
Side note_Following my intro post, promises were made by certain Magnet folk (you can run but you can’t Hyde). So I reprocessed with the newly released version of Axiom, v2.10. If said promises aren’t kept, we might need to roll back to version 1.0.9 just for fun.
Today we’ll be running through processing the MUS forensic image with the four tools.
Analysis Workstation Details
For these tests, we will be using a Dell workstation, with the following specs:
Intel Xeon Gold 6136 CPU.
Windows 10 Pro.
OS Drive: 2.5″ AData SSD.
Case directories and the MUS2018 image file were located on separate Samsung M.2 SSDs.
How does the scoring work
The scoring for this section kept the adjudication committee deadlocked in meetings for weeks, grappling with the question: “How do you score forensic tools on processing, in a fair manner?” After a few heated arguments, the committee realised that this was not the NIST Computer Forensics Tool Testing Program, but a blog. With that pressure off, they created a very simple scoring metric.
First, to get everyone on the same page, consider the following: Say MasterChef Australia is having a pressure test, where each of the Top 25 need to bake a lemon meringue tart. Best tart wins an immunity pin.
Being the first contestant to separate your egg yolks from the whites is pretty cool, might even get some applause from the gantry. But, the proof will always be in the pudding, which is when you start whisking your whites for the meringue. If you did a messy job during the separation, you ain’t going to see firm glossy peaks forming, no matter how hard you whisk.
This then is typically where Maggie Beer and George come walking over to your bench and drop a comment like “a good meringue is hard to beat“. You get the point.
The Scoring System
In this round, the tools will be judged in two categories, each with 5 points up for grabs. These two categories are:
1_ Processing Progress Indication. We’ll be looking at how well the tool does at providing accurate and useful feedback during processing. “Does it matter?” you may ask… Well, it is the year of our Lord 2019. I can track the Uber Eats guy on my phone until he gets to my door. Similarly, I expect a forensic tool to at least provide some progress indication, other than just “go away, I’m still busy”.
2_ Time to Completion. Yes, the big one. Pretty straight forward. How long did it take to complete the processing task.
Points will be awarded in the form of limited edition (and much coveted across the industry) #ForensicMania challenge coins:
Side note_I initially planned on putting a bunch more categories in adjudicating the processing phase (things like how customizable are the processing options, ease of use, can it make waffles etc) but it got a bit too complex and subjective. These tools have fairly different approaches to processing data, so let’s leave the nitpicking for next week when we start analyzing data.
This means there is a total of 10 points up for grabs in Episode 1.
Setting up processing
In order to keep these posts within a reasonable readable length, I’m not going to delve into each granular step that was followed. For each tool, I’ve provided the main points of what was selected in processing, as well as accompanying screenshots.
Full Searches on partitions, Unpartitioned space search on the unpartitioned space of the drive.
Keyword Search Types: Artifacts. Note: Axiom does not have the functionality to do a full text index of the entire drive’s contents, but only indexes known artifacts.
Searching of archives and mobile backups.
Hashing (MD5 and SHA1). Limited to files smaller than 500MB.
Enabled processing of the default custom file types.
All computer artifacts were selected
File Signature Analysis
Hashing (MD5 and SHA1)
File Carving: All available file types were selected
Advanced Options: All available options were selected (see screenshots)
File Signature Analysis
Hash Analysis (MD5 & SHA1)
Expand Compound Files
Find Internet Artifacts
Index text and Metadata
System Info Parser (All artifacts)
File Carver (All predefined file types, Only in Unallocated and Slack)
Windows Event Log Parser
Windows Artifact Parser (Including Search Unallocated)
For FTK, I used their built-in ‘Forensics’ processing profile, but tweaked it a bit.
Hashing (MD5 & SHA1)
Expand all available compound file types
Flag Bad Extensions
Search Text Index
Thumbnails for Graphics
Data Carving (Carving for all available file types)
Process Internet Browser History for Visualization
Generate System Information
To give each tool a fair chance, the MUS image was processed twice with each.
Results: Processing Progress Indication.
Here are the results for each tool’s ability to provide the user with adequate feedback regarding what is being processed:
Axiom’s processing window is quite easy to make sense of. It shows which evidence source is currently processing (partition specific), as well as which ‘search definition’ it’s currently on. During the testing, the percentage progress indicators also seemed reliable.
In the category of “Processing Progress Indication”, the adjudication committee scored Axiom: 5 out of 5.
BlackLight also has a great granular processing feedback window. For each partition, it shows what it’s busy processing as well as progress indicators. These were deemed reliable during the tests.
In the category of “Processing Progress Indication”, the adjudication committee scored Blacklight: 5 out of 5
EnCase’s processing window seems a bit all over the show. More like something you’ll look at for diagnostic info, not processing progress. It was a bit difficult to gauge what it was actually busy with. It does have a progress indicator showing a ‘percentage complete’ value, however, this was quite unreliable. When processing the MUS image, it hit 99% complete quite quickly and then continued processing for another hour at 99%, before completing. This happened with both tests. I again processed the same image on a different workstation and got similar results.
In the category of “Processing Progress Indication”, the adjudication committee scored EnCase: 3 out of 5.
FTK’s processing window is quite straightforward. Perhaps too much so. It does have an overall progress bar, although not entirely accurate, and shows which evidence item (e01) it’s currently processing. However, because you have no idea what it’s actually busy processing, it remains a waiting game to see how many files it discovers, processes and indexes. And once you think it’s done, you get a surprise with a couple of hours of “Database Optimization”.
In the category of “Processing Progress Indication”, the adjudication committee scored FTK: 3 out of 5.
Results: Time To Completion.
These are pretty straight forward. How long did it take to process the MUS image with the above noted processing settings?
Axiom took 52 minutes and 31 seconds to process the MUS image. Following this, the ‘building connections’ process took another 17 minutes and 25 seconds.
This gave Axiom a total of 1 hour, 9 minutes and 56 seconds.
BlackLight took 1 hour flat to process the image. Following this, the option was available to carve the Pagefile for various file types. This added another 14 minutes and 30 seconds.
This gave BlackLight a total of 1 hour, 14 minutes and 30 seconds.
EnCase took 1 hour, 23 minutes and 25 seconds.
No additional processing required, all jobs were completed in one go.
FTK took 59 minutes and 9 seconds to process and index the image. That’s faster than all the others… But, before you celebrate: Following the processing, FTK kicked off a “Database Optimization” process. This took another 2 hours and 17 minutes! Although it’s enabled by default, you can switch off this process in FTK’s database settings. However, according to the FTK Knowledge Base “Database maintenance is required to prevent poor performance and can provide recovery options in case of failures.” Seems like it’s something you’d rather want to run on your case.
This gave FTK a total of 3 hours, 16 minutes and 9 seconds.
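Summing the per-stage times reported above with `timedelta` makes the totals easy to check:

```python
from datetime import timedelta

# Per-stage processing times as reported for each tool
totals = {
    "Axiom":      timedelta(minutes=52, seconds=31) + timedelta(minutes=17, seconds=25),
    "BlackLight": timedelta(hours=1) + timedelta(minutes=14, seconds=30),
    "EnCase":     timedelta(hours=1, minutes=23, seconds=25),
    "FTK":        timedelta(minutes=59, seconds=9) + timedelta(hours=2, minutes=17),
}

# Print fastest to slowest
for tool, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{tool}: {total}")
```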
Let’s dish out some coins:
For winning the time challenge, Axiom gets 5/5
Not too much separated BlackBag and EnCase from Axiom; both get 4/5
And, bringing up the rear, taking almost 3 times as long as the others, FTK with 2/5
Before we look at the totals for this week, here is the result of the poll from last week:
Pretty much in line with what we saw this week…
Here’s your scoreboard after S01E01 of #ForensicMania
Tune in next week to see if Axiom can keep its narrow lead, whether BlackLight knows what to do with a Windows image and if FTK can pick itself up by its dongles. We’ll start with analyzing the MUS image, so stay tuned for all the drama, first and only on The Swanepoel Method.
Side note_It is still early days. Don’t go burning (or buying) any dongles after this post alone. The proof will be in the analysis capabilities of these tools, so check back next week.
I’ve long been wanting to publish comparisons between some of the big commercial Digital Forensic tools. After recently playing around with triage ideas with the MUS2018 CTF image compiled by Dave and Matt, I thought now is as good a time as any.
As we dig in, allow me to introduce you to hypothetical Jack. (Don’t worry, Jack is not a real person, but a photo generated by some funky algorithms on https://thispersondoesnotexist.com)
Jack would like to start his own Digital Forensic and Incident Response company in sunny South Africa. We’ll refer to this hypothetical company as DFIRJack Inc. DFIRJack Inc will focus on Windows Forensics for now. Following some Googling, Jack has come to a shortlist of commercial Digital Forensic tools that he wants to put through some tests. This is to aid him in making a final decision on where he should spend his hard earned cash.
Access Data FTKv7.0.0 (Date Released: Nov 2018)
BlackBag BlackLightv2018 R4 (Date Released: Dec 2018)
Magnet Forensics Axiomv2.9 (Date Released: Jan 2019)
Opentext EnCase v8.08 (Date Released: Nov 2018)
Side note 1_ Jack always thought that Blacklight was predominantly a Mac forensics tool, but after seeing posts on Twitter by one of their new training guys punting its Windows Forensic capabilities, he thought it couldn’t hurt to give it a shot.
Side note 2_ In the midst of writing this, Magnet released Axiom v2.10. By the time that I hit publish on this post, v2.11 will most likely be uploading for release. I’ll stick with version v2.9 for now. If you work for Magnet and want to persuade me with some swag to use v2.10 in this series going forward (or whatever version you’re going to be on next week Tuesday), send me a DM to negotiate.
Jack’s research has brought him to the conclusion that a single user license (the standard license for computer analysis, no cloud or mobile extras) will cost more or less the same for either FTK, Axiom or EnCase. Interestingly enough, he can buy two BlackLight licenses for the price of one of the other three.
After making some South African market related comparisons, Jack realized that he can either buy one of the aforementioned licenses (two in the case of BlackLight), or a secondhand 1992 Toyota Land Cruiser GX with 350,000km on the clock.
This is the GX:
Jack has long dreamt of buying a GX and taking the fam to the Central Kalahari Game Reserve (CKGR) in Botswana on an overland expedition. But that’ll have to wait, as it looks like he’ll be spending that money on a license dongle. What will it be? A GX or pure forensic joy? (Jack did find it odd that the only place where he can buy the licenses for these tools were from the same companies that he’ll be competing against with DFIRJack Inc. Kind of like the Bulls only being allowed to buy their Rugby kit from the Stormers.)
In order for Jack to decide which license dongle will take the place of his GX, he opted to put these tools through some head-to-head tests.
We’ll call it Forensic Mania
Forensic Mania will run for an undefined number of rounds or blog posts. (Undefined, yes, but most likely until I lose interest and move on to a new blog idea…)
For the first series, we’ll use the MUS2018 CTF image of Max Powers to run the tests. Why this image?
There are write ups available online of the answers, so you can run and verify your answers (here and here)
It’s small enough (50GB) to throw the kitchen sink at it, and all the tools should be able to swim.
It’s a Windows 10 image. Windows 10 was released in July 2015 and brought lots of new forensic artifacts with it. Almost four years later, I’d expect that the big forensic tools should be able to exploit this.
It’s my blog, so I make the rules. Get off my lawn.
Bias alert: The forensic image was created for a CTF set to run specifically at MUS2018. Did Matt & Dave design the CTF image to benefit Axiom? Maybe. But we’ll try and be as objective as possible.
Following this series, I’m planning to run similar style tests against more real world images to see how the tools hold up.
Having seen Eric Zimmerman’s release of KAPE (or Kale, as Ovie Carroll calls it), I thought it could be insightful to play around with the Triage idea some more.
Basic premise for this post was this:
For an Incident Response type case, how many answers can you get to by just grabbing and analyzing selective data (triage) versus full disk images?
With remote acquisition, acquiring only a few GB of data instead of full images can, in some cases, make a difference of a few hours – depending on network speed. The same calculation applies when it comes to processing the data.
To run this exercise, I dusted off the evidence files from the 2018 Vegas Magnet User Summit CTF. I managed to win the live CTF on the day, but didn’t get a full score. Oleg Skulkin and Igor Mikhaylov however did a write-up of the full CTF that we’re going to use.
For this test, I created a quick and dirty condition in EnCase that only targets specific data. Things like Registry files, Event logs, Browser Artifacts, File System Artifacts etc. A good place to start with a Triage list is to have a look at the SANS Windows Forensics “Evidence Of…” poster for areas of interest.
A condition in EnCase is basically a fancy filter, allowing you to filter for files with specific names, paths, sizes etc. Not that it matters, but I named my condition Wildehond, which is the Afrikaans name for Wild Dog or Painted Wolf. Wild dogs are known to devour their prey while it’s still alive, and that’s what we’re trying to do here… (You can YouTube it at your own risk).
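For illustration only, here’s the same idea sketched outside EnCase with plain `find`, run against a mock directory tree. The paths are simplified assumptions; a real triage list would mirror the categories on the poster.

```shell
# Build a mock mounted-image tree to demonstrate the idea
mkdir -p img/Windows/System32/config img/Windows/System32/winevt/Logs img/Users/max/Documents
touch img/Windows/System32/config/SOFTWARE \
      img/Windows/System32/winevt/Logs/Security.evtx \
      img/Users/max/Documents/notes.txt

# Grab registry hives and event logs; everything else stays behind
find img \( -path "*/System32/config/*" -o -name "*.evtx" \) -type f
```

In the mock tree above, only the SOFTWARE hive and Security.evtx are collected, while the user document is left alone.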
Running my Wildehond condition in EnCase on the Max Powers hard drive image, resulted in 2,279 files totaling 2.5GB. The mock image of Max Powers, the victim in the CTF, was originally 50GB. After running the condition I created a Logical Evidence File of the filtered triage files.
So, the question is, can you get a full score for the CTF from processing and analyzing 5% of the data?
First off, I processed the ‘full’ image in Axiom v2.9:
And selected all available artifacts to be included:
Processing ran for around 45 minutes, with another 15 minutes to build connections. That’s a round 60 minutes.
The processing resulted in about 727,000 artifacts:
Next up, I used the exact same processing settings on the 2.5GB Triage image I created with EnCase and Wildehond.
Processing took 13 minutes, with another minute to complete the connections. A cool 14 minutes in total. This left us with around 290,000 artifacts for analysis:
So yes, as expected, there is a large difference (45 minutes) in processing 2.5GB instead of 50GB. (This difference will be a lot bigger between a real-world 500GB drive and a 2.5GB triage set.)
But this doesn’t mean anything if we can’t get to the answers, so let’s go.
After running the processing, I did a side-by-side comparison between the two sets of data, and worked through the CTF questions on each side.
All of the questions were answerable on the full image processed with Axiom 2.9, except for three questions relating to the $MFT, where a tool like Eric Zimmerman’s MFTEcmd would do the trick.
This is how the two images did in providing answers:
So, with the Triage set of 2.5GB, we could answer 23 of the 28 Questions (82%… which is more than what I got for C++ at University).
However, real world incidents can differ quite a bit from question and answer style exercises, especially if you don’t know what exactly you are looking for.
For the five questions that could not be answered from the Triage set, here are the reasons why:
Wiped file names:
Strangely enough, the UsnJrnl did not parse from my Triage image.
From the full image:
However, nothing from my Triage system.
I confirmed that the file was present in my image:
So, to troubleshoot, I used Joachim Schicht’s UsnJrnl2Csv to try and parse the UsnJrnl that was in my Triage image.
And… It liked my UsnJrnl exported from the Triage image:
So… for some odd reason Axiom doesn’t recognize the $UsnJrnl:$J file when contained in my Triage LX01 image. Will do some more troubleshooting to figure out why this is the case.
Browser to download Dropbox:
From the full image, the answer was quite clear: Maxthon
Yes, my Triage image contains lots of artifacts referencing Maxthon and Dropbox separately, but no immediate obvious link that Maxthon was used to download Dropbox. The main reason for this is that I did not capture Maxthon web histories (i.e. mxundo.dat) in my Triage image.
The last two questions where my Triage image came up short related to Email. As no email was targeted with my Triage, this was to be expected.
So, there you have it. In this case, you could do a pretty good job at getting a handle on your case by only using Triage data.
Will full disk imaging and analysis not provide you with better context? Yes, perhaps… but with the likely trade-offs in Triaging, it’s worth exploring it first.
Every good blog post about time issues in forensics needs a theme song.
Today’s theme song is Ain’t nobody got time for that from the local band Rubber Duc:
Having a theme song, and more importantly, embedding the YouTube video for said theme song in your blog post, serves the following two purposes:
It keeps the reader here for 3 minutes and 18 seconds (when viewing it embedded on this page), which will make me and my post analytics think they actually spent time reading through the entire article.
It gets a song stuck in the reader’s head, ideal for when you go back to writing that report you’ve been putting off all week.
Now that we’ve got that out of the way, let’s get down to the business of the day:
Identifying Time changes in Windows Event Logs with L2t:
As you’d recall from my previous post, the aim of this series is to play around with quick things you can do at the beginning of an investigation, while, for example, waiting for processing to complete. Specifically, those ‘nice to know’ things that take only a couple of minutes to check…
Time changes on a system can make a simple investigation quite complex very quickly. A typical case is one where a user backdates a system before deleting or creating files.
The following steps should be enough to give you a quick view of user initiated time changes on a system. Remember, this is only to get a high level view, just enough to let you know you need to dig deeper.
First off, we start with processing only the Security and System event logs with Log2Timeline, followed by psort-ing it using the l2tcsv output format. The reason for having a look at the Security and System event logs is that Time change events are recorded in both. Often, the Security event log is quite busy, so chances are that historical events will get overwritten a lot quicker than those in the System event log. My current Security event log has 30,000 entries, with System only sitting at 10,000.
Now that we have an output file (in my case SecSysEvt.l2t.csv) which contains the L2T output from the Security.evtx and System.evtx, we can start grepping.
We’ll do this in two sections:
Dealing with time change events in the Security Event log (this post).
Dealing with time change events from the System event log (next post)
Security Event log
When a time change occurs on a Windows 7 or later system, Event ID 4616 fires. See more about this event at Ultimate Windows Security.
So let’s get grepping:
grep Security\.evtx SecSysEvt.l2t.csv
This will give us the events in our L2T output which came from our Security.evtx file (ignoring events from the System.evtx for now). In my case I have 27,884 Security.evtx events.
Next, we want to narrow it down to only Event ID 4616. The following should do the trick:
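The original filter was shown as a screenshot, so here is a sketch of it. Since the exact l2tcsv text varies by plaso version, the mock extract below only approximates the layout; adjust the patterns to your own output.

```shell
# Mock extract standing in for SecSysEvt.l2t.csv (layout approximated)
cat > SecSysEvt.sample.csv <<'EOF'
05/05/2018,12:00:00,Security.evtx EventID>4616 maxpowers SystemSettingsAdminFlows.exe
05/05/2018,12:00:01,Security.evtx EventID>4616 LOCAL SERVICE
05/05/2018,12:00:02,Security.evtx EventID>4624 maxpowers
05/05/2018,12:00:03,System.evtx EventID>1 Kernel-General time change
EOF

# Security.evtx rows only, then narrow down to Event ID 4616
grep "Security\.evtx" SecSysEvt.sample.csv | grep "EventID>4616"
```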
After this, we clear out some unwanted 4616 events. In this case we are excluding events that were not caused by user action. Remember, we want to know if a user was messing around with the system time.
To accomplish this, we exclude events containing LOCAL SERVICE as well as S-1-5-18:
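The exclusion step can be chained on with `grep -v`. Again, the event descriptions in the mock file are assumptions for illustration; the pattern is what matters.

```shell
# Mock 4616 rows (descriptions approximated for illustration)
cat > Sec4616.sample.csv <<'EOF'
05/05/2018,12:00:00,Security.evtx EventID>4616 SubjectUserName: maxpowers SystemSettingsAdminFlows.exe
05/05/2018,12:00:01,Security.evtx EventID>4616 SubjectUserName: LOCAL SERVICE svchost.exe
05/05/2018,12:00:02,Security.evtx EventID>4616 SubjectUserSid: S-1-5-18 services.exe
EOF

# Drop the OS-initiated time changes, keeping only user-driven ones
grep "EventID>4616" Sec4616.sample.csv | grep -v "LOCAL SERVICE" | grep -v "S-1-5-18"
```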
For this event log, there were 8 time changes, resulting from user actions. 6 by SystemSettingsAdminFlows.exe and 2 by dllhost.exe.
From what I can see on my Win10 test system, SystemSettingsAdminFlows.exe is responsible for making system time changes when a user uses the “Adjust Date\Time” option from the taskbar. I’m doing some more testing with regards to when dllhost.exe fires on Windows 10; so far I haven’t been able to replicate it…
Remember, this is just a pointer or a flag that gets raised to let you know that it might be useful to have a deeper look at time change events on a system.
Lastly, this grep should work on Windows 7 Security event logs as well (Haven’t tested it on Win8). I ran it on a couple of test Win7 systems, and it was good enough to show a specific application installed by a user was making regular time adjustments across these systems.
Next time, we’ll look at time change events in the System event log.
I have recently been thinking through ideas for some quick and dirty initial processes one can do at the start of an investigation.
This would typically be whilst you’re doing one of the following:
Waiting on full disk (including VSS) log2timeline processing to complete.
Waiting on Axiom to run the ‘build connections’ module because you forgot to enable the option prior to the initial processing phase.
Waiting on EnCase 8.07 to finish processing, although it’s been sitting at 100% for the last 2 hours.
Trying to figure out where you last saw your FTK dongle.
This brings us to a New Blog Series:
The aim of this post (and hopefully this series) is to play around with things you can do at the beginning of an investigation, while for example, waiting for processing to complete. Specifically things that could be of value to know at the beginning of an investigation.
And, that brings us to today’s post:
Finding failed logon events.
Identifying failed logon events in the Security event log of a system could mean a couple of things:
Someone is attempting to brute force an account.
<add a list of more possible reasons here>
The above extensive list provides good reason why it could be of value to have a quick squiz through a system’s Security Event logs for failed logon attempts.
As such, I wanted to know the following relating to failed logon events:
How many (if any) failed logon attempts were recorded in the system’s security event log.
Which accounts were attempted to log on with the most, as well as the logon types.
What were the top failed source IP addresses recorded.
What date(s) did the most failed logon attempts occur on.
Side note: The sample data I used for this post came from the image provided by Dave and Matt (The Forensic Lunch) as part of the MUS CTF. For more about the MUS CTF and the image, check here.
To answer these questions, here’s one quick and dirty way:
Process the Security event log with Log2Timeline (this took just over a minute to process 33,000 events from Security.evtx):
$ log2timeline.py mus.sec.evtx.l2t securityevt/
Run psort across the output using the l2tcsv format (this took 30 seconds to run):
$ psort.py -o l2tcsv -w mus.sec.evtx.csv mus.sec.evtx.l2t
This is where the fun starts. Because running log2timeline / psort on a Security event log should produce the same output structure each time, the same commands should keep working. (I tested this with Security event logs from Server 2012, Windows 7 and Windows 10, and it worked on all the different outputs.)
This may appear ugly, but it works.
Total Failed Logons: grep “EventID>4625” mus.sec.evtx.csv | wc -l
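The remaining questions can be answered with variations of the same `sort | uniq -c` pattern. The field labels inside the l2tcsv description column below are assumptions for demonstration, so check them against your own output before copying the patterns.

```shell
# Mock l2tcsv extract (field labels assumed; adjust to your own output)
cat > mus.sample.csv <<'EOF'
05/05/2018,10:01:02,EventID>4625 Account Name: Administrator Source Network Address: 10.1.1.5 Logon Type: 3
05/05/2018,10:01:03,EventID>4625 Account Name: Administrator Source Network Address: 10.1.1.5 Logon Type: 3
05/05/2018,10:01:04,EventID>4625 Account Name: backup Source Network Address: 10.1.1.9 Logon Type: 3
05/06/2018,09:00:00,EventID>4624 Account Name: bob Logon Type: 2
EOF

# Accounts most often attempted
grep "EventID>4625" mus.sample.csv | grep -Eo "Account Name: [^ ]+" | sort | uniq -c | sort -rn

# Top failed source IP addresses
grep "EventID>4625" mus.sample.csv | grep -Eo "Source Network Address: [0-9.]+" | sort | uniq -c | sort -rn

# Failed logons per day (l2tcsv puts the date in the first column)
grep "EventID>4625" mus.sample.csv | cut -d, -f1 | sort | uniq -c | sort -rn
```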
We can now see that there were 612 failed Type 3 logon attempts, all on May 5th 2018. It also shows us that the Administrator account was the one most often attempted, as well as the top IP addresses the logon attempts came from.
Now, if you’re not going to read Phill’s blog and just opened this article because of your innate love for Tom Cruise and bad Top Gun puns, shame on you.
Son, before your ego starts writing checks your body can’t cash, let’s at least assume we all agree on the following:
A ZoneIdentifier ADS is an extra piece of information stored with downloaded files. This is done to assist Windows in determining if a file should be trusted or not. For example, an executable file downloaded from the internet will be treated with the necessary suspicion based on the zone it came from (i.e. the Internet).
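For reference, a plain Zone.Identifier stream (before the Windows 10 additions discussed below existed) contains little more than the zone itself, where ZoneId 3 means the Internet zone:

```
[ZoneTransfer]
ZoneId=3
```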
Phill’s testing has highlighted two additional fields, ReferrerUrl and HostUrl, that are being stored within the Zone.Identifier:
This is a great source of information as it can assist in determining where (URL) a downloaded file originated from.
A bit of Googling revealed the following response to a Bugzilla report by a Windows Defender ATP team member regarding the addition of these fields in Windows 10:
This feature was added in Windows 10 release 1703 (build 15063).
The HostUrl and ReferrerUrl are set by Microsoft Edge and Google Chrome.
Edge also sets a HostIpAddress field.
It is used for protection purposes.
Specifically, Microsoft’s Windows Defender Advanced Threat
Protection exposes this info to the SOC, who can then identify where
attacks came from, which other downloads might be related, and
I don't know which other products/tools use this feature.
(from the Windows Defender Advanced Threat Protection team)
I haven’t seen the HostIpAddress field before, so I decided to run similar tests with three browsers, identical to those used by Phill:
Firefox 60.0.2 (64-bit)
Chrome Version 67.0.3396.87 (Official Build) (64-bit)
Microsoft Edge 42.17134.1.0
For my tests, I downloaded the file RegistryExplorer_RECmd.zip with each browser from the following URL:
Firefox behaved as expected with no additional fields added to the Zone.Identifier:
Chrome added the ReferrerUrl and HostUrl as follows:
In my case, Edge also added the ReferrerUrl and HostUrl:
This is interesting as it differs from Phill’s testing. Will compare notes to see if there’s a specific reason for this.
Archives, Zone.Identifiers & ReferrerUrls
Now, if you’re one of those analysts who won’t be happy unless you’re going Mach 2 with your hair on fire, you’ll like this:
If you use the built-in Windows “Extract All” option to extract the downloaded archives, you get a Zone.Identifier for each extracted file:
Note: when testing the same by extracting the archive with 7zip, it did not create the Zone.Identifiers for the extracted files.
In addition to the zones, the Zone.Identifier now records the path of the parent archive where the extracted files originated from in the ReferrerUrl field:
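In other words, an extracted file’s Zone.Identifier ends up looking something like this (the path shown is illustrative, not from the actual test system):

```
[ZoneTransfer]
ZoneId=3
ReferrerUrl=C:\Users\<user>\Downloads\RegistryExplorer_RECmd.zip
```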
Not only are you now able to determine from which URL a downloaded file originated, you may also be able to track an extracted file back to its original archive.
Copying files to an external hard drive
“But Maverick” you interject, “what happens when the files are copied to an external hard drive?”
“Fear not Goose, the lovely thing about Zone.Identifiers are that they travel oh so well.”
Copying the downloaded zip to an NTFS formatted external hard drive still kept the Zone.Identifier intact:
The same was found for the Zone.Identifiers for the extracted files:
Till next time…
Welcome to Next Time.
Thanks to Paul Bryant (see comments below the post) we have more ‘clarity’ on when Edge will add a HostIPAddress field to downloaded files.
Saving the Streams.zip with Edge:
The following DOES NOT store a HostIPAddress:
1. Clicking on a file link to directly download the file.
2. Right-clicking on a file link > Save Target As > directly clicking save without changing the path.
The following DOES store a HostIPAddress:
1. Right-clicking on a file link > Save Target As > changing the target directory and saving the file.
2. Right-clicking on a file link > Save Target As > changing the target directory to something else, and then changing the target dir back to the original default folder.
Here is a sample of a Zone.Identifier containing a HostIpAddress for a file downloaded with Edge, where the target directory was changed a couple of times and then changed back to the Downloads dir:
So now, calculate how many users are on Windows 10, use Edge as their browser, and are “Right-Clicking, Save Target Assing, Change Dirring” when they save data.
That’s how often you’ll see the HostIPAddress field in a Zone.Identifier (that I know of).
During the latter part of 2017, Apple introduced their APFS file system which is being rolled out with their High Sierra macOS.
The following section was taken from an Apple support article:
When you install macOS High Sierra on the Mac volume of a solid-state drive (SSD) or other all-flash storage device, that volume is automatically converted to APFS. Fusion Drives, traditional hard disk drives (HDDs), and non-Mac volumes aren’t converted. You can’t opt out of the transition to APFS.
Although there are a couple of articles floating around which show ways to ‘opt out’ of APFS, it is still likely that 99% of High Sierra systems with solid-state drives you’re going to come across will be running APFS.
Now, picture this scenario:
You are stuck on an island with a forensic image of an APFS volume and a toolbox full of your favorite commercial forensic tools. Contained in the APFS volume is a backup of an iPhone 6s which contains a WhatsApp message with the instructions on how to make one mean coconut Mojito. You need to access said message in order to make the Mojito before sunset. Should you fail, you’ll be forced to do manual USB device history analysis for 26 Windows 7 internet café PCs, after which, you may or may not get eaten by that thing that was eating people in Lost.
So, your options:
Blackbag’s BlackLight — Yes, it works.
Autopsy — No support as of version 4.7.
AccessData FTK — No support as of version 6.4. Their online tech support noted that APFS support is planned for future releases, however no ETA yet.
Magnet Forensics Axiom — No support as of the current version. Jad Saliba mentioned at the Magnet User Summit in Las Vegas (May 2018) that they’re currently working on it, but no ETA yet.
OpenText EnCase — Officially: Yes, Unofficially: Sort of. Although EnCase announced APFS support in version 8.07, I’ve dealt with two separate Macs where EnCase is refusing to parse the APFS volumes. I’ve put one of the images through a few tests. The image happily parses with Blackbag’s BlackLight and mounts with both Paragon‘s APFS mounter and Simon Gander’s APFS-Fuse library. OpenText Tech support is currently looking into this.
X-Ways — No support in version 19.6; however, according to this tweet from Eric, it should be coming soon:
After your confidence grows while scrolling through the heaps of tweets about Blackbag being ‘the only end-to-end solution for APFS’, you realize that your 30 day trial license has just expired… As you were about to accept your fate and Google “sans usb profiling cheat sheet“, you find two articles from Mari Degrazia on mounting APFS images:
As the daylight starts to fade and you try and remember how many episodes of Lost you actually watched before losing interest, you devise a new plan:
Plan B: Quick and dirty way to process APFS with Axiom and friends.
I was specifically looking for a way to get my APFS image parsed with Axiom.
The following approaches did not work:
Mount E01 with Arsenal Image Mounter > Mount resulting APFS partition with Paragon’s ‘APFS for Windows’ > Add files & folders in Axiom.
Result: It processed, but for some files Axiom wasn’t properly linking back to the actual source files to display their content. Not sure whose fault it is, but most likely something to do with the mounting of a mounted image.
Mount E01 with Arsenal Image Mounter > Mount APFS partition with Paragon’s ‘APFS for Windows’ > Create AD Image with FTK Imager > Process AD Image with Axiom.
Result: It processed, but again had issues with displaying actual content for some of the files processed. During the creation of the AD Image, FTK Imager encountered a large volume of files it claimed couldn’t be added to the logical image, again likely due to the various mountings.
Mount E01 in SIFT with ewfmount (libewf) > mount APFS partition with APFS-fuse > Create a tar of mounted data > Process tar with Axiom
Result: Again got a similar result where Axiom processed the data, but didn’t display actual content for some files.
At this stage most island-stricken forensicators would have given up and resigned themselves to a life of USBSTORs and Volume GUIDs. But luckily, you’re not most forensicators and you try one more way:
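Piecing it together from the earlier attempts (ewfmount plus apfs-fuse) and the DD-container note that follows, the final approach looks roughly like this. This is an untested sketch with assumed paths, sizes and file system choice, not a verified recipe:

```shell
# Mount the E01 and the APFS volume as before
mkdir -p /mnt/ewf /mnt/apfs /mnt/dd
ewfmount image.E01 /mnt/ewf
apfs-fuse /mnt/ewf/ewf1 /mnt/apfs        # may need a partition offset

# Create an empty DD container, give it a file system, and loop-mount it
dd if=/dev/zero of=apfs_data.dd bs=1M count=20480   # size it to fit the data
mkfs.ext4 apfs_data.dd
mount -o loop apfs_data.dd /mnt/dd

# Copy the mounted APFS data into the container, then detach everything
cp -R /mnt/apfs/. /mnt/dd/
umount /mnt/dd /mnt/apfs /mnt/ewf
```

From there, apfs_data.dd can be handed to your tool of choice as a normal image.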
Axiom was happy to process the DD, as well as the iPhone backup which was contained on the APFS volume in one go.
And yes, copying the mounted data to a DD container will update the creation dates of the files. If this makes you feel uneasy, remember, you also just used an ‘experimental’ driver to mount an APFS volume.
At least the thing from Lost didn’t eat you… #winning