EnCase you were hoping to parse Firefox

[Update 2019-03-10] I’ve added the version numbers of Axiom, EnCase and FTK used. Also added details about the EnCase Firefox support update coming in the next release.

So, last night, after watching the Forensic Dinner (yeah yeah it’s the Forensic Lunch, but hello time zones) I was busy with some testing for #ForensicMania.

Dealing with a simple question (‘What was searched for in Youtube on xx date’), I hit a bit of a speed bump in EnCase. In short, I couldn’t get to the answer in EnCase for Youtube web histories viewed in Firefox. It was late, so I wasn’t sure whether I was to blame, or EnCase. With that, I stopped with the #ForensicMania stuff and thought, let’s do some targeted testing.

The next morning (today), I decided to do a quick and simple test:

  • Conduct a few searches in Chrome and Firefox
  • Parse the web histories with Axiom, EnCase and FTK
  • Compare the results

I fired up Chrome and Firefox, and made sure they were up to date:


With last night’s Forensic Lunch still fresh in my mind, I Googled the following between 11:00 and 12:00 on 2019-03-09.

The same searches were done with Chrome first, and then with Firefox.

Google search: Is lee whitfield brittish?
Result opened: https://www.sans.org/instructors/lee-whitfield

Google search: How do you spell british?
Result opened: https://en.oxforddictionaries.com/spelling/british-and-spelling

Google search: Where did Matt get the cool blue sunglasses?
Result opened: https://www.menshealth.com/style/a26133544/matthew-mcconaughey-blue-colored-sunglasses/

Google search: Why is no one having lunch on the Forensic Lunch?
Result opened: https://www.youtube.com/user/LearnForensics/videos

Youtube search: “drummer at the wrong gig”
Video played: https://www.youtube.com/watch?v=ItZyaOlrb7E

And then played this one from the Up Next bar:
https://www.youtube.com/watch?v=RvatDKpc0SU

Google search: Can you nominate yourself in the Forensic 4Cast awards?
Result opened: https://www.magnetforensics.com/blog/nominate-magnet-forensics-years-forensic-4cast-awards/


Following this, I created a logical image of the Chrome and Firefox histories on my laptop with EnCase. The total size for the histories was 3GB. (Yes, lots of historic stuff included there as well.)

So the test is pretty straightforward: can I get to the above-listed searches and web histories in Axiom, FTK and EnCase? Let’s see:


Axiom (v2.10)

Parsing the logical image in Axiom gave us the following for ‘Web related’ artifacts:

Chrome

Firefox

Result: Great Success


FTK (v7.0)

Same thing, processed the image and got the following from the ‘Internet’ tab:

Chrome

Firefox

Again: Great Success


Now, let’s fire up the ‘2019 SC Magazine Winner’ for ‘Best Computer Forensic Solution’…

EnCase (v8.08):

After processing the image with EnCase, we hobble on over to the ‘Artifact’ tab and open the ‘Internet Records’ section.

First up, Chrome histories:

Great, it works as expected.

Next up, Firefox (The browser with 840,689,200 active users in the past 365 days)

And this is where we ran into trouble: EnCase was able to parse Firefox Cookies and some cache files, but for the life of me I couldn’t get to any actual browsing histories.
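
Side note_ If you ever need to sanity-check what Firefox itself has recorded, you don’t need a forensic suite for it: you can query places.sqlite from the profile directly with the sqlite3 command line client. A rough sketch (Firefox stores last_visit_date as microseconds since epoch, hence the division):

sqlite3 places.sqlite "SELECT datetime(last_visit_date/1000000,'unixepoch'), url, title FROM moz_places ORDER BY last_visit_date DESC LIMIT 25;"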

I suspect that, as shown on the processing window, EnCase only supports Firefox up to v51.0.0. The current Firefox version is v65.

Firefox version 51.0.0 was released to channel users on January 24th 2017. That is the same month when Ed Sheeran released his single “Shape of You”. (And now you can’t unsee the singing dentist guy covering the song)

What I’m trying to say is that Firefox v51 is old.

I’ve logged a query with OpenText about this and will update this post if and when I get feedback. (Really hoping this is something I’m doing wrong, but we’ll see.)

[Update 2019-03-10: EnCase v8.09, set for release in April, is said to have updated Firefox support]


What’s the point of this post?

  1. Test stuff. If something doesn’t look right, test it.
  2. You don’t need test images to test your tools. If you have a laptop or a mobile phone, then you have test data.
  3. Don’t assume stuff. If my results above are correct, there’s a good chance you could have missed crucial Firefox data if you were only relying on EnCase.
  4. If I’m wrong, then at least I’ll hopefully know pretty soon how to get EnCase to parse Firefox histories correctly… and someone else might learn something too.

#ForensicMania S01E01 – Processing

Welcome to Forensic Mania 2019 – Episode 1. If you’re new to #ForensicMania, catch the full lowdown here.


To recap, we are testing the latest versions of four of the big commercial forensic tools against the MUS2018 CTF image.

Side note_ Following my intro post, promises were made by certain Magnet folk (you can run but you can’t Hyde). So I reprocessed with the newly released version of Axiom, v2.10. If said promises aren’t kept, we might need to roll back to version 1.0.9 just for fun.


EPISODE 1

Today we’ll be running through processing the MUS forensic image with the four tools.


Analysis Workstation Details

For these tests, we will be using a Dell workstation, with the following specs:

  • Intel Xeon Gold 6136 CPU.
  • 128GB RAM.
  • Windows 10 Pro.
  • OS Drive: 2.5″ AData SSD.
  • Case directories and the MUS2018 image file were located on separate Samsung M.2 SSDs.

How does the scoring work?

The scoring for this section kept the adjudication committee deadlocked in meetings for weeks, grappling with the question: “How do you score forensic tools on processing, in a fair manner?” After a few heated arguments, the committee realised that this was not the NIST Computer Forensics Tool Testing Program, but a blog. With that pressure off, they created a very simple scoring metric.

First, to get everyone on the same page, consider the following: Say MasterChef Australia is having a pressure test, where each of the Top 25 need to bake a lemon meringue tart. Best tart wins an immunity pin.

Being the first contestant to separate your egg yolks from the whites is pretty cool, might even get some applause from the gantry. But, the proof will always be in the pudding, which is when you start whisking your whites for the meringue. If you did a messy job during the separation, you ain’t going to see firm glossy peaks forming, no matter how hard you whisk.

This then is typically where Maggie Beer and George come walking over to your bench and drop a comment like “a good meringue is hard to beat“.
You get the point.


The Scoring System

In this round, the tools will be judged in two categories, each with 5 points up for grabs. These two categories are:

1_ Processing Progress Indication. We’ll be looking at how well the tool does at providing accurate and useful feedback during processing. “Does it matter?” you may ask… Well, it is the year of our Lord 2019. I can track the Uber Eats guy on my phone until he gets to my door. Similarly, I expect a forensic tool to at least provide some progress indication, other than just “go away, I’m still busy”.

2_ Time to Completion. Yes, the big one. Pretty straightforward: how long did it take to complete the processing task?

Points will be awarded in the form of limited edition (and much coveted across the industry) #ForensicMania challenge coins:

Side note_ I initially planned on putting a bunch more categories in adjudicating the processing phase (things like how customizable are the processing options, ease of use, can it make waffles etc) but it got a bit too complex and subjective. These tools have fairly different approaches to processing data, so let’s leave the nitpicking for next week when we start analyzing data.

This means there is a total of 10 points up for grabs in Episode 1.


Setting up processing

In order to keep these posts within a reasonable readable length, I’m not going to delve into each granular step that was followed. For each tool, I’ve provided the main points of what was selected in processing, as well as accompanying screenshots.


Axiom

  • Full Searches on partitions, Unpartitioned space search on the unpartitioned space of the drive.
  • Keyword Search Types: Artifacts. Note: Axiom does not have the functionality to do a full text index of the entire drive’s contents, but only indexes known artifacts.
  • Searching of archives and mobile backups.
  • Hashing (MD5 and SHA1). Limited to files smaller than 500MB.
  • Enabled processing of the default custom file types.
  • All computer artifacts were selected

BlackLight

  • File Signature Analysis
  • Picture Analysis
  • Video Analysis
  • Hashing (MD5 and SHA1)
  • File Carving: All available file types were selected
  • Advanced Options: All available options were selected (see screenshots)

EnCase

  • File Signature Analysis
  • Thumbnail Creation
  • Hash Analysis (MD5 & SHA1)
  • Expand Compound Files
  • Find Email
  • Find Internet Artifacts
  • Index text and Metadata
  • Modules:
      • System Info Parser (All artifacts)
      • File Carver (All predefined file types, only in Unallocated and Slack)
      • Windows Event Log Parser
      • Windows Artifact Parser (Including Search Unallocated)

FTK

For FTK, I used their built-in ‘Forensics’ processing profile, but tweaked it a bit.

  • Hashing (MD5 & SHA1)
  • Expand all available compound file types
  • Flag Bad Extensions
  • Search Text Index
  • Thumbnails for Graphics
  • Data Carving (Carving for all available file types)
  • Process Internet Browser History for Visualization
  • Generate System Information

To give each tool a fair chance, the MUS image was processed twice with each.


Results: Processing Progress Indication.

Here are the results for each tool’s ability to provide the user with adequate feedback regarding what is being processed:

Axiom

Axiom’s processing window is quite easy to make sense of. It shows which evidence source is currently processing (partition specific), as well as which ‘search definition’ it’s currently on. During the testing, the percentage progress indicators also seemed reliable.

In the category of “Processing Progress Indication”, the adjudication committee scored Axiom: 5 out of 5.


BlackLight

BlackLight also has a great granular processing feedback window. For each partition, it shows what it’s busy processing, as well as progress indicators. These were deemed reliable during the tests.

In the category of “Processing Progress Indication”, the adjudication committee scored BlackLight: 5 out of 5.


EnCase


EnCase’s processing window seems a bit all over the show. More like something you’ll look at for diagnostic info, not processing progress. It was a bit difficult to gauge what it was actually busy with. It does have a progress indicator showing a ‘percentage complete’ value, however, this was quite unreliable. When processing the MUS image, it hit 99% complete quite quickly and then continued processing for another hour at 99%, before completing. This happened with both tests. I again processed the same image on a different workstation and got similar results.

In the category of “Processing Progress Indication”, the adjudication committee scored EnCase: 3 out of 5.


FTK

FTK’s processing window is quite straightforward. Perhaps too much so. It does have an overall progress bar, although not entirely accurate, and shows which evidence item (e01) it’s currently processing. However, because you have no idea what it’s actually busy processing, it remains a waiting game to see how many files it discovers, processes and indexes. And once you think it’s done, you get a surprise with a couple of hours of “Database Optimization”.

In the category of “Processing Progress Indication”, the adjudication committee scored FTK: 3 out of 5.



Results: Time To Completion.

These are pretty straight forward. How long did it take to process the MUS image with the above noted processing settings?

Axiom

Axiom took 52 minutes and 31 seconds to process the MUS image. Following this, the ‘building connections’ process took another 17 minutes and 25 seconds.

This gave Axiom a total of 1 hour, 9 minutes and 56 seconds.


BlackLight

BlackLight took 1 hour flat to process the image. Following this, the option was available to carve the Pagefile for various file types. This added another 14 minutes and 30 seconds.

This gave BlackLight a total of 1 hour, 14 minutes and 30 seconds.



EnCase

EnCase took 1 hour, 23 minutes and 25 seconds.

No additional processing required, all jobs were completed in one go.


FTK

FTK took 59 minutes and 9 seconds to process and index the image. That’s faster than all the others… But, before you celebrate: following the processing, FTK kicked off a “Database Optimization” process. This took another 2 hours and 17 minutes! Although it’s enabled by default, you can switch off this process in FTK’s database settings. However, according to the FTK Knowledge Base: “Database maintenance is required to prevent poor performance and can provide recovery options in case of failures.” Seems like it’s something you’d rather want to run on your case.

This gave FTK a total of 3 hours, 16 minutes and 9 seconds.


Let’s dish out some coins:

For winning the time challenge, Axiom gets 5/5

Not too much separated BlackLight and EnCase from Axiom; both get 4/5

And, bringing up the rear, taking almost 3 times as long as the others, FTK with 2/5


Before we look at the totals for this week, here is the result of the poll from last week:

Pretty much in line with what we saw this week…

Here’s your scoreboard after S01E01 of #ForensicMania


What’s Next?

Tune in next week to see if Axiom can keep its narrow lead, whether BlackLight knows what to do with a Windows image, and if FTK can pick itself up by its dongles. We’ll start with analyzing the MUS image, so stay tuned for all the drama, first and only on The Swanepoel Method.

Side note_ It is still early days. Don’t go burning (or buying) any dongles after this post alone. The proof will be in the analysis capabilities of these tools, so check back next week.

Announcing: Forensic Mania 2019

I’ve long been wanting to publish comparisons between some of the big commercial Digital Forensic tools. After recently playing around with triage ideas with the MUS2018 CTF image compiled by Dave and Matt, I thought now is as good a time as any.


Meet Jack

As we dig in, allow me to introduce you to hypothetical Jack. (Don’t worry, Jack is not a real person, but a photo generated by some funky algorithms on https://thispersondoesnotexist.com)

Jack would like to start his own Digital Forensic and Incident Response company in sunny South Africa. We’ll refer to this hypothetical company as DFIRJack Inc. DFIRJack Inc will focus on Windows Forensics for now. Following some Googling, Jack has come to a shortlist of commercial Digital Forensic tools that he wants to put through some tests. This is to aid him in making a final decision on where he should spend his hard earned cash.


The Tools

  • AccessData FTK v7.0.0 (Date Released: Nov 2018)
  • BlackBag BlackLight v2018 R4 (Date Released: Dec 2018)
  • Magnet Forensics Axiom v2.9 (Date Released: Jan 2019)
  • OpenText EnCase v8.08 (Date Released: Nov 2018)

Side note 1_ Jack always thought that BlackLight was predominantly a Mac forensics tool, but after seeing posts on Twitter by one of their new training guys punting its Windows Forensic capabilities, he thought it couldn’t hurt to give it a shot.

Side note 2_ In the midst of writing this, Magnet released Axiom v2.10. By the time that I hit publish on this post, v2.11 will most likely be uploading for release. I’ll stick with version v2.9 for now. If you work for Magnet and want to persuade me with some swag to use v2.10 in this series going forward (or whatever version you’re going to be on next week Tuesday), send me a DM to negotiate.


The Cost

Jack’s research has brought him to the conclusion that a single user license (the standard license for computer analysis, no cloud or mobile extras) will cost more or less the same for FTK, Axiom and EnCase. Interestingly enough, he can buy two BlackLight licenses for the price of one of the other three.

After making some South African market related comparisons, Jack realized that he can either buy one of the aforementioned licenses (two in the case of BlackLight), or a secondhand 1992 Toyota Land Cruiser GX with 350,000km on the clock.

This is the GX:

Jack has long dreamt of buying a GX and taking the fam to the Central Kalahari Game Reserve (CKGR) in Botswana on an overland expedition. But that’ll have to wait, as it looks like he’ll be spending that money on a license dongle. What will it be? A GX or pure forensic joy? (Jack did find it odd that the only places where he could buy licenses for these tools were the same companies that he’ll be competing against with DFIRJack Inc. Kind of like the Bulls only being allowed to buy their Rugby kit from the Stormers.)


The Plan

In order for Jack to decide which license dongle will take the place of his GX, he opted to put these tools through some head-to-head tests.


We’ll call it Forensic Mania



Forensic Mania will run for an undefined number of rounds or blog posts. (Undefined, yes, but most likely until I lose interest and move on to a new blog idea…)


Series 1

For the first series, we’ll use the MUS2018 CTF image of Max Powers to run the tests. Why this image?

  1. The forensic image is publicly available (here)
  2. There are write ups available online of the answers, so you can run and verify your answers (here and here)
  3. It’s small enough (50GB) to throw the kitchen sink at it, and all the tools should be able to swim.
  4. It’s a Windows 10 image. Windows 10 was released in July 2015 and brought lots of new forensic artifacts with it. Almost four years later, I’d expect that the big forensic tools should be able to exploit this.
  5. It’s my blog, so I make the rules. Get off my lawn.

Bias alert: The forensic image was created for a CTF set to run specifically at MUS2018. Did Matt & Dave design the CTF image to benefit Axiom? Maybe. But we’ll try and be as objective as possible.

Following this series, I’m planning to run similar style tests against more real world images to see how the tools hold up.


What’s next?

Episode 1 – Processing is coming soon…


What can you do?

You can vote! <Voting has now closed>

Check back soon…



Calculating the Cost: Triaging with Axiom and EnCase


Having seen Eric Zimmerman’s release of KAPE (or Kale, as Ovie Carroll calls it), I thought it could be insightful to play around with the Triage idea some more.

Basic premise for this post was this:

For an Incident Response type case, how many answers can you get to by grabbing and analyzing only selective data (triage), versus full disk images?

With remote acquisition, acquiring only a few GB of data instead of full images can, in some cases, make a difference of a few hours, depending on network speed. The same calculation applies when it comes to processing the data.
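
For a rough sense of scale: at an effective 100Mbit/s, a 50GB image takes a bit over an hour to pull across the wire, while a 2.5GB triage set comes over in under four minutes. Scale that up to a real-world 500GB drive and you’re looking at more than 11 hours versus the same four minutes.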


To run this exercise, I dusted off the evidence files from the 2018 Vegas Magnet User Summit CTF. I managed to win the live CTF on the day, but didn’t get a full score. Oleg Skulkin and Igor Mikhaylov however did a write-up of the full CTF that we’re going to use.

You can check out their write-ups here:

https://cyberforensicator.com/2018/06/28/magnet-user-summit-ctf-anti-forensics/
https://cyberforensicator.com/2018/07/01/magnet-user-summit-ctf-exfiltration/
https://cyberforensicator.com/2018/06/29/magnet-user-summit-ctf-misc/
https://cyberforensicator.com/2018/07/01/magnet-user-summit-ctf-intrusion/

For this test, I created a quick and dirty condition in EnCase that only targets specific data: things like Registry files, Event logs, Browser Artifacts, File System Artifacts etc. A good place to start with a Triage list is the SANS Windows Forensics “Evidence Of…” poster for areas of interest.

A condition in EnCase is basically a fancy filter, allowing you to filter for files with specific names, paths, sizes etc. Not that it matters, but I named my condition Wildehond, which is the Afrikaans name for Wild Dog or Painted Wolf. Wild dogs are known to devour their prey while it’s still alive, and that’s what we’re trying to do here… (You can Youtube it at your own risk).
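
If you don’t have EnCase handy and want a feel for the same idea, here’s a hypothetical sketch of such a triage filter using find against a mounted copy of an image (the paths below are illustrative, not the actual Wildehond list):

find /mnt/image \
  \( -path '*/Windows/System32/config/*' \
  -o -path '*/Windows/System32/winevt/Logs/*.evtx' \
  -o -ipath '*/appdata/*mozilla*' \
  -o -ipath '*/appdata/*chrome*' \) \
  -type f -print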

Running my Wildehond condition in EnCase on the Max Powers hard drive image, resulted in 2,279 files totaling 2.5GB. The mock image of Max Powers, the victim in the CTF, was originally 50GB. After running the condition I created a Logical Evidence File of the filtered triage files.


So, the question is, can you get a full score for the CTF from processing and analyzing 5% of the data?

Let’s try.

First off, I processed the ‘full’ image in Axiom v2.9:

And selected all available artifacts to be included:

Processing ran for around 45 minutes, with another 15 minutes to build connections. That’s a round 60 minutes.

The processing resulted in about 727,000 artifacts:


Next up, I used the exact same processing settings on the 2.5GB Triage image I created with EnCase and Wildehond.

Processing took 13 minutes, with another minute to complete the connections. A cool 14 minutes in total. This left us with around 290,000 artifacts for analysis:

So yes, as expected, there is a large difference (45 minutes) in processing 2.5GB instead of 50GB. (This difference will be a lot bigger between a real-world 500GB drive and a 2.5GB triage set.)

But this doesn’t mean anything unless we can still get to the answers, so let’s go.


After running the processing, I did a side-by-side comparison between the two sets of data, and worked through the CTF questions on each side.

All of the questions were answerable on the full image processed with Axiom 2.9, except for three questions relating to the $MFT, where a tool like Eric Zimmerman’s MFTECmd would do the trick.
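
For the record, getting those answers out of an exported $MFT with MFTECmd is a one-liner; something like the below, where the export and output paths are my assumptions:

MFTECmd.exe -f "C:\Cases\MUS\$MFT" --csv "C:\Cases\MUS\out"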

This is how the two images did in providing answers:

So, with the Triage set of 2.5GB, we could answer 23 of the 28 Questions (82%… which is more than what I got for C++ at University).

However, real world incidents can differ quite a bit from question and answer style exercises, especially if you don’t know what exactly you are looking for.


For the 5 questions that could not be answered from the Triage set, below are the reasons why:

Wiped file names:

Strangely enough, the UsnJrnl did not parse in my Triage image.

From the full image:

However, nothing from my Triage system.

I confirmed that the file was present in my image:

So, to troubleshoot, I used Joachim Schicht’s UsnJrnl2Csv to try and parse the UsnJrnl that was in my Triage image.

And… It liked my UsnJrnl exported from the Triage image:

So… for some odd reason Axiom doesn’t recognize the $UsnJrnl:$J file when contained in my Triage LX01 image. Will do some more troubleshooting to figure out why this is the case.

Browser to download Dropbox:

From the full image, the answer was quite clear: Maxthon

Yes, my Triage image contains lots of artifacts referencing Maxthon and Dropbox separately, but no immediate obvious link that Maxthon was used to download Dropbox. The main reason for this is that I did not capture Maxthon web histories (i.e. mxundo.dat) in my Triage image.

Email data:

The last two questions where my Triage image came up short related to Email. As no email was targeted with my Triage, this was to be expected.



So, there you have it. In this case, you could do a pretty good job of getting a handle on your case by using only Triage data.

Will full disk imaging and analysis not provide you with better context? Yes, perhaps… but even with the trade-offs, Triaging is worth exploring first.

Detecting Time Changes with L2T (Ain’t Nobody Got Time For That)

Every good blog post about time issues in forensics needs a theme song.

Today’s theme song is Ain’t nobody got time for that from the local band Rubber Duc:

Having a theme song, and more importantly, embedding the Youtube video for said theme song in your blog post, serves the following two purposes:

  1. It keeps the reader here for 3 minutes and 18 seconds (when viewing it embedded on this page), which will make me and my post analytics think that they actually spent time reading through the entire article.
  2. Gets a song stuck in the reader’s head, ideal for when you go back to writing that report you’ve been putting off all week.

Now that we got that out of the way, let’s get down to the business of the day:

Identifying Time changes in Windows Event Logs with L2t:

As you’d recall from my previous post, the aim of this series is to play around with quick things you can do at the beginning of an investigation, while, for example, waiting for processing to complete. Specifically, those ‘nice to know’ things that take only a couple of minutes to check…

Time changes on a system can make a simple investigation quite complex very quickly. A typical sample case is one where a user backdates a system before deleting or creating files.

The following steps should be enough to give you a quick view of user initiated time changes on a system. Remember, this is only to get a high level view, just enough to let you know you need to dig deeper.

Let’s start:

Step 1

First off, we process only the Security and System event logs with Log2Timeline, followed by psort-ing the output using the l2tcsv output format. The reason for looking at both the Security and System event logs is that time change events are recorded in both. Often, the Security event log is quite busy, so chances are that historical events will get overwritten a lot quicker than those in the System event log. My current Security event log has 30,000 entries, with System only sitting at 10,000.
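
In Plaso terms that boils down to something like the following, assuming you’ve copied Security.evtx and System.evtx into a folder called evtx/ (the commands mirror those from the failed logons post):

$ log2timeline.py SecSysEvt.l2t evtx/
$ psort.py -o l2tcsv -w SecSysEvt.l2t.csv SecSysEvt.l2t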

Step 2

Now that we have an output file (in my case SecSysEvt.l2t.csv) which contains the L2T output from the Security.evtx and System.evtx, we can start Grepping.

We’ll do this in two sections:

  1. Dealing with time change events in the Security Event log (this post).
  2. Dealing with time change events from the System event log (next post)

Security Event log

When a time change occurs on a Windows 7 or later system, Event ID 4616 is logged. See more about this event at Ultimate Windows Security.

So let’s get grepping:

grep Security\.evtx SecSysEvt.l2t.csv

This gives us the events in our L2T output which came from our Security.evtx file (ignoring events from the System.evtx for now). In my case I have 27,884 Security.evtx events.
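
(The count itself comes straight from grep’s -c flag: grep -c Security\.evtx SecSysEvt.l2t.csv)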

Next, we want to narrow it down to only Event ID 4616. The following should do the trick:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616"

After this, we clear out some unwanted 4616 events. In this case we are excluding events that were not caused by user action. Remember, we want to know if a user was messing around with the system time.

To accomplish this, we exclude events containing LOCAL SERVICE as well as S-1-5-18:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18"

Our output is now ready for us to only extract the columns we want. To do this we make use of awk. First up, we output only the xml section of the L2T output:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18" | awk -F"xml_string: " '{print $2}'

This gives us something like:

L2T xml Output

We now use awk to give us only the columns we are currently interested in. For this scenario, I’m only looking for the following columns:

  • Event ID
  • User SID Responsible for the change
  • User Profile Name
  • Computer Name
  • The process responsible for the change

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18" | awk -F"xml_string: " '{print $2}' | awk -F'[<,>]' '{print $9 "\t" $57 "\t" $61 "\t" $65 "\t" $85 }'

Using this, we get the following output:

Awk Columns

All that’s left now is some sorting and unique-ing:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18" | awk -F"xml_string: " '{print $2}' | awk -F'[<,>]' '{print $9 "\t" $57 "\t" $61 "\t" $65 "\t" $85 }' | sort | uniq -c | sort -n -r

Output

This gives us the following:

For this event log, there were 8 time changes resulting from user actions: 6 by SystemSettingsAdminFlows.exe and 2 by dllhost.exe.

From what I can see on my Win10 test system, SystemSettingsAdminFlows.exe is responsible for making system time changes when a user makes use of the “Adjust Date/Time” option from the taskbar. I’m doing some more testing with regards to when dllhost.exe fires on Windows 10. So far I haven’t been able to replicate it…

Remember, this is just a pointer or a flag that gets raised to let you know that it might be useful to have a deeper look at time change events on a system.

Lastly, this grep should work on Windows 7 Security event logs as well (I haven’t tested it on Win8). I ran it on a couple of test Win7 systems, and it was good enough to show that a specific application installed by a user was making regular time adjustments across these systems.

Next time, we’ll look at time change events in the System event log.

Finding Failed Logon Attempts With Log2Timeline While You’re Searching For Your FTK Dongle

I have recently been thinking through ideas for some quick and dirty initial processes one can do at the start of an investigation.

This would typically be whilst you’re doing one of the following:

  1. Waiting on full disk (including VSS) log2timeline processing to complete.
  2. Waiting on Axiom to run the ‘build connections’ module because you forgot to enable the option prior to the initial processing phase.
  3. Waiting on EnCase 8.07 to finish processing, although it’s been sitting at 100% for the last 2 hours.
  4. Trying to figure out where you last saw your FTK dongle.

This brings us to a New Blog Series:

The aim of this post (and hopefully this series) is to play around with things you can do at the beginning of an investigation while, for example, waiting for processing to complete. Specifically, things that could be of value to know at the beginning of an investigation.


And, that brings us to today’s post:

Finding failed logon events.

Identifying failed logon events in the Security event log of a system could mean a couple of things:

  1. Someone is attempting to brute force an account.
  2. <add a list of more possible reasons here>

The above extensive list provides good reason why it could be of value to have a quick squiz through a system’s Security Event logs for failed logon attempts.

As such, I wanted to know the following relating to failed logon events:

  • How many (if any) failed logon attempts were recorded in the system’s security event log.
  • Which accounts were attempted to log on with the most, as well as the logon types.
  • What were the top failed source IP addresses recorded.
  • What date(s) did the most failed logon attempts occur on.

Side note: The sample data I used for this post came from the image provided by Dave and Matt (The Forensic Lunch) as part of the MUS CTF. For more about the MUS CTF and the image, check here.

To answer these questions, here’s one quick and dirty way:

Step 1:

Process the Security event log with Log2Timeline (this took just over a minute to process 33,000 events from Security.evtx) :

$ log2timeline.py mus.sec.evtx.l2t securityevt/

Log2Timeline Output


Step 2:

Run psort across the output using the l2tcsv format (this took 30 seconds to run):

$ psort.py -o l2tcsv -w mus.sec.evtx.csv mus.sec.evtx.l2t

Psort Output


Step 3: Grep & Awk

This is where the fun starts. Because the output from running log2timeline / psort on a Security event log should have the same structure each time, the same commands should work. (I tested this with Security event logs from Server 2012, Windows 7 and Windows 10, and it seems to work on all the different outputs.)

This may appear ugly, but it works.

Grep & Awk Output

Total Failed Logons: grep "EventID>4625" mus.sec.evtx.csv | wc -l

Top Failed Accounts: grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"TargetUserName\">" '{print $2}' | awk -F"<" '{print $1}' | sort | uniq -c | sort -n -r | head

Top Failed Logon Types: grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"LogonType\">" '{print $2}' | awk -F"<" '{print $1}' | sort | uniq -c | sort -n -r | head

Top Failed IP Address Origins: grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"IpAddress\">" '{print $2}' | awk -F"<" '{print $1}' | sort | uniq -c | sort -n -r | head

Top Dates With Failed Logons: grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"TimeCreated SystemTime=\"" '{print $2}' | awk -F"T" '{print $1}' | sort | uniq -c | sort -n -r | head


And the end result:

Success.

We can now see that there were 612 failed Type 3 logon attempts, all on May 5th 2018. It also shows us that the Administrator account was the one most often attempted, as well as the top IP addresses the logon attempts came from.

All this in less than 5 minutes.

Highway To The Danger Zone.Identifier

Phill Moore recently did a write-up on some pretty cool changes made to the data being recorded within the Zone.Identifier Alternate Data Streams (ADS) for downloaded files.

Have a read here: https://thinkdfir.com/2018/06/17/zone-identifier-kmditemwherefroms/

Now, if you’re not going to read Phill’s blog and just opened this article because of your innate love for Tom Cruise and bad Top Gun puns, shame on you.

Son, before your ego starts writing checks your body can’t cash, let’s at least assume we all agree on the following:

A Zone.Identifier ADS is an extra piece of information stored with downloaded files. This is done to assist Windows in determining whether a file should be trusted or not. For example, an executable file downloaded from the internet will be treated with the necessary suspicion based on the zone it came from (i.e. the Internet).
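
If you’ve never looked at one of these, you can inspect a Zone.Identifier from a normal Windows command prompt on an NTFS volume. A quick sketch, with the file name being a placeholder:

dir /r somefile.zip
more < somefile.zip:Zone.Identifier

The first command lists any alternate data streams attached to the file; the second prints the contents of the Zone.Identifier stream.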

Phill’s testing has highlighted two additional fields that are being stored within the Zone.Identifier:

  • HostUrl
  • ReferrerUrl

This is a great source of information as it can assist in determining where (URL) a downloaded file originated from.

A bit of Googling revealed the following response to a Bugzilla report by a Windows Defender ATP team member regarding the addition of these fields in Windows 10:

This feature was added in Windows 10 release 1703 (build 15063).
The HostUrl and ReferrerUrl are set by Microsoft Edge and Google 
Chrome.
Edge also sets a HostIpAddress field.

It is used for protection purposes.
Specifically, Microsoft’s Windows Defender Advanced Threat 
Protection exposes this info to the SOC, who can then identify where 
attacks came from, which other downloads might be related, and 
respond/block accordingly.
I don't know which other products/tools use this feature.

Thanks,
Tomer
(from the Windows Defender Advanced Threat Protection team)

I haven’t seen the HostIpAddress field before, so I decided to run similar tests with three browsers, identical to those used by Phill:

    • Firefox 60.0.2 (64-bit)
    • Chrome Version 67.0.3396.87 (Official Build) (64-bit)
    • Microsoft Edge 42.17134.1.0

For my tests, I downloaded the file RegistryExplorer_RECmd.zip with each browser from the following URL:

  • https://ericzimmerman.github.io/Software/RegistryExplorer_RECmd.zip


Results:

Firefox behaved as expected with no additional fields added to the Zone.Identifier:

Firefox Zone.Identifier


Chrome added the ReferrerUrl and HostUrl as follows:

Chrome Zone.Identifier


In my case, Edge also added the ReferrerUrl and HostUrl:

Edge Zone.Identifier

This is interesting as it differs from Phill’s testing. Will compare notes to see if there’s a specific reason for this.


Archives, Zone.Identifiers & ReferrerUrls

Now, if you’re one of those analysts who won’t be happy unless you’re going Mach 2 with your hair on fire, you’ll like this:

If you use the built-in Windows “Extract All” option to extract the downloaded archives, you get a Zone.Identifier for each extracted file:

Zone.Identifiers in Extracted Files

Note: when extracting the same archive with 7zip, the Zone.Identifiers were not created for the extracted files.

In addition to the zones, the Zone.Identifier now also records, in the ReferrerUrl field, the path of the parent archive from which the extracted files originated:

Zone.Identifier in Extracted File Showing Parent

Not only are you now able to determine from which URL a downloaded file originated, you may also be able to track an extracted file back to its original archive.


Copying files to an external hard drive

“But Maverick” you interject, “what happens when the files are copied to an external hard drive?”

“Fear not Goose, the lovely thing about Zone.Identifiers are that they travel oh so well.”

Copying the downloaded zip to an NTFS-formatted external hard drive still kept the Zone.Identifier intact:

Zip Zone.Identifier on External HDD


The same was found for the Zone.Identifiers for the extracted files:

Zip Zone.Identifier for Extract Files on External HDD


Till next time…

Update [2018-06-19]

Welcome to Next Time.

Thanks to Paul Bryant (see comments below the post) we have more ‘clarity’ on when Edge will add a HostIpAddress field to downloaded files.

Saving the Streams.zip with Edge:

The following DOES NOT store a HostIpAddress:
1. Clicking on a file link to directly download the file.
2. Right-Clicking on a file link > Save Target As > And directly click save without changing the path.

The following stores a HostIpAddress:
1. Right-Clicking on a file link > Save Target As > Changing the target directory and saving the file.
2. Right-Clicking on a file link > Save Target As > Changing the target directory to something else, and then changing the target dir back to the original default folder.

Here is a sample of a Zone.Identifier containing a HostIpAddress for a file downloaded with Edge, where the target directory was changed a couple of times and then changed back to the Downloads dir:

So now, calculate how many users are on Windows 10, use Edge as their browser, and are “Right-Clicking, Save Target Assing, Change Dirring” when they save data.

That’s how often you’ll see the HostIpAddress field in a Zone.Identifier (that I know of).

Seems to be an Edge case, if you pardon the pun.

Parsing APFS with Axiom before the thing from Lost eats you

During the latter part of 2017, Apple introduced their APFS file system, which is being rolled out with macOS High Sierra.

The following section was taken from an Apple support article:

When you install macOS High Sierra on the Mac volume of a solid-state drive (SSD) or other all-flash storage device, that volume is automatically converted to APFS. Fusion Drives, traditional hard disk drives (HDDs), and non-Mac volumes aren’t converted. You can’t opt out of the transition to APFS.

Although there are a couple of articles floating around which show ways to ‘opt out’ of APFS, it is still likely that 99% of the High Sierra systems with solid-state drives you’re going to come across will have APFS running.

Now, picture this scenario:

You are stuck on an island with a forensic image of an APFS volume and a toolbox full of your favorite commercial forensic tools. Contained in the APFS volume is a backup of an iPhone 6s, which contains a WhatsApp message with the instructions on how to make one mean coconut Mojito. You need to access said message in order to make the Mojito before sunset. Should you fail, you’ll be forced to do manual USB device history analysis for 26 Windows 7 internet café PCs, after which you may or may not get eaten by that thing that was eating people in Lost.

So, your options:

  • BlackBag’s BlackLight — Yes, it works.
  • Autopsy — No support as of version 4.7.
  • AccessData FTK — No support as of version 6.4. Their online tech support noted that APFS support is planned for future releases, however no ETA yet.
  • Magnet Forensics Axiom — No support as of version 2.1.0.9727. Jad Saliba mentioned at the Magnet User Summit in Las Vegas (May 2018) that they’re currently working on it, but no ETA yet.
  • OpenText EnCase — Officially: Yes. Unofficially: Sort of. Although EnCase announced APFS support in version 8.07, I’ve dealt with two separate Macs where EnCase is refusing to parse the APFS volumes. I’ve put one of the images through a few tests. The image happily parses with BlackBag’s BlackLight and mounts with both Paragon‘s APFS mounter and Simon Gander’s APFS-Fuse library. OpenText tech support is currently looking into this.
  • X-Ways — No support in version 19.6; however, according to this tweet from Eric it should be coming soon:


Plan A: BlackBag

After your confidence grows while scrolling through the heaps of tweets about BlackBag being ‘the only end-to-end solution for APFS’, you realize that your 30-day trial license has just expired… As you were about to accept your fate and Google “sans usb profiling cheat sheet“, you find two articles from Mari DeGrazia on mounting APFS images:

As the daylight starts to fade and you try and remember how many episodes of Lost you actually watched before losing interest, you devise a new plan:


Plan B: Quick and dirty way to process APFS with Axiom and friends.

I was specifically looking for a way to get my APFS image parsed with Axiom.

The following approaches did not work:

Experiment 1:

Mount E01 with Arsenal Image Mounter > Mount resulting APFS partition with Paragon’s ‘APFS for Windows’ > Add files & folders in Axiom.

Result: It processed, but for some files Axiom wasn’t properly linking back to the actual source files to display their content. Not sure whose fault it is, but most likely something to do with the mounting of a mounted image.


Experiment 2:

Mount E01 with Arsenal Image Mounter > Mount APFS partition with Paragon’s ‘APFS for Windows’ > Create AD Image with FTK Imager > Process AD Image with Axiom.

Result: It processed, but again had issues with displaying actual content for some of the files processed. During the creation of the AD Image, FTK Imager encountered a large volume of files it claimed couldn’t be added to the logical image, again likely due to the various mountings.


Experiment 3:

Mount E01 in SIFT with ewfmount (libewf) > mount APFS partition with APFS-fuse > Create a tar of mounted data > Process tar with Axiom

Result: Again got a similar result where Axiom processed the data, but didn’t display actual content for some files.


At this stage most island-stricken forensicators would have given up and resigned themselves to a life of USBSTORs and Volume GUIDs. But luckily, you’re not most forensicators and you try one more way:

Experiment 4:

  1. Mount E01 in SIFT with ewfmount (libewf)
  2. Mount APFS partition with APFS-fuse
  3. Create an empty DD image, give it a file system and copy the mounted APFS data into the new DD image (a rough sketch of the whole chain follows below). For a step-by-step walk-through of basically creating a DD image from files and folders, check out Andy Joyce’s 2009 post: http://dougee652.blogspot.com/2009/06/logical-evidence-collection-stored-in.html
  4. Process DD image with Axiom.
  5. Success and Mojitos.
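
In rough shell terms, the whole chain looks something like this. It’s a sketch, not a recipe: image names, sizes and mount points are my assumptions, and if the APFS container doesn’t start at the beginning of the device you’ll first need to loop-mount the partition at the correct offset.

# 1. Expose the E01 as a raw device
mkdir /mnt/ewf
ewfmount mac_image.E01 /mnt/ewf

# 2. Mount the APFS container with apfs-fuse
mkdir /mnt/apfs
apfs-fuse /mnt/ewf/ewf1 /mnt/apfs

# 3. Create an empty DD image, drop a file system on it and copy the data in
dd if=/dev/zero of=apfs_copy.dd bs=1M count=40960
mkfs.ntfs -F -Q apfs_copy.dd
mkdir /mnt/dd
mount -o loop apfs_copy.dd /mnt/dd
rsync -a /mnt/apfs/ /mnt/dd/
umount /mnt/dd

After this, apfs_copy.dd can be added to Axiom as an image.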

Axiom was happy to process the DD, as well as the iPhone backup contained on the APFS volume, in one go.

And yes, copying the mounted data to a DD container will update the creation dates of the files. If this makes you feel uneasy, remember, you also just used an ‘experimental’ driver to mount an APFS volume.

At least the thing from Lost didn’t eat you… #winning