Detecting Time Changes with L2T (Ain’t Nobody Got Time For That)

Every good blog post about time issues in forensics needs a theme song.

Today’s theme song is “Ain’t Nobody Got Time For That” from the local band Rubber Duc:

Having a theme song, and more importantly, embedding the YouTube video for said theme song in your blog post, serves the following two purposes:

  1. It keeps the reader here for 3 minutes and 18 seconds (when viewing it embedded on this page), which will make me and my post analytics think they actually spent time reading through the entire article.
  2. It gets a song stuck in the reader’s head, ideal for when you go back to writing that report you’ve been putting off all week.

Now that we’ve got that out of the way, let’s get down to the business of the day:

Identifying time changes in Windows event logs with L2T:

As you may recall from my previous post, the aim of this series is to play around with quick things you can do at the beginning of an investigation, while, for example, waiting for processing to complete. Specifically, those ‘nice to know’ things that take only a couple of minutes to check…

Time changes on a system can make a simple investigation quite complex very quickly. A typical example is a user backdating a system before deleting or creating files.

The following steps should be enough to give you a quick view of user-initiated time changes on a system. Remember, this is only to get a high-level view, just enough to let you know whether you need to dig deeper.

Let’s start:

Step 1

First off, we process only the Security and System event logs with Log2Timeline, followed by psort-ing the output using the l2tcsv output format. The reason for looking at both the Security and System event logs is that time change events are recorded in both. The Security event log is often quite busy, so chances are that historical events will get overwritten a lot quicker there than in the System event log. My current Security event log has 30,000 entries, while System is only sitting at 10,000.
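As a sketch, the two commands for this step look something like the following. The folder and output names are placeholders, and depending on how plaso is installed the tools may be invoked as log2timeline.py / psort.py or via a wrapper:

```shell
# Parse only the exported Security.evtx and System.evtx (in evtx/, a hypothetical
# folder) into a plaso storage file.
log2timeline.py SecSysEvt.l2t evtx/

# Sort the storage file and write it out in the l2tcsv format for grepping.
psort.py -o l2tcsv -w SecSysEvt.l2t.csv SecSysEvt.l2t
```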

Step 2

Now that we have an output file (in my case SecSysEvt.l2t.csv) containing the L2T output from Security.evtx and System.evtx, we can start grepping.

We’ll do this in two sections:

  1. Dealing with time change events in the Security event log (this post).
  2. Dealing with time change events from the System event log (next post).

Security Event log

When a time change occurs on Windows 7 and later, Event ID 4616 is logged. See more about this event at Ultimate Windows Security.

So let’s get grepping:

grep Security\.evtx SecSysEvt.l2t.csv

This gives us the events in our L2T output which came from our Security.evtx file (ignoring events from System.evtx for now). In my case I have 27,884 Security.evtx events.

Next, we want to narrow it down to only Event ID 4616. The following should do the trick:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616"

After this, we clear out some unwanted 4616 events. In this case we are excluding events that were not caused by user action. Remember, we want to know if a user was messing around with the system time.

To accomplish this, we exclude events containing LOCAL SERVICE as well as S-1-5-18:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18"
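If you want to convince yourself that the backslash-pipe alternation in grep -v behaves as expected, here's a quick synthetic check (the sample strings are invented; only the last one should survive the filter):

```shell
# Three fake SubjectUserName fragments; grep -v drops the service accounts.
printf '%s\n' \
  'SubjectUserName">LOCAL SERVICE' \
  'SubjectUserName">S-1-5-18' \
  'SubjectUserName">jsmith' \
  | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18"
```

Only the jsmith line comes out the other side. Note that \| is GNU grep's basic-regex alternation; with grep -E you'd write a plain | instead.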

Our output is now ready for us to extract only the columns we want. To do this we make use of awk. First up, we output only the XML section of the L2T output:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18" | awk -F"xml_string: " '{print $2}'

This gives us something like:

L2T xml Output

We now use awk to give us only the columns we are currently interested in. For this scenario, I’m only looking at the following columns:

  • Event ID
  • User SID responsible for the change
  • User profile name
  • Computer name
  • The process responsible for the change

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18" | awk -F"xml_string: " '{print $2}' | awk -F'[<,>]' '{print $9 "\t" $57 "\t" $61 "\t" $65 "\t" $85 }'

Using this, we get the following output:

Awk Columns
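If you're wondering where column numbers like $9 and $57 come from, applying the same '[<,>]' field separator to a tiny synthetic snippet (not real 4616 data) shows how awk counts the fields:

```shell
# Split on '<', ',' and '>' and print each non-empty field with its number.
echo '<Event><System><EventID>4616</EventID></System></Event>' \
  | awk -F'[<,>]' '{for (i = 1; i <= NF; i++) if ($i != "") print i, $i}'
```

On this snippet the EventID value lands in $7; counting the same way across the much longer real 4616 xml_string is what puts the fields of interest at $9, $57, $61, $65 and $85. This also means the column numbers are fragile: a slightly different xml_string layout will shift them.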

All that’s left now is some sorting and unique-ing:

grep Security\.evtx SecSysEvt.l2t.csv | grep "EventID>4616" | grep -v "SubjectUserName\">LOCAL SERVICE\|S-1-5-18" | awk -F"xml_string: " '{print $2}' | awk -F'[<,>]' '{print $9 "\t" $57 "\t" $61 "\t" $65 "\t" $85 }' | sort | uniq -c | sort -n -r

This gives us the following:

For this event log, there were eight time changes resulting from user actions: six by SystemSettingsAdminFlows.exe and two by dllhost.exe.

From what I can see on my Win10 test system, SystemSettingsAdminFlows.exe is responsible for making system time changes when a user makes use of the “Adjust Date\Time” option from the taskbar. I’m doing some more testing on when dllhost.exe fires on Windows 10; so far I haven’t been able to replicate it…

Remember, this is just a pointer or a flag that gets raised to let you know that it might be useful to have a deeper look at time change events on a system.

Lastly, this grep should work on Windows 7 Security event logs as well (I haven’t tested it on Win8). I ran it on a couple of test Win7 systems, and it was good enough to show that a specific application installed by a user was making regular time adjustments across these systems.

Next time, we’ll look at time change events in the System event log.

Finding Failed Logon Attempts With Log2Timeline While You’re Searching For Your FTK Dongle

I have recently been thinking through ideas for some quick and dirty initial processes one can do at the start of an investigation.

This would typically be whilst you’re doing one of the following:

  1. Waiting on full disk (including VSS) log2timeline processing to complete.
  2. Waiting on Axiom to run the ‘build connections’ module because you forgot to enable the option prior to the initial processing phase.
  3. Waiting on EnCase 8.07 to finish processing, although it’s been sitting at 100% for the last 2 hours.
  4. Trying to figure out where you last saw your FTK dongle.

This brings us to a New Blog Series:

The aim of this post (and hopefully this series) is to play around with things you can do at the beginning of an investigation, while for example, waiting for processing to complete. Specifically things that could be of value to know at the beginning of an investigation.


And, that brings us to today’s post:

Finding failed logon events.

Identifying failed logon events in the Security event log of a system could mean a couple of things:

  1. Someone is attempting to brute force an account.
  2. <add a list of more possible reasons here>

The above extensive list provides good reason why it could be of value to have a quick squiz through a system’s Security Event logs for failed logon attempts.

As such, I wanted to know the following relating to failed logon events:

  • How many (if any) failed logon attempts were recorded in the system’s Security event log.
  • Which accounts were most often used in failed logon attempts, as well as the logon types.
  • What the top failed source IP addresses were.
  • What date(s) the most failed logon attempts occurred on.

Side note: the sample data I used for this post came from the image provided by Dave and Matt (The Forensic Lunch) as part of the MUS CTF. For more about the MUS CTF and the image, check here.

To answer these questions, here’s one quick and dirty way:

Step 1:

Process the Security event log with Log2Timeline (this took just over a minute to process 33,000 events from Security.evtx):

$ log2timeline.py mus.sec.evtx.l2t securityevt/

Log2Timeline Output


Step 2:

Run psort across the output using the l2tcsv format (this took 30 seconds to run):

$ psort.py -o l2tcsv -w mus.sec.evtx.csv mus.sec.evtx.l2t

Psort Output


Step 3: Grep & Awk

This is where the fun starts. Because the output from running log2timeline / psort on a Security event log should have the same structure each time, the same commands should work. (I tested this with Security event logs from Server 2012, Windows 7 and Windows 10, and they seem to work on all the different outputs.)

This may appear ugly, but it works.

Grep & Awk Output

Total Failed Logons: grep "EventID>4625" mus.sec.evtx.csv | wc -l

Top Failed Accounts: grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"TargetUserName\">" '{print $2}' | awk -F"<" '{print $1}' | sort | uniq -c | sort -n -r | head

Top Failed Logon Types: grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"LogonType\">" '{print $2}' | awk -F"<" '{print $1}' | sort | uniq -c | sort -n -r | head

Top Failed IP Address Origins:
grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"IpAddress\">" '{print $2}' | awk -F"<" '{print $1}' | sort | uniq -c | sort -n -r | head

Top Dates With Failed Logons: grep "EventID>4625" mus.sec.evtx.csv | awk -F"xml_string: " '{print $2}' | awk -F"TimeCreated SystemTime=\"" '{print $2}' | awk -F"T" '{print $1}' | sort | uniq -c | sort -n -r | head
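To sanity-check the extraction logic, here is the ‘Top Failed Accounts’ pipeline run against a tiny synthetic file standing in for mus.sec.evtx.csv (the field names match real 4625 records; the rows and values are made up):

```shell
# Three fake l2tcsv rows; only the xml_string tail matters to the pipeline.
cat > sample.csv <<'EOF'
05/05/2018,...,xml_string: <Data Name="TargetUserName">Administrator</Data><Data Name="LogonType">3</Data>
05/05/2018,...,xml_string: <Data Name="TargetUserName">Administrator</Data><Data Name="LogonType">3</Data>
05/05/2018,...,xml_string: <Data Name="TargetUserName">root</Data><Data Name="LogonType">3</Data>
EOF

# Same awk chain as the 'Top Failed Accounts' one-liner above.
awk -F'xml_string: ' '{print $2}' sample.csv \
  | awk -F'TargetUserName">' '{print $2}' \
  | awk -F'<' '{print $1}' \
  | sort | uniq -c | sort -n -r
```

This prints Administrator with a count of 2 and root with a count of 1, which is exactly the kind of ranked summary the real commands produce.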


And the end result:


We can now see that there were 612 failed Type 3 logon attempts, all on May 5th 2018. It also shows us that the Administrator account was the one most often attempted, as well as the top IP addresses the logon attempts came from.

All this in less than 5 minutes.

Highway To The Danger Zone.Identifier

Phill Moore recently did a write-up on some pretty cool changes made to the data being recorded within the Zone.Identifier Alternate Data Streams (ADS) for downloaded files.

Have a read here:

Now, if you’re not going to read Phill’s blog and just opened this article because of your innate love for Tom Cruise and bad Top Gun puns, shame on you.

Son, before your ego starts writing checks your body can’t cash, let’s at least assume we all agree on the following:

A Zone.Identifier ADS is an extra piece of information stored with downloaded files. It helps Windows determine whether a file should be trusted. For example, an executable file downloaded from the internet will be treated with the necessary suspicion based on the zone it came from (i.e. the Internet).

Phill’s testing has highlighted two additional fields that are being stored within the Zone.Identifier:

  • HostUrl
  • ReferrerUrl

This is a great source of information as it can assist in determining where (URL) a downloaded file originated from.
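For reference, a Zone.Identifier stream carrying these fields looks something like this (the URLs are invented placeholders, not values from Phill’s or my tests):

```
[ZoneTransfer]
ZoneId=3
ReferrerUrl=https://example.com/downloads/
HostUrl=https://example.com/files/tools.zip
```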

A bit of Googling revealed the following response, from a Windows Defender ATP team member, to a Bugzilla report regarding the addition of these fields in Windows 10:

This feature was added in Windows 10 release 1703 (build 15063).
The HostUrl and ReferrerUrl are set by Microsoft Edge and Google
Chrome. Edge also sets a HostIpAddress field.

It is used for protection purposes.
Specifically, Microsoft’s Windows Defender Advanced Threat 
Protection exposes this info to the SOC, who can then identify where 
attacks came from, which other downloads might be related, and 
respond/block accordingly.
I don't know which other products/tools use this feature.

(from the Windows Defender Advanced Threat Protection team)

I haven’t seen the HostIpAddress field before, so I decided to run similar tests with three browsers, identical to those used by Phill:

    • Firefox 60.0.2 (64-bit)
    • Chrome Version 67.0.3396.87 (Official Build) (64-bit)
    • Microsoft Edge 42.17134.1.0

For my tests, I downloaded the file with each browser from the following URL:




Firefox behaved as expected with no additional fields added to the Zone.Identifier:

Firefox Zone.Identifier


Chrome added the ReferrerUrl and HostUrl as follows:

Chrome Zone.Identifier


In my case, Edge also added the ReferrerUrl and HostUrl:

Edge Zone.Identifier

This is interesting, as it differs from Phill’s testing. I’ll compare notes with him to see if there’s a specific reason for this.


Archives, Zone.Identifiers & ReferrerUrls

Now, if you’re one of those analysts who won’t be happy unless you’re going Mach 2 with your hair on fire, you’ll like this:

If you use the built-in Windows “Extract All” option to extract the downloaded archives, you get a Zone.Identifier for each extracted file:

Zone.Identifiers in Extracted Files

Note: when testing the same by extracting the archive with 7-Zip, it did not create Zone.Identifiers for the extracted files.

In addition to the zone, the Zone.Identifier now records, in the ReferrerUrl field, the path of the parent archive from which the extracted files originated:

Zone.Identifier in Extracted File Showing Parent

Not only are you now able to determine from which URL a downloaded file originated, you may also be able to track an extracted file back to its original archive.
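In other words, an extracted file’s Zone.Identifier can look something like this (the archive path is a hypothetical example):

```
[ZoneTransfer]
ZoneId=3
ReferrerUrl=C:\Users\demo\Downloads\tools.zip
```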


Copying files to an external hard drive

“But Maverick” you interject, “what happens when the files are copied to an external hard drive?”

“Fear not Goose, the lovely thing about Zone.Identifiers are that they travel oh so well.”

Copying the downloaded zip to an NTFS-formatted external hard drive kept the Zone.Identifier intact:

Zip Zone.Identifier on External HDD


The same was found for the Zone.Identifiers for the extracted files:

Zip Zone.Identifier for Extracted Files on External HDD


Till next time…

Update [2018-06-19]

Welcome to Next Time.

Thanks to Paul Bryant (see the comments below the post), we have more ‘clarity’ on when Edge will add a HostIPAddress field to downloaded files.

Saving the file with Edge:

The following DOES NOT store a HostIPAddress:
1. Clicking on a file link to directly download the file.
2. Right-clicking on a file link > Save Target As > clicking Save directly without changing the path.

The following DOES store a HostIPAddress:
1. Right-clicking on a file link > Save Target As > changing the target directory and saving the file.
2. Right-clicking on a file link > Save Target As > changing the target directory to something else, then changing it back to the original default folder.

Here is a sample of a Zone.Identifier containing a HostIpAddress for a file downloaded with Edge, where the target directory was changed a couple of times and then changed back to the Downloads dir:

So now, calculate how many users are on Windows 10, use Edge as their browser, and are “Right-Clicking, Save Target Assing, Change Dirring” when they save data.

That’s how often you’ll see the HostIPAddress field in a Zone.Identifier (that I know of).

Seems to be an Edge case, if you pardon the pun.

Parsing APFS with Axiom before the thing from Lost eats you

During the latter part of 2017, Apple introduced their APFS file system, which is being rolled out with macOS High Sierra.

The following section was taken from an Apple support article:

When you install macOS High Sierra on the Mac volume of a solid-state drive (SSD) or other all-flash storage device, that volume is automatically converted to APFS. Fusion Drives, traditional hard disk drives (HDDs), and non-Mac volumes aren’t converted. You can’t opt out of the transition to APFS.

Although there are a couple of articles floating around which show ways to ‘opt out’ of APFS, it is still likely that 99% of the High Sierra systems with solid-state drives you’re going to come across will be running APFS.

Now, picture this scenario:

You are stuck on an island with a forensic image of an APFS volume and a toolbox full of your favorite commercial forensic tools. Contained in the APFS volume is a backup of an iPhone 6s, which contains a WhatsApp message with the instructions on how to make one mean coconut Mojito. You need to access said message in order to make the Mojito before sunset. Should you fail, you’ll be forced to do manual USB device history analysis for 26 Windows 7 internet café PCs, after which, you may or may not get eaten by that thing that was eating people in Lost.

So, your options:

  • Blackbag’s BlackLight — Yes, it works.
  • Autopsy — No support as of version 4.7.
  • AccessData FTK — No support as of version 6.4. Their online tech support noted that APFS support is planned for future releases, but there’s no ETA yet.
  • Magnet Forensics Axiom — No support in the current version. Jad Saliba mentioned at the Magnet User Summit in Las Vegas (May 2018) that they’re currently working on it, but no ETA yet.
  • OpenText EnCase — Officially: Yes, Unofficially: Sort of. Although EnCase announced APFS support in version 8.07, I’ve dealt with two separate Macs where EnCase is refusing to parse the APFS volumes. I’ve put one of the images through a few tests. The image happily parses with Blackbag’s Blacklight and mounts with both Paragon‘s APFS mounter and Simon Gander’s APFS-Fuse library. OpenText Tech support is currently looking into this.
  • X-Ways — No support in version 19.6; however, according to this tweet from Eric, it should be coming soon:


Plan A: Blackbag

As your confidence grows while scrolling through the heaps of tweets about Blackbag being ‘the only end-to-end solution for APFS’, you realize that your 30-day trial license has just expired… As you are about to accept your fate and Google “sans usb profiling cheat sheet“, you find two articles from Mari DeGrazia on mounting APFS images:

As the daylight starts to fade and you try to remember how many episodes of Lost you actually watched before losing interest, you devise a new plan:


Plan B: Quick and dirty way to process APFS with Axiom and friends.

I was specifically looking for a way to get my APFS image parsed with Axiom.

The following approaches did not work:

Experiment 1:

Mount E01 with Arsenal Image Mounter > Mount resulting APFS partition with Paragon’s ‘APFS for Windows’ > Add files & folders in Axiom.

Result: It processed, but for some files Axiom wasn’t properly linking back to the actual source files to display their content. Not sure whose fault that is, but it’s most likely something to do with mounting an already-mounted image.


Experiment 2:

Mount E01 with Arsenal Image Mounter > Mount APFS partition with Paragon’s ‘APFS for Windows’ > Create AD Image with FTK Imager > Process AD Image with Axiom.

Result: It processed, but again had issues displaying the actual content of some of the processed files. During the creation of the AD image, FTK Imager encountered a large volume of files it claimed couldn’t be added to the logical image, again likely due to the various mountings.


Experiment 3:

Mount E01 in SIFT with ewfmount (libewf) > mount APFS partition with APFS-fuse > Create a tar of mounted data > Process tar with Axiom

Result: Again, a similar outcome: Axiom processed the data but didn’t display actual content for some files.


At this stage, most island-stricken forensicators would have given up and resigned themselves to a life of USBSTORs and Volume GUIDs. But luckily, you’re not most forensicators, and you try one more way:

Experiment 4:

  1. Mount E01 in SIFT with ewfmount (libewf)
  2. Mount APFS partition with APFS-fuse
  3. Create an empty DD image, give it a file system, and copy the mounted APFS data into the new DD image. For a step-by-step walk-through of creating a DD image from files and folders, check out Andy Joyce’s 2009 post:
  4. Process DD image with Axiom.
  5. Success and Mojitos.

Axiom was happy to process the DD image, as well as the iPhone backup contained on the APFS volume, in one go.
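A minimal sketch of the container-building step, assuming the APFS data is already mounted read-only at /mnt/apfs (the size, paths and choice of ext4 are all placeholders; use whatever your tools are happy with):

```shell
# 1. Create an empty (sparse) 2 GiB DD image; size it to fit your mounted data.
dd if=/dev/zero of=apfs_copy.dd bs=1M count=0 seek=2048

# 2-4. The remaining steps need root, so they are shown as comments:
#   mkfs.ext4 -F apfs_copy.dd               # give the container a file system
#   sudo mount -o loop apfs_copy.dd /mnt/dd
#   sudo cp -a /mnt/apfs/. /mnt/dd/         # copy the mounted APFS data in
#   sudo umount /mnt/dd                     # apfs_copy.dd is now ready for Axiom
```

The dd invocation with count=0 and seek=2048 just extends the file to 2 GiB without writing any data, so the container starts out sparse and cheap to create.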

And yes, copying the mounted data to a DD container will update the creation dates of the files. If this makes you feel uneasy, remember, you also just used an ‘experimental’ driver to mount an APFS volume.

At least the thing from Lost didn’t eat you… #winning