[Update 2019-03-10] I’ve added the version numbers of Axiom, Encase and FTK used. Also added details about EnCase Firefox support update coming in next release.
So, last night, after watching the Forensic Dinner (yeah yeah it’s the Forensic Lunch, but hello time zones) I was busy with some testing for #ForensicMania.
Dealing with a simple question ‘What was searched for in Youtube on xx date’, I came to a bit of a speed bump in EnCase. In short, I couldn’t get to the answer in EnCase for Youtube web histories viewed in Firefox. It was late, so I wasn’t sure if I was to blame, or EnCase. With this, I stopped with the #ForensicMania stuff and thought, let’s do some targeted testing.
The next morning (today), I decided to do a quick and simple test:
Conduct a few searches in Chrome and Firefox
Parse the web histories with Axiom, EnCase and FTK
Compare the results
I fired up Chrome and Firefox, and made sure they were up to date:
With last night’s Forensic Lunch still fresh in my mind, I Googled the following between 11:00 and 12:00 on 2019-03-09.
The same searches were done with Chrome first, and then with Firefox.
Google search: “Is lee whitfield brittish?” Result opened: “https://www.sans.org/instructors/lee-whitfield”
Google search: “How do you spell british?” Result opened: “https://en.oxforddictionaries.com/spelling/british-and-spelling”
Google search: “Where did Matt get the cool blue sunglasses?” Result opened: https://www.menshealth.com/style/a26133544/matthew-mcconaughey-blue-colored-sunglasses/
Google search: “Why is no one having lunch on the Forensic Lunch?” Result opened: https://www.youtube.com/user/LearnForensics/videos
Youtube search: “drummer at the wrong gig” Video played: https://www.youtube.com/watch?v=ItZyaOlrb7E
And then played this one from the Up Next bar: https://www.youtube.com/watch?v=RvatDKpc0SU
Google search: “Can you nominate yourself in the Forensic 4Cast awards?” Result opened: https://www.magnetforensics.com/blog/nominate-magnet-forensics-years-forensic-4cast-awards/
Following this, I created a logical image of the Chrome and Firefox histories on my laptop with EnCase. The total size for the histories was 3GB. (Yes, lots of historic stuff included there as well).
So the testing is pretty straightforward: Can I get to the above listed searches and web histories in Axiom, FTK and EnCase? Let’s see:
Parsing the logical image in Axiom gave us the following for ‘Web related’ artifacts:
Result: Great Success
Same thing, processed the image and got the following from the ‘Internet’ tab:
Again: Great Success
Now, let’s fire up the ‘2019 SC Magazine Winner’ for ‘Best Computer Forensic Solution’…
After processing the image with EnCase, we hobble on over to the ‘Artifact’ tab and open the ‘Internet Records’ section.
First up, Chrome histories:
Great, it works as expected.
Next up, Firefox (The browser with 840,689,200 active users in the past 365 days)
And this is where we ran into trouble: EnCase was able to parse Firefox Cookies and some cache files, but for the life of me I couldn’t get to any actual browsing histories.
I suspect that, as it’s shown on the processing window, EnCase only supports Firefox up until v51.0.0. The current Firefox version is v65.
Firefox version 51.0.0 was released to channel users on January 24th 2017. That is the same month when Ed Sheeran released his single “Shape of You”. (And now you can’t unsee the singing dentist guy covering the song)
What I’m trying to say is that Firefox v51 is old.
I’ve logged a query with OpenText about this and will update this post if and when I get feedback. (Really hoping this is something I’m doing wrong, but we’ll see.)
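In the meantime, if a tool draws a blank, you can always go straight to the source. Firefox stores its browsing history in the plain SQLite file places.sqlite inside the user’s profile, with URLs and titles in moz_places and individual visits (timestamped in microseconds since the Unix epoch) in moz_historyvisits. A minimal Python sketch of pulling the history out of a copied-out places.sqlite yourself:

```python
import sqlite3

def firefox_history(places_path):
    """Pull visit times, URLs and titles straight from a copied-out places.sqlite.

    moz_places holds URLs/titles; moz_historyvisits holds each visit, with
    visit_date stored as microseconds since the Unix epoch."""
    con = sqlite3.connect(places_path)
    try:
        return con.execute("""
            SELECT datetime(v.visit_date / 1000000, 'unixepoch') AS visited_utc,
                   p.url, p.title
            FROM moz_historyvisits v
            JOIN moz_places p ON p.id = v.place_id
            ORDER BY v.visit_date
        """).fetchall()
    finally:
        con.close()
```

Run against an exported profile, this would list every visit in order, Youtube searches included, regardless of what your forensic suite makes of the file.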
[Update 2019-03-10: EnCase v8.09, set for release in April, is said to have updated Firefox support]
What’s the point of this post?
Test stuff. If something doesn’t look right, test it.
You don’t need test images to test your tools. If you have a laptop or a mobile phone, then you have test data.
Don’t assume stuff. If my results above are correct, there’s a good chance you could have missed crucial Firefox data if you were only relying on EnCase.
If I’m wrong, then at least I’ll hopefully know pretty soon how to get EnCase to parse Firefox histories correctly… and someone else might learn something too.
Welcome to Forensic Mania 2019 – Episode 1. If you’re new to #ForensicMania, catch the full lowdown here.
To recap, we are testing the latest versions of four of the big commercial forensic tools against the MUS2018 CTF image.
Side note_Following my intro post, promises were made by certain Magnet folk (you can run but you can’t Hyde). So I reprocessed with the newly released version of Axiom, v2.10. If said promises aren’t kept, we might need to roll back to version 1.0.9 just for fun.
Today we’ll be running through processing the MUS forensic image with the four tools.
Analysis Workstation Details
For these tests, we will be using a Dell workstation, with the following specs:
Intel Xeon Gold 6136 CPU.
Windows 10 Pro.
OS Drive: 2.5″ AData SSD.
Case directories and the MUS2018 image file were located on separate Samsung M.2 SSDs.
How does the scoring work
The scoring for this section kept the adjudication committee deadlocked in meetings for weeks, grappling with the question: “How do you score forensic tools on processing, in a fair manner?”. After a few heated arguments, the committee realised that this was not the NIST Computer Forensics Tool Testing Program, but a blog. With that pressure off, they created a very simple scoring metric.
First, to get everyone on the same page, consider the following: Say MasterChef Australia is having a pressure test, where each of the Top 25 need to bake a lemon meringue tart. Best tart wins an immunity pin.
Being the first contestant to separate your egg yolks from the whites is pretty cool, might even get some applause from the gantry. But, the proof will always be in the pudding, which is when you start whisking your whites for the meringue. If you did a messy job during the separation, you ain’t going to see firm glossy peaks forming, no matter how hard you whisk.
This then is typically where Maggie Beer and George come walking over to your bench and drop a comment like “a good meringue is hard to beat”. You get the point.
The Scoring System
In this round, the tools will be judged in two categories, each with 5 points up for grabs. These two categories are:
1_ Processing Progress Indication. We’ll be looking at how well the tool does at providing accurate and useful feedback during processing. “Does it matter?” you may ask… Well, it is the year of our Lord 2019. I can track the Uber Eats guy on my phone until he gets to my door. Similarly, I expect a forensic tool to at least provide some progress indication, other than just “go away, I’m still busy”.
2_ Time to Completion. Yes, the big one. Pretty straightforward: how long did it take to complete the processing task?
Points will be awarded in the form of limited edition (and much coveted across the industry) #ForensicMania challenge coins:
Side note_I initially planned on putting a bunch more categories in adjudicating the processing phase (things like how customizable are the processing options, ease of use, can it make waffles etc) but it got a bit too complex and subjective. These tools have fairly different approaches to processing data, so let’s leave the nitpicking for next week when we start analyzing data.
This means there is a total of 10 points up for grabs in Episode 1.
Setting up processing
In order to keep these posts within a reasonable readable length, I’m not going to delve into each granular step that was followed. For each tool, I’ve provided the main points of what was selected in processing, as well as accompanying screenshots.
Full Searches on partitions, Unpartitioned space search on the unpartitioned space of the drive.
Keyword Search Types: Artifacts. Note: Axiom does not have the functionality to do a full text index of the entire drive’s contents, but only indexes known artifacts.
Searching of archives and mobile backups.
Hashing (MD5 and SHA1). Limited to files smaller than 500MB.
Enabled processing of the default custom file types.
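For reference, the size-capped hashing option above boils down to something like this minimal Python sketch. This is my own illustration of the idea, not Axiom’s implementation:

```python
import hashlib
from pathlib import Path

SIZE_CAP = 500 * 1024 * 1024  # mirrors the 'files smaller than 500MB' option

def hash_file(path):
    """Return MD5 and SHA1 hex digests for a file, or None if it's over the cap."""
    path = Path(path)
    if path.stat().st_size > SIZE_CAP:
        return None  # skipped, as per the processing option
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):  # 1MB chunks
            md5.update(chunk)
            sha1.update(chunk)
    return {"md5": md5.hexdigest(), "sha1": sha1.hexdigest()}
```

The cap is there for a reason: hashing a handful of multi-gigabyte files can dominate the whole processing run.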
All computer artifacts were selected
File Signature Analysis
Hashing (MD5 and SHA1)
File Carving: All available file types were selected
Advanced Options: All available options were selected (see screenshots)
File Signature Analysis
Hash Analysis (MD5 & SHA1)
Expand Compound Files
Find Internet Artifacts
Index text and Metadata
System Info Parser (All artifacts)
File Carver (All predefined file types, Only in Unallocated and Slack)
Windows Event Log Parser
Windows Artifact Parser (Including Search Unallocated)
For FTK, I used their built-in ‘Forensics’ processing profile, but tweaked it a bit.
Hashing (MD5 & SHA1)
Expand all available compound file types
Flag Bad Extensions
Search Text Index
Thumbnails for Graphics
Data Carving (Carving for all available file types)
Process Internet Browser History for Visualization
Generate System Information
To give each tool a fair chance, the MUS image was processed twice with each.
Results: Processing Progress Indication.
Here are the results for each tool’s ability to provide the user with adequate feedback regarding what is being processed:
Axiom’s processing window is quite easy to make sense of. It shows which evidence source is currently being processed (partition specific), as well as which ‘search definition’ it’s currently on. During the testing, the percentage progress indicators also seemed reliable.
In the category of “Processing Progress Indication”, the adjudication committee scored Axiom: 5 out of 5.
BlackLight also has a great granular processing feedback window. For each partition, it shows what it’s busy processing as well as progress indicators. These were deemed reliable during the tests.
In the category of “Processing Progress Indication”, the adjudication committee scored Blacklight: 5 out of 5
EnCase’s processing window seems a bit all over the show. It reads more like something you’d look at for diagnostic info than for processing progress, and it was difficult to gauge what it was actually busy with. It does have a progress indicator showing a ‘percentage complete’ value; however, this was quite unreliable. When processing the MUS image, it hit 99% complete quite quickly and then continued processing for another hour at 99% before completing. This happened with both tests. I again processed the same image on a different workstation and got similar results.
In the category of “Processing Progress Indication”, the adjudication committee scored EnCase: 3 out of 5.
FTK’s processing window is quite straightforward. Perhaps too much so. It does have an overall progress bar, although not entirely accurate, and shows which evidence item (e01) it’s currently processing. However, because you have no idea what it’s actually busy processing, it remains a waiting game to see how many files it discovers, processes and indexes. And once you think it’s done, you get a surprise with a couple of hours of “Database Optimization”.
In the category of “Processing Progress Indication”, the adjudication committee scored FTK: 3 out of 5.
Results: Time To Completion.
These are pretty straightforward. How long did it take to process the MUS image with the above noted processing settings?
Axiom took 52 minutes and 31 seconds to process the MUS image. Following this, the ‘building connections’ process took another 17 minutes and 25 seconds.
This gave Axiom a total of 1 hour, 9 minutes and 56 seconds.
BlackLight took 1 hour flat to process the image. Following this, the option was available to carve the Pagefile for various file types. This added another 14 minutes and 30 seconds.
This gave BlackLight a total of 1 hour, 14 minutes and 30 seconds.
EnCase took 1 hour, 23 minutes and 25 seconds.
No additional processing required, all jobs were completed in one go.
FTK took 59 minutes and 9 seconds to process and index the image. That’s faster than all the others… But, before you celebrate: following the processing, FTK kicked off a “Database Optimization” process. This took another 2 hours and 17 minutes! Although it’s enabled by default, you can switch off this process in FTK’s database settings. However, according to the FTK Knowledge Base, “Database maintenance is required to prevent poor performance and can provide recovery options in case of failures.” Seems like it’s something you’d rather want to run on your case.
This gave FTK a total of 3 hours, 12 minutes and 9 seconds.
Let’s dish out some coins:
For winning the time challenge, Axiom gets 5/5
Not too much separated BlackBag and EnCase from Axiom; both get 4/5
And, bringing up the rear, taking almost 3 times as long as the others, FTK with 2/5
Before we look at the totals for this week, here is the result of the poll from last week:
Pretty much in line with what we saw this week…
Here’s your scoreboard after S01E01 of #ForensicMania
Tune in next week to see if Axiom can keep its narrow lead, whether BlackLight knows what to do with a Windows image and if FTK can pick itself up by its dongles. We’ll start with analyzing the MUS image, so stay tuned for all the drama, first and only on The Swanepoel Method.
Side note_It is still early days. Don’t go burning (or buying) any dongles after this post alone. The proof will be in the analysis capabilities of these tools, so check back next week.
I’ve long been wanting to publish comparisons between some of the big commercial Digital Forensic tools. After recently playing around with triage ideas with the MUS2018 CTF image compiled by Dave and Matt, I thought now is as good a time as any.
As we dig in, allow me to introduce you to hypothetical Jack. (Don’t worry, Jack is not a real person, but a photo generated by some funky algorithms on https://thispersondoesnotexist.com)
Jack would like to start his own Digital Forensic and Incident Response company in sunny South Africa. We’ll refer to this hypothetical company as DFIRJack Inc. DFIRJack Inc will focus on Windows Forensics for now. Following some Googling, Jack has come to a shortlist of commercial Digital Forensic tools that he wants to put through some tests. This is to aid him in making a final decision on where he should spend his hard earned cash.
Access Data FTK v7.0.0 (Date Released: Nov 2018)
BlackBag BlackLight v2018 R4 (Date Released: Dec 2018)
Magnet Forensics Axiom v2.9 (Date Released: Jan 2019)
OpenText EnCase v8.08 (Date Released: Nov 2018)
Side note 1_ Jack always thought that BlackLight was predominantly a Mac forensics tool, but after seeing posts on Twitter by one of their new training guys punting its Windows Forensic capabilities, he thought it can’t hurt to give it a shot.
Side note 2_ In the midst of writing this, Magnet released Axiom v2.10. By the time that I hit publish on this post, v2.11 will most likely be uploading for release. I’ll stick with version v2.9 for now. If you work for Magnet and want to persuade me with some swag to use v2.10 in this series going forward (or whatever version you’re going to be on next week Tuesday), send me a DM to negotiate.
Jack’s research has brought him to the conclusion that a single user license (the standard license for computer analysis, no cloud or mobile extras) will cost more or less the same for either FTK, Axiom or EnCase. Interestingly enough, he can buy two BlackLight licenses for the price of one of the other three.
After making some South African market related comparisons, Jack realized that he can either buy one of the aforementioned licenses (two in the case of BlackLight), or a secondhand 1992 Toyota Land Cruiser GX with 350,000km on the clock.
This is the GX:
Jack has long dreamt of buying a GX and taking the fam to the Central Kalahari Game Reserve (CKGR) in Botswana on an overland expedition. But that’ll have to wait, as it looks like he’ll be spending that money on a license dongle. What will it be? A GX or pure forensic joy? (Jack did find it odd that the only places where he could buy the licenses for these tools were the same companies that he’ll be competing against with DFIRJack Inc. Kind of like the Bulls only being allowed to buy their Rugby kit from the Stormers.)
In order for Jack to decide which license dongle will take the place of his GX, he opted to put these tools through some head-to-head tests.
We’ll call it Forensic Mania
Forensic Mania will run for an undefined number of rounds or blog posts. (Undefined, yes, but most likely until I lose interest and move on to a new blog idea…)
For the first series, we’ll use the MUS2018 CTF image of Max Powers to run the tests. Why this image?
There are write ups available online of the answers, so you can run and verify your answers (here and here)
It’s small enough (50GB) to throw the kitchen sink at it, and all the tools should be able to swim.
It’s a Windows 10 image. Windows 10 was released in July 2015 and brought lots of new forensic artifacts with it. Almost four years later, I’d expect that the big forensic tools should be able to exploit this.
It’s my blog, so I make the rules. Get off my lawn.
Bias alert: The forensic image was created for a CTF set to run specifically at MUS2018. Did Matt & Dave design the CTF image to benefit Axiom? Maybe. But we’ll try and be as objective as possible.
Following this series, I’m planning to run similar style tests against more real world images to see how the tools hold up.
Having seen Eric Zimmerman’s release of KAPE (or Kale, as Ovie Carroll calls it), I thought it could be insightful to play around with the Triage idea some more.
Basic premise for this post was this:
For an Incident Response type case, how many answers can you get to by just grabbing and analyzing selective data (triage) versus full disk images?
With remote acquisition, acquiring only a few GBs of data instead of full images can, in some cases, make a difference of a few hours – depending on network speed. The same calculation applies when it comes to processing the data.
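That difference is easy to put rough numbers on. A back-of-envelope sketch, with my own assumed figures (a 100 Mbps link, decimal units, no protocol overhead):

```python
def transfer_hours(size_gb, link_mbps):
    """Rough transfer time for size_gb gigabytes over a link_mbps link.

    Decimal units, zero protocol overhead - a back-of-envelope sketch only."""
    megabits = size_gb * 8 * 1000  # GB -> megabits
    return megabits / link_mbps / 3600

# A 500GB full image vs a 2.5GB triage set over an assumed 100 Mbps link:
full_image = transfer_hours(500, 100)       # ~11.1 hours
triage_mins = transfer_hours(2.5, 100) * 60 # ~3.3 minutes
```

Real-world links are usually slower and lossier than this, which only widens the gap in triage’s favour.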
To run this exercise, I dusted off the evidence files from the 2018 Vegas Magnet User Summit CTF. I managed to win the live CTF on the day, but didn’t get a full score. Oleg Skulkin and Igor Mikhaylov however did a write-up of the full CTF that we’re going to use.
For this test, I created a quick and dirty condition in EnCase that only targets specific data. Things like Registry files, Event logs, Browser Artifacts, File System Artifacts etc. A good place to start with a Triage list is to have a look at the SANS Windows Forensics “Evidence Of…” poster for areas of interest.
A condition in EnCase is basically a fancy filter, allowing you to filter for files with specific names, paths, sizes etc. Not that it matters, but I named my condition Wildehond, which is the Afrikaans name for Wild Dog or Painted Wolf. Wild dogs are known to devour their prey while it’s still alive, and that’s what we’re trying to do here… (You can Youtube it at your own risk).
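Outside of EnCase, the same idea can be sketched in a few lines of Python. The patterns below are my own illustrative picks in the spirit of the SANS “Evidence Of…” areas, not the actual Wildehond condition:

```python
import fnmatch

# Illustrative triage patterns (NOT the actual Wildehond condition):
# registry hives, event logs, browser databases and file system metadata.
TRIAGE_PATTERNS = [
    "*/windows/system32/config/sam",
    "*/windows/system32/config/system",
    "*/windows/system32/config/software",
    "*/windows/system32/winevt/logs/*.evtx",
    "*/appdata/local/google/chrome/user data/default/history",
    "*/appdata/roaming/mozilla/firefox/profiles/*/places.sqlite",
    "$mft",
    "$usnjrnl*",
]

def matches_triage(path):
    """True if a file path hits any of the triage patterns (case-insensitive)."""
    p = path.replace("\\", "/").lower()
    return any(fnmatch.fnmatch(p, pat) for pat in TRIAGE_PATTERNS)
```

Point a walker over a mounted image with a filter like this and you get the same effect as the condition: a small, targeted subset instead of the whole drive.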
Running my Wildehond condition in EnCase on the Max Powers hard drive image, resulted in 2,279 files totaling 2.5GB. The mock image of Max Powers, the victim in the CTF, was originally 50GB. After running the condition I created a Logical Evidence File of the filtered triage files.
So, the question is, can you get a full score for the CTF from processing and analyzing 5% of the data?
First off, I processed the ‘full’ image in Axiom v2.9:
And selected all available artifacts to be included:
Processing ran for around 45 minutes, with another 15 minutes to build connections. That’s a round 60 minutes.
The processing resulted in about 727,000 artifacts:
Next up, I used the exact same processing settings on the 2.5GB Triage image I created with EnCase and Wildehond.
Processing took 13 minutes, with another minute to complete the connections. A cool 14 minutes in total. This left us with around 290,000 artifacts for analysis:
So yes, as expected, there is a large difference (45 minutes) in processing 2.5GB instead of 50GB. (This difference will be a lot bigger between a real world 500GB drive and a 2.5GB triage set)
But the speed gain means nothing if we can’t get to the answers, so let’s go.
After running the processing, I did a side-by-side comparison between the two sets of data, and worked through the CTF questions on each side.
All of the questions were answerable on the full image processed with Axiom 2.9, except for three questions relating to the $MFT, where a tool like Eric Zimmerman’s MFTEcmd would do the trick.
This is how the two images did in providing answers:
So, with the Triage set of 2.5GB, we could answer 23 of the 28 Questions (82%… which is more than what I got for C++ at University).
However, real world incidents can differ quite a bit from question and answer style exercises, especially if you don’t know what exactly you are looking for.
For the 5 questions that could not be answered from the Triage set, here are the reasons why:
Wiped file names:
Strangely enough, the UsnJrnl did not parse in my Triage image.
From the full image:
However, nothing from my Triage set.
I confirmed that the file was present in my image:
So, to troubleshoot, I used Joachim Schicht’s UsnJrnl2Csv to try and parse the UsnJrnl that was in my Triage image.
And… It liked my UsnJrnl exported from the Triage image:
So… for some odd reason Axiom doesn’t recognize the $UsnJrnl:$J file when contained in my Triage LX01 image. Will do some more troubleshooting to figure out why this is the case.
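For anyone wanting to eyeball a raw $J export themselves, the USN_RECORD_V2 layout is documented and simple to walk. A minimal sketch (v2 records only, and it doesn’t handle the large sparse run of zeros at the start of a real $J, or v3/v4 records):

```python
import struct

def parse_usn_v2(buf, offset=0):
    """Parse one USN_RECORD_V2 from raw $UsnJrnl:$J bytes (sketch only)."""
    (length,) = struct.unpack_from("<I", buf, offset)          # RecordLength
    major, minor = struct.unpack_from("<HH", buf, offset + 4)  # version
    if major != 2:
        raise ValueError("only USN_RECORD_V2 handled in this sketch")
    (reason,) = struct.unpack_from("<I", buf, offset + 40)     # Reason flags
    name_len, name_off = struct.unpack_from("<HH", buf, offset + 56)
    name = buf[offset + name_off : offset + name_off + name_len].decode("utf-16-le")
    return {"length": length, "reason": reason, "filename": name}
```

Walking the file record by record (advancing by each record’s length, 8-byte aligned) gives you filenames and reason flags even when a suite refuses to parse the artifact.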
Browser to download Dropbox:
From the full image, the answer was quite clear: Maxthon
Yes, my Triage image contains lots of artifacts referencing Maxthon and Dropbox separately, but no immediate obvious link that Maxthon was used to download Dropbox. The main reason for this is that I did not capture Maxthon web histories (i.e. mxundo.dat) in my Triage image.
The last two questions where my Triage image came up short related to Email. As no email was targeted with my Triage, this was to be expected.
So, there you have it. In this case, you could do a pretty good job at getting a handle on your case by only using Triage data.
Will full disk imaging and analysis not provide you with better context? Yes, perhaps… but with the likely time savings from Triaging, it’s worth exploring it first.
– InfoSec stories scavenged for you from across the internet –
Three new stories this week:
Two Nigerians Visit Kuala Lumpur (and Hack 20 US Universities)
Phishing for iPhones (Breaking into iCloud-Locked phones)
A Bad Week At Eskom (Malware, data leakage and a breakup)
1_ Two Nigerians Visit Kuala Lumpur
Back in 2014, two Nigerian chaps (sorry folks, you’re not helping the stigma) were living with expired Visas in Kuala Lumpur.
Instead of using their new found freedom to enjoy the sights of say, the Petronas Twin Towers, they launched phishing campaigns. These campaigns were targeted at employees at 140 educational institutes across the United States. Once usernames and passwords were obtained via their phishing emails, Olayinka and Damilola acquainted themselves with the financial systems of said institutes.
Their end game was to change the banking details of employees in order to reroute salary payments to accounts they (or their more unscrupulous friends) controlled. These phishing attacks were successful at 20 schools; however, when Georgia Tech personnel didn’t get their Thanksgiving paychecks, they caught wind of what was going on and called the Feds.
After some proper investigation and cooperation with the Malaysian authorities, Olayinka and Damilola were given silver arm bracelets and extradited to the US to face trial. Olayinka got six years behind bars, with Damilola receiving three.
In addition to their prison sentences, the judge also ordered them to pay restitution of $56,175.44 each (about ₦20,358,214). Back in Lagos, this can buy them around 76,000 heads of lettuce, each.
2_ Phishing for iPhones
(Breaking into iCloud-Locked phones)
Joseph Cox and Jason Koebler over at Motherboard wrote a detailed piece titled: “How Hackers and Scammers Break into iCloud-Locked iPhones“. In this piece they delved into the world of thugs stealing iPhones and what goes into getting them unlocked.
If you are planning to not read their article, at least know this:
If your iPhone / iPad is stolen, the thug typically can’t do anything with it unless they have your unlock code or iCloud password. (Read the full piece to see why I say ‘typically’). This means they can’t factory reset it to sell it on.
However, there is a fairly good chance that the thief might target you with phishing or other social engineering attacks. The reason: to get you to give up your device lock code or your iCloud account details.
And if you’re thinking: ‘Ah, first world problems, won’t affect us down South’ Think again… same attacks have been running here for the last few years already.
3_ A Bad Week At Eskom
(Malware, data leakage and a breakup)
Eskom, our local (South African) electricity provider, is having an interesting week.
First, a guy on Twitter claimed to have found an online Eskom database exposing customer details. Following attempts to responsibly disclose this, he voiced his concerns in a tweet. However, Eskom came back stating that the database he identified is not theirs, but that they are investigating whether the data is…
Second, another guy on Twitter claimed to have identified an Eskom computer infected with a RAT. It does not seem like this is a critical system (i.e. SCADA stuff), but rather the computer of a Tannie who shops for Bernina sewing supplies at Makro (based on her desktop icons). But, nevertheless, still not where you want to be.
Finally, our President just announced that Eskom is being split into three separate entities (generation, transmission and distribution). This is in an attempt to prevent the corruption-riddled entity from dragging the entire country’s economy down the pooper. Not that it has anything to do with points one and two, but now you know.
And lastly… I’ll leave you with some wise electricity related words:
If you can’t fix it with a hammer, it’s an electrical fault.
Inspired by Timothy Ferriss’ book Tribe of Mentors, Marcus J. Carey compiled a list of the fourteen most common questions he gets asked about cybersecurity. These questions were then posed to seventy notable InfoSec practitioners, with their responses recorded across more than four hundred pages in Tribe of Hackers.
Question number two caught my eye:
“What is one of the biggest bang-for-the-buck actions that an organization can take to improve their cybersecurity posture?”
Assuming the 70 have seen some stuff over the years, I thought this would be good advice to follow for most companies. I was also interested to see if there would be any commonalities between the answers, so I read through the seventy responses and compiled a Top 7 list of common responses.
Again, go get the book, the proceeds are going to charity after all.
So, here we go:
The Top 7 Bang-For-Your-Buck Actions To Improve Your Security Posture.
For each of the Top 7 Bang-For-Your-Buck responses, I’ve quoted some comments from the answers. However, read the book for the full responses and more in-depth reasoning.
Number 7_ Conduct Risk and Threat Assessments (4 mentions) “Once an organization identifies and quantifies risks and the assets associated with their key function(s), it becomes inherently easier to identify potential causes of a critically impactful incident.” – Lesley Carhart
Number 6_ Hire Good People (6 mentions) “Hire good people. You will never spend money on something more effective within this domain than talented people.” – Ben Donnelly
Number 5_ Asset Management (7 mentions) “You can’t protect it if you can’t find it” – Cheryl Biswas
Number 4_ Least Privilege | Limit Administrative Access (8 mentions) “Get users out of the local administrators group” – Jake Williams
Number 3_ Do The Basics (9 mentions) “There’s a lot of talk about the basics. If the basics were easy, everybody would be doing them. But I think they’re still worth calling out, even though they are difficult.” – Wendy Nather
Number 2_ Security Culture (11 mentions) “Culture change impacts behavior, incentive models, accountability, and transparency — and myriad other critical enablers that help to mature and improve cybersecurity programs. Until organizational culture — comprised of values and behaviors—is substantially reformed, cybersecurity failures will continue to abound.” – Ben Tomhave
Number 1_ Security Awareness Training (14 mentions) “I have gotten the best return on investment from security awareness training.” – Brad Schaufenbuel “Invest in educating employees. Awareness goes a long way in a world where lying and “social engineering” are the key to most doors.” – Edward Prevost
– InfoSec stories scavenged for you from across the internet –
Your three stories for this week are:
How to Stuff a Chicken (Dailymotion Gets Attacked)
Old Ladies Making Payments (Mikko on Payment System Segregation)
Cyber Attacks In Real Life (Great Awareness Video from Hiscox)
1_ How To Stuff A Chicken
(Dailymotion suffers a credential stuffing attack)
If you are on the market for some roast chicken tips, here are a few great ones from Jamie: https://www.youtube.com/watch?v=bJeUb8ToRIw
Back to today’s actual program: Credential Stuffing Attacks.
The online video streaming site Dailymotion (which is a treasure trove for bootlegging MasterChef Australia episodes) was recently the target of a Credential Stuffing Attack. According to their website, Dailymotion attracts “300 million users from around the world, who watch 3.5 billion videos on its player each month.”
Dailymotion published the following alert on January 25th 2019:
The attack consists in “guessing” the passwords of some dailymotion accounts by automatically trying a large number of combinations, or by using passwords that have been previously stolen from web sites unrelated to dailymotion.
Credential Stuffing attacks aren’t anything new. In October 2018, the American cloud services provider Akamai published a report on Credential Stuffing attacks. They recorded around 8.35 billion credential stuffing attempts worldwide between May and June 2018, with the US and Russia being the main attack sources.
The report further notes:
“These botnets attempt to log into a target site in order to assume an identity, gather information, or steal money and goods. They use lists of usernames and passwords gathered from the breaches you hear about nearly every day on the news. They’re also one of the main reasons you should be using a password manager to create unique and random strings for your passwords. Yes, remembering that “*.77H8hi9~8&” is your password is difficult, but having your login at the bank compromised is a much bigger hassle.”
There you go, don’t reuse passwords!
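And if you’re wondering how a service can check your password against breach corpora without ever seeing it: Have I Been Pwned’s Pwned Passwords API uses a k-anonymity model, where only the first five characters of the password’s SHA-1 hash are sent to the range endpoint and the rest is matched locally. A quick sketch of the client-side half:

```python
import hashlib

def pwned_prefix_suffix(password):
    """Split a password's SHA-1 into the 5-char prefix sent to the range API
    and the suffix matched locally (the k-anonymity model of Pwned Passwords)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# e.g. prefix, suffix = pwned_prefix_suffix("hunter2")
# you'd then fetch the range for `prefix` and look for `suffix` in the response,
# so the full password (and even its full hash) never leaves your machine.
```

If the suffix shows up in the returned range, the password is in a known breach corpus and belongs in the bin.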
2_ Old Ladies and Payment Systems
(I’m not going to write too much about this one)
Mikko Hypponen from the Finnish Cyber Security company, F-Secure, did a keynote at BSides London in June 2018. During his talk ‘State of the Net’, he addressed the common issue of securing computer systems used for financial payments. However, he was not talking about securing the servers and things making up advanced payment systems. He was rather referring to the laptops and desktops used by employees who make the actual payments that keep your business running.
And… he makes a very valid point: Don’t use the same computer that you use for things like Facebook, Twitter, Email and Instagram for your business’ online banking system. Rather use a designated and segregated computer to load and process your payments. This simple step will go a long way in ensuring that the computers used for payments remain secure.
Have a look at the talk here:
3_ Cyber Attacks In Real Life
UK company Hiscox has made a clever video illustrating how a cyber attack would look if it happened in real life.
They show three attack scenarios: • IP Theft: Robbing companies of their ideas and inventions. • Phishing: Fraudulently pretending to be someone else. • Denial of Service: Flooding the target with traffic triggering a crash.
I think this is quite effective in creating awareness, especially for small businesses, without the usual FUD (Fear, Uncertainty and Doubt) used by lots of security vendors.
– InfoSec stories scavenged for you from across the internet –
Three stories this week (again):
DDoS-ing a Country (Guy who took Liberia offline is jailed)
Lazarus at the Waterhole (Company breached in nifty attack)
Incoming! (Hijacked camera sends false ‘Incoming Missile’ warning)
1_ DDoS-ing a Country
(Guy who used the Mirai botnet against Liberia gets jail time in the UK)
In 2016, researchers detected one of the largest publicly recorded Distributed Denial of Service (DDoS) attacks. The attack made use of hijacked webcams that were part of the Mirai botnet and generated traffic of up to 500 Gbps. This traffic was directed at the internet infrastructure of the West African nation of Liberia. See this 2016 article from Threatpost detailing the attack.
Fast forward three years and one Daniel Kaye has been sentenced to 32 months in the slammer for this DDoS attack. Turns out an employee of the Liberian telecoms company Cellcom (now rebranded as Orange Liberia) hired Mr Kaye to launch the attack on their competitor, Lonestar Cell MTN. Not only did it successfully disrupt Lonestar’s network, it also took down the entire country’s internet!
After the Liberian attacks, Mr Kaye attempted to take control of some of Deutsche Telekom’s routers for more attacks, but this ended up taking about 900,000 routers offline. A week later he again fumbled and inadvertently took down 100,000 UK based routers from three separate ISPs. In the end this was what got the fuzz to hunt him down.
(For a quirky video about an ‘actual’ watering hole, check this)
Attackers, allegedly linked to North Korea’s Lazarus group, have been fingered for an attack on a Chilean networking company. This company, Redbanc, is basically responsible for all of Chile’s ATM networks.
What makes this attack notable is the method by which Redbanc was compromised – a watering hole attack. Attackers put an advertisement up on LinkedIn, to which a Redbanc employee responded. This then led to a phony Skype interview with a Spanish-speaking ‘recruiter’. During the ‘interview’ the employee was tricked into downloading what appeared to be an application form. The application form, however, turned out to be malware which subsequently infected his work computer.
Luckily the introduced malware was picked up by Redbanc before too much snooping could be done on their network…
(Hijacked Nest camera sends false ‘Incoming Missile’ warnings)
Laura was cooking up a storm in her California kitchen, when the loud noise of an emergency broadcast interrupted the bubbling sounds from her simmering chicken broth:
You have three hours to evacuate! North Korea has launched a missile attack on the United States. Move!
Ok, she was probably not making a chicken broth, but you get the idea. Needless to say, panic ensued after the family heard the announcement, thinking it came from their television. It turned out that an attacker managed to hack into their internet connected (IoT) Nest Security Camera and play the fake alert. Luckily, sanity prevailed after an excruciating 30 minutes of trying to figure out which of her favorite cast iron frying pans to take along in the evacuation.
Reminds me of the saying: “The S in IoT stands for Security”.
A 20-year-old German man managed to obtain and publish a bunch of personal information of, among others, the Chancellor of Germany, Angela Dorothea Merkel, as well as the German head of state.
If, at this point, you are confused that Merkel is not the German head of state, welcome to the party. Here’s a video of the inauguration of the German President, Frank-Walter Steinmeier: https://www.youtube.com/watch?v=6UsXzwke6OE.
But we digress…
The suspect, who still lives with his parents, claimed to have acted alone when police arrested him earlier this month. The reason for his actions was attributed to anger at “public statements made by politicians, journalists and public figures”. It is unclear how he obtained the leaked information, but it is said to include contact information, credit card details, banking and financial details as well as ID cards and private chats.
First things first: If the title of this one made you think of the 1995 Ricky Martin song… here’s the music video for your pleasure: https://www.youtube.com/watch?v=vCEvCXuglqo (and the chap in this story’s name is Martin… Coincidence??)
In 2013, Martin Gottesfeld came to hear about the ‘medical’ child custody case of Justina Pelletier. She was being treated at Boston Children’s Hospital at the time. Taking her fight upon himself, Martin posted a video online claiming to be part of the Anonymous hacking group. He followed this by doxing personal information from people involved in her treatment and then launched a Distributed Denial of Service (DDoS) attack on the Boston Children’s Hospital. The DDoS knocked their internet facing systems offline for two weeks. Fearing arrest by the FBI, Martin and his wife bought a speedboat and fled for Cuba.
Unfortunately for the Gottesfelds, their boat broke down in rough seas and they were forced to send out a distress signal… only to be rescued by a Disney Cruise Liner of all things. In the end, he was arrested and sentenced to 10 years in prison for his efforts.
By this time, you would most probably have heard or read about this one, as it is widely reported on. But, before you start running down corridors screaming ‘the end is nigh!‘, read this first.
This isn’t a single new breach. To quote Troy Hunt, who runs Have I Been Pwned: The leaked data set is “made up of many different individual data breaches from literally thousands of different sources.”
Brian Krebs also notes that this is old data and offers the following advice relating to the ‘breach’:
“If this Collection #1 has you spooked, changing your password(s) certainly can’t hurt — unless of course you’re in the habit of re-using passwords. Please don’t do that. As we can see from the offering above, your password is probably worth way more to you than it is to cybercriminals (in the case of Collection #1, just .000002 cents per password).”
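If you want to know whether one of your passwords shows up in dumps like Collection #1, Troy Hunt’s Pwned Passwords service exposes a k-anonymity API: you send only the first five hex characters of the password’s SHA-1 hash and match the full suffix locally, so the password itself never leaves your machine. A minimal sketch of the local half (the actual network call is left as a comment):

```python
import hashlib

def k_anonymity_parts(password):
    """Split a password's SHA-1 digest into the 5-char prefix that is
    sent to the API and the 35-char suffix that stays on your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = k_anonymity_parts("password")
# You would then fetch https://api.pwnedpasswords.com/range/<prefix>
# and search the returned "suffix:count" lines for <suffix> locally.
print(prefix, suffix)
```

The service only ever sees the 5-character prefix, which maps to hundreds of candidate hashes, so it cannot tell which password you were checking.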
A common piece of advice we often give to users is:
Do not click any links in unexpected emails.
Good advice. Let’s put it to the test:
The South African Revenue Service (SARS) brand is notorious for being used in Phishing attacks, trying to trick users into divulging banking or other personal information.
See some of the samples here: (Yes, I know it’s a link…) http://www.sars.gov.za/TargTaxCrime/Pages/Scams-and-Phishing.aspx?k=
SARS also shares warnings for things to look out regarding phishing mails:
“Members of the public are randomly emailed with false “spoofed” emails made to look as if these emails were sent from SARS, but are in fact fraudulent emails aimed at enticing unsuspecting taxpayers to part with personal information such as bank account details.”
“Importantly, SARS will not send you any hyperlinks to other websites – even those of banks.”
Good advice. However, the following happened:
Is it a Phish?
Yesterday, I received an email message with the subject “Please rate your SARS experience“. Now, if you’re a law-abiding citizen of the Republic, you’ll know that your online eFiling deadline was 31 October 2018. So emails like these could be expected, but could also be phishing:
In this instance, Gmail is kind enough to show us that the email did not originate from SARS, but came in via bounce.mkt2356[.]com:
South African Revenue Service (SARS)firstname.lastname@example.org bounce.mkt2356.com
And they are asking me to click on a link, which is bad. So let’s investigate further…
The Post Office
For this analogy, we’ll run with the idea that I have a letter that I’d like to send to the friendly people at Eskom to enquire about their power generating capability, as we are having Stage 2 load shedding today.
I decide to drop my well worded letter off at the big red metal post box at the Hatfield Post Office in Pretoria, South Africa.
Upon receiving my letter, the Post Office adds something to it that, in email terms, is called a header. An email header keeps track of (among other things) all those stamps added to your envelope as it travels past different post offices and mail sorting stations on its way to the friendly folks at Eskom.
One of the many fields contained in the email header is called the Message-ID. This field can help us in our quest to determine where the email originated from. This is in essence the name and serial number of the post box at Hatfield Post Office, as well as a uniquely created tracking number for my letter.
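Python’s standard `email` module can pull these fields out of a raw message. A small sketch using a made-up message (the Message-ID, Received line and addresses below are invented for illustration; real headers come from the “Show original” view in Gmail or your own mail client):

```python
from email import message_from_string

# A made-up raw message for illustration only.
raw = """\
Received: from mail.mkt2356.com (mail.mkt2356.com [192.0.2.10])
Message-ID: <20190117.1234567@bounce.mkt2356.com>
From: South African Revenue Service <survey@bounce.mkt2356.com>
Subject: Please rate your SARS experience

Body goes here.
"""

msg = message_from_string(raw)

# The domain after the '@' in Message-ID is the 'post box' that
# generated the mail -- here it is not sars.gov.za at all.
print(msg["Message-ID"])
print(msg.get_all("Received"))
```

Walking the `Received` lines from bottom to top, plus the Message-ID domain, is the email equivalent of reading the postmarks on the envelope.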
(I’ve changed the URL a bit as it’s most likely unique to each address the mail was sent to)
But mkt2356[.]com isn’t SARS. Let’s take a look where you’ll end up if you clicked it:
So, clicking that link for http://links.mkt2356[.]com would actually get you to the legitimate SARS website https://tools.sars[.]gov.za/SatisfactionSurvey/Surveys/Index/32
However, to make things worse, mkt2356[.]com has a Certificate Name Mismatch error, which will cause lots of security products to warn you before visiting the site:
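A certificate name mismatch simply means the hostname you connected to is not covered by any name on the certificate that was presented. As a toy illustration of the check a TLS stack performs (this is a deliberate simplification of the real RFC 6125 rules: only a single left-most wildcard label is handled, and the domain names are just examples):

```python
def hostname_matches(cert_name, hostname):
    """Very simplified certificate-name check: exact label match, or a
    left-most '*' label matching exactly one label of the hostname."""
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False
    for c, h in zip(cert_labels, host_labels):
        if c != "*" and c != h:
            return False
    return True

# A cert issued for *.sars.gov.za would cover tools.sars.gov.za ...
print(hostname_matches("*.sars.gov.za", "tools.sars.gov.za"))
# ... but a hostname on a completely different domain fails the check,
# which is what triggers the browser warning.
print(hostname_matches("*.sars.gov.za", "links.mkt2356.com"))
```

When the function returns False, real clients show exactly the kind of warning screenshot above: the site may be legitimate, but nothing proves the certificate was issued for it.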
And here’s what it looks like when you eventually end up at the actual SARS website:
So, it turns out that the MKTxxx domains are owned by IBM’s Watson Campaign Automation digital marketing solution.
Ok, so at this point you are asking the following: “Come on dude, it’s just SARS using a marketing company to send out emails with unique links so that they can track who actually clicks it after which it takes you to the actual SARS page so no need for all this screenshots and stuff so get off your horse and enjoy your load shedding.”
Well, my point is this:
This is not helpful.
We can’t be telling people “DON’T CLICK ON ANYTHING! JUST DON’T” and then send them crappy survey emails with links we want them to click. So the message becomes:
DON’T CLICK ON ANYTHING!*
*Unless we send you stuff via a third party, so then please go ahead and click it, even if it was set up crappy, don’t worry, it’s fine, trust us.