Just over a year ago (Feb 2020) I started running weekly internal training CTFs @work.
These were aimed at the various levels of analysts in the SOC as well as the folks in Incident Response. It ultimately allowed us to test and train analysts in a question-answer style CTF, validating understanding of the tools and systems used in everyday work. One of the great things about it for me was that we were using actual data and tools from our own environment. I could see how analysts were answering questions, which for me is a great way to identify gaps in either technical knowledge or (mis)understanding of tool output.
Since then, I’ve long wanted to launch something similar in the public domain: a CTF aimed at SOC and DFIR (Digital Forensics and Incident Response) analysts. But just generating a decent amount of data on which to build a public CTF is a fair amount of work. Since the start of this year I kept coming back to the idea of running a public training CTF, and have now put together an MVP (Minimum Viable Product).
So say hallo to SocVel:
The name SocVel is derived from the well known South African term Stokvel. But more on that at a later time… MVP right.
What is the aim of all this?
For those new to the field
Most infosec vendors will have some training available to help you understand how to interpret what is on the screen when using their tools. Whether that is an AV solution, EDR, SIEM, SOAR or SNAFU. (The last one is not a real infosec term, although in this day and age, that could be deemed an acceptable way to refer to the industry.)
But, one of the main gaps I often see is the ability to link all the bits of information together. Some analysts may get overwhelmed by the noise in their environment, and struggle to identify the golden needles in a stack of more needles.
For me, it often comes down to asking the right questions about the situation in front of you, and being able to devise plans to answer those.
In addition, you need to be able to formulate these answers you’ve found during an incident to tell the story of what happened. Whether that story needs to be communicated to a colleague, a level up in the SOC, or an overworked CISO who really just wants to know if this is the big incident that finally pushes them over the edge.
If you are a veteran SOC or DFIR analyst, this is a great way for you to test your abilities as well as your tooling. Challenge yourself by not having the data presented the way you are used to getting it from your EDR, SIEM or triage scripts.
What makes this different from most DFIR ‘conference’ CTFs?
There is no time pressure. Each SocVel CTF should remain open for a month or so, depending on the number of participants or general interest.
Oftentimes the time zones in which CTFs are presented aren’t ideal. Yeah, I know they can’t cater for the entire globe, but doing a CTF between 01:00 and 07:00 local time on a Saturday morning is not my idea of fun.
Even if the CTF is in a respectable timeslot, the line of work most DFIR or SOC analysts find themselves in doesn’t always guarantee they’ll have the consecutive hours available to complete it.
Barrier To Entry
Sometimes CTFs are just plain whack in their asking (especially general hacking ones). Allow me to quote a post from hatsoffsecurity.com, referring to people who create CTFs:
“The challenge should be hard because the subject is hard, not because you’re being a d***”
My target market with SocVel is both experienced DFIR veterans and entry-level analysts. To that end, most questions in a SocVel CTF will have an unlockable hint available. This should be helpful enough for you to work out how to get to the answer.
You’re not going to learn anything if you get stuck at a point, and there is nothing or no one there to guide you in understanding what needs to be done.
Again, my aim for SocVel is to be a training CTF.
In an online conference CTF which took place last year, there were no limits on the number of incorrect answers you could submit. These were the stats for the winner:
Correct Submissions: 22 (5.49%)
Wrong Submissions: 379 (94.51%)
As a strategy for winning CTFs, that will probably get you there. If the question is “Which browser was used by the attacker?”, you just start submitting browser names until you get it right. However, I don’t want someone working on incidents who has a mere 5.49% success rate.
To combat this, SocVel will deduct points for each incorrect submission. You can still try and try again until you get it right, but it will cost you.
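The deduction idea is simple enough to sketch in a few lines of Python. Note that the point values and penalty below are made up for illustration; they are not SocVel’s actual scoring rules:

```python
# Sketch of a deduction-based scoring scheme (illustrative values only,
# not SocVel's actual rules).
def score_question(base_points: int, wrong_attempts: int, penalty: int = 2) -> int:
    """Award base_points for a correct answer, minus a penalty for every
    incorrect submission along the way, never going below zero."""
    return max(0, base_points - wrong_attempts * penalty)

print(score_question(10, 0))  # 10: solved on the first try
print(score_question(10, 3))  # 4: three wrong guesses first
print(score_question(10, 6))  # 0: a guessing spree wipes out the points
```

The `max(0, ...)` floor means guessing can cost you a question’s points, but never drive your total below what you’ve earned elsewhere.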
And with that, the first investigation (Pooptoria) is live:
The notorious threat actor Fancy Poodle has done it again! This time striking at Strikdaspoort Wastewater Treatment Plant in Pretoria, South Africa…
Do you have what it takes to solve the investigation while only using limited triage data? All before the license-dongle-wielding forensic analysts have checked their write blockers out of storage?
Quite the description for Emotet coming from a popular online malware sandbox.
CISA, The United States Cybersecurity and Infrastructure Security Agency, has described Emotet in a 2018 alert as the “most costly and destructive malware” affecting the US private and public sectors, whilst in 2020 labelling it as “one of the most prevalent ongoing threats”.
Now that is some introduction for a strain of malware that has been around since 2014.
But where did it originate, who is responsible for it, and what makes it such an insidious piece of malware still today?
The ‘Genesis’ of Emotet
We’ll start our journey back in the year of Flappy Birds and Ice Bucket challenges. A few months after Flappy Bird was abruptly removed from mobile app stores in early 2014, a blog post appeared by Trend Micro analyst Joie Salvio which introduced the world to “new banking malware” detected as Emotet. Joie was however not responsible for naming the malware, and it appears that the reason behind Trend Micro calling it Emotet will forever be lost in the sands of time.
Although this 27 June 2014 blog post was seemingly the first time the world heard the name Emotet, it was not the first time the actual malware was observed. Security researcher Mikko Hyppönen noted the following message dug out from his industry mailing list archives from 2014: “Looks like someone found yet another name for Geodo, which we’ve seen since at least a month or more (mid to late May 2014)”
But first: Feodo
So let’s take a step back to 2010. This time I’ll spare you references to Fruit Ninja…
During the latter part of 2010, cybersecurity firm FireEye reported on a banking trojan called Feodo. The report noted that they had been seeing this trojan in the wild since August 2010, with traits similar to the then-famous banking trojans Zbot and SpyEye.
Now, this is where you need to keep your wits about you. The Feodo trojan was later on also referred to as Cridex or Bugat. Cridex is where another famous banking trojan called Dridex is said to have evolved from.
Fast forward again to 2014 (cue flappy birds stopping their flapping all too unexpectedly). Abuse.ch reported in early June of that year that they were seeing a new version of the Feodo banking trojan “which some security experts started calling Geodo”. A few days after Trend Micro baptized the new Feodo as Emotet, Seculert also reported on a new version of Cridex (aka Feodo aka Bugat) whilst referring to it as Geodo.
The Geodo aka Emotet banking trojan continued to happily steal hard-earned cash from various victims between 2014 up until 2017 when a new version of Geodo arrived. The new version was called Heodo. (Now in keeping with the alphabet rotations, you would’ve thought that Geodo aka Emotet would then become Fmotet, but I guess that didn’t go well with focus groups, and the new Heodo malware was able to keep its Emotet naming.)
Here’s a quick Genesis summary:
First, there was Feodo (circa 2010), which was also known as Cridex or Bugat (although some might claim that Feodo was the successor to Cridex, and is not Cridex itself). Other researchers noted that Feodo was only first spotted in 2012.
In 2014 came Geodo (aka Emotet), the son of Feodo.
Finally, in 2017 came Heodo (aka Emotet), the son of Geodo.
As such, if in the year of Our Lord 2020 someone is referring to an active Emotet campaign or infection, they are referring to Heodo, and vice versa.
Banking Trojan 101
So the question remains: What does a Banking Trojan do?
At its core, a banking trojan has the purpose of intercepting online banking usernames and passwords from infected computers. Once this data is obtained, it is sent off to their controlling syndicates to use for fraudulent transactions or even sold on for others to use.
This interception of banking credentials can be done in several ways:
Logging keystrokes typed on the keyboard of an infected computer.
Intercepting username and password fields typed into logon forms.
Presenting victims with fake online banking login pages when they attempt to access their legitimate banking website.
Evolving With The Times
When Trend Micro analysed Emotet in 2014, they detailed how the malware would specifically monitor web activity on an infected machine. Once an online banking website was accessed which matched a predefined list of targeted banks, the malware would intercept the entered credentials. It was capable of doing this even if the banking website was accessed via an HTTPS connection.
We’ll call this Emotet version 1 (mainly because others did so).
Emotet version 2 and 3 came onto the scene that same year (2014), sporting functionality to automatically conduct fraudulent transactions on infected machines using automatic transfer systems (ATS).
In addition to the ATS functionality, Emotet went modular. This meant the malware had separate modules within itself which were responsible for different things, like stealing banking credentials, intercepting email login data, or distributing spam. Emotet’s loader was also changed into a separate module. A loader (in malware terms) is responsible for loading additional second-stage malware payloads onto the infected system.
Malspam All The Way
Since its early days, Emotet has gained its initial infections via malspam campaigns, that is, spam emails that either contain malware as an attachment or a link that downloads malware to the victim’s computer. These email messages had themes ranging from financial communications to urgent courier delivery messages.
In the early twenty-tens, most banking trojan operators were relying on tricking their victims into thinking that the email attachment or downloaded file named Invoice.pdf.exe was an actual urgent PDF invoice and not something much more dangerous.
Emotet has since moved onto predominantly making use of malicious PDF documents or macro-enabled MS Word document email attachments, or a link to download either.
In 2017, while Elon Musk and Mark Zuckerberg were fighting on Twitter over the threat posed by Artificial Intelligence, Emotet started its own delivery service.
This service evolved with the times and by July 2018, CISA labeled Emotet as a “modular banking trojan that primarily functions as a downloader or dropper of other banking trojans”. This meant that Emotet pretty much became a dodgy food delivery service, that will walk up to your door, ring the bell and when you open, smash a freshly cut sample of the Dridex trojan in your face. To round it off, the delivery guy will then jump your back fence and repeat the same ‘face-smashing-Dridex-delivery-service’ with your neighbors.
CISA estimated that Emotet infections have cost SLTT Governments (State, local, tribal, and territorial) up to $1 million per incident to remediate.
Emotet had five known spreader modules at this stage, which were put to work to allow it to further spread and infect other computers. These could be computers on the same network by attempting to brute force passwords, or using extracted email addresses from Outlook on an infected machine to send out additional spam emails.
Emotet’s delivery service business continued strong throughout 2018 and 2019. In late 2019, Emotet was observed making use of socially engineered spam emails: “Emotet’s reuse of stolen email content is extremely effective. Once they have swiped a victim’s email, Emotet constructs new attack messages in reply to some of that victim’s unread email messages, quoting the bodies of real messages in the threads.” Talos, September 2019.
In 2019, campaigns were noted where Emotet dropped the TrickBot trojan to steal sensitive information from infected machines. After TrickBot did its job, it would in turn download the Ryuk ransomware for a coup de grace.
The Spider In The Room
We still haven’t touched on the aspect of attribution. That is, who are the people behind Emotet?
One thing that is certain is that we have three names being used to refer to Emotet’s handlers:
The “Spider” in Mummy Spider is the umbrella term used to refer to cybercriminal groups that aren’t directly linked to Nation-State-Based Adversaries. Some researchers have also noted that Mummy Spider is a Russian-speaking group.
But, for now, this is the short answer you’ll get when asking the question “Who is behind Emotet?”: a likely Russian-speaking cybercriminal group.
Emotet Today and Tomorrow
To date, researchers have tracked three different botnets used to send Emotet malspam campaigns. Each of these has its own infrastructure, and they are referred to as Epoch 1, Epoch 2, and Epoch 3. The themes used in Emotet malspam campaign emails also adapt to the times or seasons. One of many examples is the recent ‘Halloween house party’ themed email lures that were used during October. The Emotet delivery service has also been pushing on, with the malware currently being tracked delivering the notorious QBot (aka Qakbot) malware.
Development of the Emotet malware appears to be ongoing as a new Emotet loader-type was discovered in early 2020, giving it the capability to spread to nearby wireless networks with poor passwords.
Even though there was a five-month hiatus at the beginning of this year without any notable Emotet malspam campaigns, it is still on track to end the year with a bang. Some security firms have stated that they were seeing between 1000% and 1300% increases in Emotet detections in the past months.
(it’s not lame if it makes you smile)
Has caused millions of dollars to be bled,
While helping the most treacherous cyber-attacks spread.
Need help? If you are looking for mitigation techniques against Emotet, most major cybersecurity firms have published advice on how to protect against it. Here is a comprehensive list put together by CISA: https://us-cert.cisa.gov/ncas/alerts/aa20-280a
Yep, Business Email Compromise (BEC) is a thing. (If you don’t believe me, read this, this and this.) There’s also an ugly step-sister of BEC that some people are calling Vendor Email Compromise (VEC).
Whatever you call it, a key step in these schemes usually involves an attacker gaining access to someone’s email account, whether that be the victim’s, their vendor’s or their client’s. Why? Well, the attacker wants to understand things like how the money flows in the targeted company, who is responsible for making payments, and what the prerequisites are for someone in that company to act on payment instructions. Remember, they are trying to trick their target into thinking that they are communicating with a legitimate someone. The end goal is usually to get a victim to act on a fraudulent payment instruction, or to change an existing vendor’s or client’s bank account to a new fraudulent account number. Not the most technical hacking ever, but these guys are patient, persistent and effective.
So, before the BEC scam gets into full swing, the attacker needs to gain access to (hack) their target’s email account. How do they do this? Well, the answer is “it varies” (like with most things in infosec).
Oftentimes it will start with something like a bulk or targeted phishing campaign. One example is where an attacker sends an email posing as a supplier with an urgent outstanding invoice, something that will get the average person’s attention. Now, let’s pause for a moment. Before you roll your eyes and mutter ‘you can’t patch stupid!’… humor me for a bit with a story from Hypothetical Jack:
It’s just after 14:00 on Monday afternoon. The long weekend from which hypothetical Jack and his family returned on Sunday evening remains a distant memory. Bogged down in his 2×2 cubicle, Jack reminisces about the Certified-Karoo-free-range lamb chops they braaied on Saturday evening under a clear sky. He can still hear the distant howling of a black-backed jackal echoing through the Waterberg mountains, each time the clacking sound of his fingers pounding away at his laptop’s keyboard subsides for a brief moment. Strangely enough, this time the howling seems to get louder, almost like the jackal is stalking his cubicle…. “Jack! Snap out of it!” and like that the harsh voice of his colleague Gertrude abruptly rips him out of his daydream and smacks him down on his high-back orthopedic office chair. “Peter is waiting for your monthly financial report card. He wants it before 4 today. It’s month end, remember…”
Jack reluctantly realigns his grey matter back to the humdrum of checking multiple spreadsheets and adding meaningless Gantt charts to an even more meaningless PowerPoint month-end presentation. Flipping between sheets, his eye catches the Outlook pop-up notification appearing from the right bottom of his screen. He was able to make out something about an outstanding invoice due this week before it disappeared again behind his litter of open spreadsheets.
Jack’s squirrel instinct takes over as he conveniently forgets about his looming month-end deadline and clicks over to Outlook to find the following email:
Jack has never dealt with “New Real Supplies CC” before and concludes that this is likely a new vendor they are dealing with. As he stares at the attachment, he remembers the one thing they taught him in his Information Security Awareness training session: DON’T CLICK LINKS.
“Well, I’ve got 99 problems but the link ain’t one” Jack mumbles as he opens the Payment Invoice.html attachment promising a 25% discount for early payment.
We’ll pause the story here for now. The point is that in a high-stress-multi-tasking-deadline-driven-environment, it doesn’t take much to lure Hypothetical Jack into opening an attachment from an unknown source.
Dodgy HTML Attachments
Let’s look at two real world examples of how attackers use HTML attachments to trick users into revealing their email account credentials. Remember, Jack is expecting to see some sort of invoice as indicated in the email message…
Attachment one: The Blurred Invoice
Have a closer look. The attackers did a great job with this one. Once the attachment is opened in a user’s browser, it shows a blurry ‘invoice’ document in the background. All that now stands between Jack and the un-blurred ‘invoice’, is this box asking for his email address and password:
Let’s have a look what happens when you enter your details in the form and click the “View Document” button.
To do this, we’ll look at the source code of the Payment Invoice.html attachment. Firstly, this shows a poor attempt at hiding the actual code with a <!-- Source code not available ... --> comment at the top of the page:
Scrolling down, you eventually get to the real, but quite hectically obfuscated source code:
Now, you can do either one of two things at this stage:
Spend your Friday afternoon attempting to make sense of the obfuscation OR
Open the attachment in a controlled environment, run something like Fiddler to capture HTTP traffic while you enter an email address and password in the form and hit “View Document”.
Naturally, I opted for Option 2, which gave us the following output in Fiddler:
From the above you can see that whatever the user enters into the “Email ID” and “Email Password” fields gets shipped off to the attacker’s URL. They now have a username and password to log into the victim’s email account.
Attachment 2: PDF File Inside
This one is all nice and official looking, even with a fake McAfee “Secured Page” badge. Again it is asking the user to enter their email address and password to access the document.
Having a look at the source code of the above gives the following obfuscated code:
This time it’s fairly easy to de-obfuscate the code. The above is hex-encoded, so an easy way to decode it is to pop it into CyberChef. One of the myriad functionalities of CyberChef is decoding hex to ASCII. Adding the above to the ‘Input’ box in CyberChef while selecting the ‘From Hex’ recipe gives you the following: (Note the Output box)
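If you’d rather script it, the same hex-to-ASCII decode CyberChef performs is a one-liner in Python. The sample string below is a harmless stand-in I made up for illustration; substitute the obfuscated blob from the actual attachment:

```python
# Decode a hex-encoded string to ASCII, the same operation as
# CyberChef's "From Hex" recipe. The sample below is made up;
# paste in the attachment's obfuscated blob instead.
sample = "3c68746d6c3e3c212d2d207068697368696e67202d2d3e"
decoded = bytes.fromhex(sample).decode("ascii")
print(decoded)  # <html><!-- phishing -->
```

Handy when you have a pile of samples to triage and don’t want to round-trip each one through a browser tab.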
Scrolling through the decoded source code of the attachment brings us to the following section:
Here we again see that all this page does is capture what the user entered in the form (i.e. email username and password) and ship that off to the attacker’s URL. No actual invoice for the victim to view, while the attackers help themselves to an email inbox using the newly acquired username and password.
Attackers will continue coming up with innovative ways to target users. As seen in these examples, they are luring users into entering their email credentials in order to get access to an ‘urgent invoice’.
Here are two ways that could assist in mitigating these types of attacks:
Secure your email. One step that can go a long way is to enable 2FA / MFA (multi-factor authentication). This will assist in preventing an attacker from logging into an email account, even if they were able to obtain the account’s username and password. They’ll still need an additional form of authentication (such as a uniquely generated code sent to a trusted device) to be able to log in.
Review your payment processes. Put extra validation processes in place to ensure payment instructions received via email are actually coming from who they say they are coming from. The following scenarios are often used by attackers:
A request is sent to a company to change banking details for an “existing” client. Attackers attempt to get them to pay a legitimate invoice into a fraudulent bank account.
An urgent payment needs to be made to a new account. Attackers attempt to impersonate a supplier, or even the company’s CFO, requesting that staff urgently act on a fraudulent payment instruction.
Well, hello and welcome to the second episode for Season 1 of #ForensicMania.
Today we are looking at answering the ‘Misc’ section of questions from the 2018 MUS CTF, putting our four tools head-to-head with some analysis work.
Why are we doing this? To give you, the reader, a view on how different commercial tools compare with digital forensic analysis.
To recap, in Episode 1 – Processing we processed our evidence file with the four tools, after which the scoring looked as follows: Axiom took a narrow lead with 10 coins, while Blacklight was chomping at its heels with 9. In third place was EnCase with 7 coins and bringing up the rear of the pack was FTK with 5.
“How will the scoring work this round” I hear the masses scream from the districts. For the ‘Misc’ section, we have 2 coins up for grabs for each question, that is, if the tool gets to the correct answer with an acceptable amount of effort, 2 coins are awarded. However, if the tool hides the answer under a rock, but you can still get to it, or if the answer is only halfway there, only 1 coin will be awarded. Finally, 0 coins for wrong answers.
This means we have a total of 22 coins up for grabs in this round.
So… will Axiom keep its narrow lead or trip over its connections? Does Blacklight know how to spell Shimcache? Can EnCase parse web histories? And FTK, will it fly, or just try to sell me Quin-C instead?
So many questions, so little time. Let’s dig in!
Timezone: What is the system’s timezone set to?
Correct Answer: Mountain Standard Time
Axiom parses this key as an Operating System artifact:
Under Blacklight’s “System” section, open the registry sections and it shows you the “TimeZoneInformation” registry key:
Following processing, EnCase has a ‘Case Analyzer’ option which provides various reports about artifacts identified. One of these shows the Time Zone:
In FTK, navigate to the System hive in the folder structure, right-click to open it with Registry Viewer, hit the ‘Common Areas’ toggle, and Bob’s your uncle:
However, this is an obvious artifact that could be shown more easily to the investigator in the ‘System Information’ tab, than having to open it with Registry Viewer first.
File Sequence Number: What is the MFT file sequence number for the Python27\python.exe file? [This is not the MFT entry number]
Correct Answer: 1
Axiom does not parse MFT file sequence numbers.
Blacklight shows the correct value in the “Data Structure” view for the file:
EnCase doesn’t parse the $MFT. However, if you’ve attended EnCase training at some stage, you’ve probably received an EnScript (“NTFS Single MFT Record & Attributes”) that will do this for you. Unfortunately, as this isn’t included as stock with EnCase, it doesn’t exist for most users (and it’s also not available in the Guidance App store).
FTK doesn’t parse MFT file sequence numbers.
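If your tool of choice won’t show it, the value is easy enough to pull out of a raw $MFT record yourself: in the NTFS FILE record header, the sequence number is the 16-bit little-endian value at offset 0x10. A minimal sketch, using a fabricated record for demonstration:

```python
import struct

def mft_sequence_number(record: bytes) -> int:
    """Read the sequence number from a raw $MFT FILE record.
    In the FILE record header it is the 16-bit little-endian
    value at offset 0x10."""
    if record[:4] != b"FILE":
        raise ValueError("not a FILE record")
    return struct.unpack_from("<H", record, 0x10)[0]

# Fabricated 1024-byte record: 'FILE' magic plus a sequence number of 1.
fake = bytearray(1024)
fake[:4] = b"FILE"
struct.pack_into("<H", fake, 0x10, 1)
print(mft_sequence_number(bytes(fake)))  # 1
```

Point it at the 1024-byte record carved at entry 86280 × 1024 into an exported $MFT and you have your answer without any EnScripts.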
FileName Lookup: What is the name of the file that has MFT entry of 86280?
Correct Answer: $USNJrnl.
In Axiom’s ‘File System’ view, you can filter on ‘MFT record number’ to get to the desired file:
Blacklight allows you to filter all files based on “File System ID”, which is the MFT Record Number:
EnCase shows the ‘MFT record number’ in the columns under the label ‘File Identifier’. So just show all files, and sort according to the ‘File Identifier’ to get to the answer:
In FTK, you can get to this quite easily by listing all entries and sorting according to MFT Record Number in the columns.
FileTimestamp: What is the Standard Information Attribute’s Access timestamp of the Windows\Prefetch\CMD.EXE-89305D47.pf file? [UTC in YYYY-MM-DD hh:mm:ss format]
Correct Answer: 2018-04-26 15:48:40
The Access timestamp from the Standard Information Attribute is what is displayed by our tools. Check out more info about Standard Information Attributes here:
Shown nicely in the File System Information artifact
So… Blacklight shows the Volume Serial Number for a specific volume in the “Details” section under “Disk View”. However, it shows the value in Big Endian (which you can then convert to Little Endian with another tool):
So, only halfway there.
When the volume is selected in the Tree view, it shows the volume serial for you:
Head over to the file structure and navigate to the OS volume, and click on Properties:
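The endian conversion Blacklight forces on you is a one-liner if you’d rather script it than reach for another tool. The serial value below is made up for illustration:

```python
# Byte-swap a volume serial number displayed in big-endian form.
# The serial value is fabricated for illustration.
big_endian = "A0B1C2D3"
swapped = bytes.fromhex(big_endian)[::-1].hex().upper()
print(swapped)  # D3C2B1A0
```

Reversing the byte string (`[::-1]`) is all an endianness flip is; the same trick works for any fixed-width value a tool shows you "backwards".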
YouTube Search: What term was searched in YouTube on 3/28/2018?
Correct Answer: “simpsons max power”.
Looking at ‘Web Related’ artifacts and applying a date filter for March 28th 2018 gets you the answer:
Hop on over to the “Internet” tab, and you’ll get the answer:
EnCase seems to be the tool that you hope the opposing party used when reviewing your client’s web histories… Cause there’s no way a sane person will enjoy using this for analysing internet artifacts.
Find the “ConsoleHost_history.txt” file which contains the PowerShell command history, and search in the file for “SRUDB.dat”.
Search the entire case for “SRUDB.dat”, which will lead you to the “ConsoleHost_history.txt” file.
For this question we’ll go with door number 2, as I didn’t (and don’t) necessarily know this path or filename off by heart.
Searching for “SRUDB.dat” shows the “ConsoleHost_history.txt” log listed as a Document artifact:
Blacklight does not have an index search function, only live searches. I ran a live search for ‘srudb.dat’, which took a few minutes to get to the PowerShell log with the ifind command in it.
Searching for “srudb.dat” in the indexed search provided a hit for the ConsoleHost_history.txt file, showing the ifind command.
Search for “SRUDB.dat” in FTK index search, which will get you bunch of hits, one of which is the “ConsoleHost_history.txt” file showing the command used.
Administrator Logon Count: How many times did Administrator logon to the system?
Correct Answer: 14
The ‘User Accounts’ artifact shows this for the Administrator account:
Blacklight’s ‘Actionable Intel’ section gives you the Logon Count for each local user account:
EnCase’s ‘System Info Parser’ artifact does provide info about the local user accounts, however, there’s nothing about logon count:
You can view the SAM hive’s structure from within EnCase, but again, they want you to work for it. In order to get this value in EnCase, you need to go to offset 66-67 of the F value of the user’s subkey:
This then translates to the integer value of 14.
Again, a simple artifact that should be shown to the user in a much simpler way. I’m giving EnCase a 0 for this one, as having to highlight offsets of the F value, is just not ideal.
FTK does have a ‘SAM Users’ section in its ‘System Information’ tab, but this only shows you SIDs and user names. So, find the SAM hive in the tree structure. This will then show the content in a readable way in the ‘Natural’ view pane, without having to open it with Registry Viewer:
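If you’d rather script the manual extraction EnCase makes you do, the logon count is the 16-bit little-endian integer at offsets 66-67 of the user’s binary F value, the exact bytes highlighted above. A sketch with a fabricated F value standing in for the real registry data:

```python
import struct

def logon_count_from_f_value(f_value: bytes) -> int:
    """Extract the logon count from a SAM user's binary F value:
    the 16-bit little-endian integer at offsets 66-67."""
    return struct.unpack_from("<H", f_value, 66)[0]

# Fabricated 80-byte F value with a logon count of 14 at offset 66.
fake_f = bytearray(80)
struct.pack_into("<H", fake_f, 66, 14)
print(logon_count_from_f_value(bytes(fake_f)))  # 14
```

Export the F value from the user’s subkey under SAM\Domains\Account\Users and feed it in; no offset-highlighting gymnastics required.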
Install Q: What day was the Go programming language installed on? [Answer format: YYYY-MM-DD]
Correct Answer: 2018-04-11
This is recorded in the ‘Installed Programs’ artifact
Blacklight did not list the Go programming language under its Application artifact:
But, you could however find it under the “Uninstall” registry key with the built in registry viewer:
EnCase lists installed software under: Artifacts > System Info Parser > Software.
However, Go was not listed by EnCase:
By manually traversing the SOFTWARE hive in EnCase, I got to the Uninstall key for Atom (based on what the other tools showed), but for the life of me I couldn’t figure out how to get actual data to be shown in EnCase for this key:
The System Information tab shows this quite easily:
Who Installed Atom?: Which user installed Atom? [Answer is the complete SID not the username]
For this question, I’m looking for proof that the Atom installer, AtomSetup_x64.exe, was downloaded (Chrome Web History) and that the file was executed by the user (Windows OS Artefact).
After searching for “Atom” in Axiom, you can get to the install file “AtomSetup-x64.exe”. In the connections view, it shows the installer being downloaded by the ‘maxpowers’ account in Chrome and then executed by the same account via the Shimcache:
In addition to the above, there is also a SRUM Application Resource Usage entry linking the installer to the profile.
To get the SID for the profile, head over to the ‘User Accounts’ tab which shows the SID for ‘maxpowers’:
Blacklight recorded the installer being downloaded by the profile ‘maxpowers’ in Chrome:
You can then link the SID to the profile via the registry viewer:
However, there was no artifact recording AtomSetup-x64.exe being executed
In EnCase, I could not get to the download of AtomSetup-x64.exe in its Chrome histories, nor any artefacts showing the execution of AtomSetup-x64.exe by ‘maxpowers’.
The ‘Internet/Chat’ tab in FTK shows the ‘maxpowers’ profile downloading the setup file:
However, FTK did not have any artefacts showing the file was executed by the user profile.
The ‘Sam Users’ section then shows you SIDs mapped to usernames.
Deletion in LogFile: The $LogFile shows at LogFile Sequence Number [LSN] 4433927454 a file is deleted. What is the name of the file that was deleted?
Correct Answer: 7z.dll
Axiom parses the $LogFile entries, so you can search for 4433927454, which will take you to the 7z.dll entry in the ‘$LogFile Analysis’ artifact
Blacklight did ‘parse’ the $LogFile, but not properly:
EnCase also ‘parsed’ the $LogFile, but doesn’t show LSN numbers:
FTK doesn’t parse the $LogFile.
And that’s it!
After a gruelling round, let’s have a look at the scoreboard for Episode 2:
Well, there you have it: Congratulations to Axiom for taking pole position once again. Taking second is BlackLight, with FTK following close behind in third.
[Update 2019-03-10] I’ve added the version numbers of Axiom, EnCase and FTK used. Also added details about EnCase Firefox support update coming in next release.
So, last night, after watching the Forensic Dinner (yeah yeah it’s the Forensic Lunch, but hello time zones) I was busy with some testing for #ForensicMania.
Dealing with a simple question ‘What was searched for on YouTube on xx date’, I hit a bit of a speed bump in EnCase. In short, I couldn’t get to the answer in EnCase for YouTube web histories viewed in Firefox. It was late, so I wasn’t sure whether I was to blame, or EnCase. With that, I stopped with the #ForensicMania stuff and thought: let’s do some targeted testing.
The next morning (today), I decided to do a quick and simple test:
Conduct a few searches in Chrome and Firefox
Parse the web histories with Axiom, EnCase and FTK
Compare the results
I fired up Chrome and Firefox, and made sure they were up to date:
With last night’s Forensic Lunch still fresh in my mind, I Googled the following between 11:00 and 12:00 on 2019-03-09.
The same searches were done with Chrome first, and then with Firefox.
Google search: “Is lee whitfield brittish?” Result opened: “https://www.sans.org/instructors/lee-whitfield”
Google search: “How do you spell british?” Result opened: “https://en.oxforddictionaries.com/spelling/british-and-spelling”
Google search: “Where did Matt get the cool blue sunglasses?” Result opened: https://www.menshealth.com/style/a26133544/matthew-mcconaughey-blue-colored-sunglasses/
Google search: “Why is no one having lunch on the Forensic Lunch?” Result opened: https://www.youtube.com/user/LearnForensics/videos
YouTube search: “drummer at the wrong gig” Video played: https://www.youtube.com/watch?v=ItZyaOlrb7E
And then played this one from the Up Next bar: https://www.youtube.com/watch?v=RvatDKpc0SU
Google search: “Can you nominate yourself in the Forensic 4Cast awards?” Result opened: https://www.magnetforensics.com/blog/nominate-magnet-forensics-years-forensic-4cast-awards/
Following this, I created a logical image of the Chrome and Firefox histories on my laptop with EnCase. The total size for the histories was 3GB. (Yes, lots of historic stuff included there as well.)
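As a manual cross-check before pointing fingers at any tool, the Google searches can be pulled straight out of Chrome’s History database, which records recognised queries in the keyword_search_terms table linked to the urls table. A sketch assuming the schema as I understand the 2019-era builds (verify the column names against your own Chrome version):

```python
import sqlite3

def chrome_search_terms(history_db):
    # keyword_search_terms holds recognised search queries; url_id links each
    # term to the urls table, whose last_visit_time is WebKit-epoch
    # microseconds (since 1601-01-01 UTC).
    con = sqlite3.connect(history_db)
    rows = con.execute(
        "SELECT k.term, u.url, u.last_visit_time "
        "FROM keyword_search_terms k JOIN urls u ON u.id = k.url_id "
        "ORDER BY u.last_visit_time"
    ).fetchall()
    con.close()
    return rows
```

Running this over the imaged History file should list the “Is lee whitfield brittish?” series of searches in order, ground truth for scoring the three tools below.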
So the testing is pretty straightforward: can I get to the above-listed searches and web histories in Axiom, FTK and EnCase? Let’s see:
Parsing the logical image in Axiom gave us the following for ‘Web related’ artifacts:
Result: Great Success
Same thing, processed the image and got the following from the ‘Internet’ tab:
Again: Great Success
Now, let’s fire up the ‘2019 SC Magazine Winner’ for ‘Best Computer Forensic Solution’…
After processing the image with EnCase, we hobble on over to the ‘Artifact’ tab and open the ‘Internet Records’ section.
First up, Chrome histories:
Great, it works as expected.
Next up, Firefox (The browser with 840,689,200 active users in the past 365 days)
And this is where we ran into trouble: EnCase was able to parse Firefox Cookies and some cache files, but for the life of me I couldn’t get to any actual browsing histories.
I suspect that, as it’s shown on the processing window, EnCase only supports Firefox up until v51.0.0. The current Firefox version is v65.
Firefox version 51.0.0 was released to channel users on January 24th 2017. That is the same month when Ed Sheeran released his single “Shape of You”. (And now you can’t unsee the singing dentist guy covering the song)
What I’m trying to say is that Firefox v51 is old.
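If you suspect a tool is dropping Firefox histories, a quick way to check is to query places.sqlite directly. A minimal sketch, assuming the standard moz_places layout (last_visit_date is microseconds since the Unix epoch and can be NULL for bookmarks that were never visited):

```python
import sqlite3
from datetime import datetime, timezone

def firefox_history(places_db):
    # moz_places holds one row per URL; last_visit_date is microseconds
    # since the Unix epoch (NULL for never-visited entries, so filter those)
    con = sqlite3.connect(places_db)
    rows = con.execute(
        "SELECT url, title, last_visit_date FROM moz_places "
        "WHERE last_visit_date IS NOT NULL ORDER BY last_visit_date"
    ).fetchall()
    con.close()
    return [(u, t, datetime.fromtimestamp(ts / 1_000_000, tz=timezone.utc))
            for u, t, ts in rows]
```

Twenty seconds of sqlite3 against the imaged profile confirms the browsing records are sitting right there in the evidence, whatever the processing engine makes of them.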
I’ve logged a query with OpenText about this and will update this post if and when I get feedback. (Really hoping this is something I’m doing wrong, but we’ll see.)
[Update 2019-03-10: EnCase v8.09, set for release in April, is said to have updated Firefox support]
What’s the point of this post?
Test stuff. If something doesn’t look right, test it.
You don’t need test images to test your tools. If you have a laptop or a mobile phone, then you have test data.
Don’t assume stuff. If my results above are correct, there’s a good chance you could have missed crucial Firefox data if you were only relying on EnCase.
If I’m wrong, then at least I’ll hopefully know pretty soon how to get EnCase to parse Firefox histories correctly… and someone else might learn something too.
Welcome to Forensic Mania 2019 – Episode 1. If you’re new to #ForensicMania, catch the full lowdown here.
To recap, we are testing the latest versions of four of the big commercial forensic tools against the MUS2018 CTF image.
Side note_Following my intro post, promises were made by certain Magnet folk (you can run but you can’t Hyde). So I reprocessed with the newly released version of Axiom, v2.10. If said promises aren’t kept, we might need to roll back to version 1.0.9 just for fun.
Today we’ll be running through processing the MUS forensic image with the four tools.
Analysis Workstation Details
For these tests, we will be using a Dell workstation, with the following specs:
Intel Xeon Gold 6136 CPU.
Windows 10 Pro.
OS Drive: 2.5″ AData SSD.
Case directories and the MUS2018 image file were located on separate Samsung M.2 SSDs.
How does the scoring work?
The scoring for this section kept the adjudication committee deadlocked in meetings for weeks, grappling with the question: “How do you score forensic tools on processing in a fair manner?” After a few heated arguments, the committee realised that this was not the NIST Computer Forensics Tool Testing Program, but a blog. With that pressure off, they created a very simple scoring metric.
First, to get everyone on the same page, consider the following: Say MasterChef Australia is having a pressure test, where each of the Top 25 need to bake a lemon meringue tart. Best tart wins an immunity pin.
Being the first contestant to separate your egg yolks from the whites is pretty cool, might even get some applause from the gantry. But, the proof will always be in the pudding, which is when you start whisking your whites for the meringue. If you did a messy job during the separation, you ain’t going to see firm glossy peaks forming, no matter how hard you whisk.
This then is typically where Maggie Beer and George come walking over to your bench and drop a comment like “a good meringue is hard to beat“. You get the point.
The Scoring System
In this round, the tools will be judged in two categories, each with 5 points up for grabs. These two categories are:
1_ Processing Progress Indication. We’ll be looking at how well the tool does at providing accurate and useful feedback during processing. “Does it matter?” you may ask… Well, it is the year of our Lord 2019. I can track the Uber Eats guy on my phone until he gets to my door. Similarly, I expect a forensic tool to at least provide some progress indication, other than just “go away, I’m still busy”.
2_ Time to Completion. Yes, the big one. Pretty straightforward: how long did it take to complete the processing task?
Points will be awarded in the form of limited edition (and much coveted across the industry) #ForensicMania challenge coins:
Side note_I initially planned on putting a bunch more categories in adjudicating the processing phase (things like how customizable are the processing options, ease of use, can it make waffles etc) but it got a bit too complex and subjective. These tools have fairly different approaches to processing data, so let’s leave the nitpicking for next week when we start analyzing data.
This means there is a total of 10 points up for grabs in Episode 1.
Setting up processing
In order to keep these posts within a reasonable readable length, I’m not going to delve into each granular step that was followed. For each tool, I’ve provided the main points of what was selected in processing, as well as accompanying screenshots.
Full Searches on partitions, Unpartitioned space search on the unpartitioned space of the drive.
Keyword Search Types: Artifacts. Note: Axiom does not have the functionality to do a full text index of the entire drive’s contents, but only indexes known artifacts.
Searching of archives and mobile backups.
Hashing (MD5 and SHA1). Limited to files smaller than 500MB.
Enabled processing of the default custom file types.
All computer artifacts were selected
File Signature Analysis
Hashing (MD5 and SHA1)
File Carving: All available file types were selected
Advanced Options: All available options were selected (see screenshots)
File Signature Analysis
Hash Analysis (MD5 & SHA1)
Expand Compound Files
Find Internet Artifacts
Index text and Metadata
System Info Parser (All artifacts)
File Carver (All predefined file types, Only in Unallocated and Slack)
Windows Event Log Parser
Windows Artifact Parser (Including Search Unallocated)
For FTK, I used their built-in ‘Forensics’ processing profile, but tweaked it a bit.
Hashing (MD5 & SHA1)
Expand all available compound file types
Flag Bad Extensions
Search Text Index
Thumbnails for Graphics
Data Carving (Carving for all available file types)
Process Internet Browser History for Visualization
Generate System Information
To give each tool a fair chance, the MUS image was processed twice with each.
Results: Processing Progress Indication.
Here are the results for each tool’s ability to provide the user with adequate feedback regarding what is being processed:
Axiom’s processing window is quite easy to make sense of. It shows which evidence source is currently processing (partition specific), as well as which ‘search definition’ it’s currently on. During the testing, the percentage progress indicators also seemed to be reliable.
In the category of “Processing Progress Indication”, the adjudication committee scored Axiom: 5 out of 5.
BlackLight also has a great granular processing feedback window. For each partition, it shows what it’s busy processing as well as progress indicators. These were deemed reliable during the tests.
In the category of “Processing Progress Indication”, the adjudication committee scored Blacklight: 5 out of 5
EnCase’s processing window seems a bit all over the show. More like something you’ll look at for diagnostic info, not processing progress. It was a bit difficult to gauge what it was actually busy with. It does have a progress indicator showing a ‘percentage complete’ value, however, this was quite unreliable. When processing the MUS image, it hit 99% complete quite quickly and then continued processing for another hour at 99%, before completing. This happened with both tests. I again processed the same image on a different workstation and got similar results.
In the category of “Processing Progress Indication”, the adjudication committee scored EnCase: 3 out of 5.
FTK’s processing window is quite straightforward. Perhaps too much so. It does have an overall progress bar, although not entirely accurate, and shows which evidence item (e01) it’s currently processing. However, because you have no idea what it’s actually busy with processing, it remains a waiting game to see how many files it discovers, processes and indexes. And once you think it’s done, you get a surprise with a couple hours of “Database Optimization”.
In the category of “Processing Progress Indication”, the adjudication committee scored FTK: 3 out of 5.
Results: Time To Completion.
These are pretty straight forward. How long did it take to process the MUS image with the above noted processing settings?
Axiom took 52 minutes and 31 seconds to process the MUS image. Following this, the ‘building connections’ process took another 17 minutes and 25 seconds.
This gave Axiom a total of 1 hour, 9 minutes and 56 seconds.
BlackLight took 1 hour flat to process the image. Following this, the option was available to carve the Pagefile for various file types. This added another 14 minutes and 30 seconds.
This gave BlackLight a total of 1 hour, 14 minutes and 30 seconds.
EnCase took 1 hour, 23 minutes and 25 seconds.
No additional processing required, all jobs were completed in one go.
FTK took 59 minutes and 9 seconds to process and index the image. That’s faster than all the others… But, before you celebrate: Following the processing, FTK kicked off a “Database Optimization” process. This took another 2 hours and 17 minutes! Although it’s enabled by default, you can switch off this process in FTK’s database settings. However, according to the FTK Knowledge Base “Database maintenance is required to prevent poor performance and can provide recovery options in case of failures.” Seems like it’s something you’d rather want to run on your case.
This gave FTK a total of 3 hours, 16 minutes and 9 seconds.
Let’s dish out some coins:
For winning the time challenge, Axiom gets 5/5
Not too much separated BlackLight and EnCase from Axiom; both get 4/5
And, bringing up the rear, taking almost 3 times as long as the others, FTK with 2/5
Before we look at the totals for this week, here is the result of the poll from last week:
Pretty much in line with what we saw this week…
Here’s your scoreboard after S01E01 of #ForensicMania
Tune in next week to see if Axiom can keep its narrow lead, whether BlackLight knows what to do with a Windows image, and if FTK can pick itself up by its dongles. We’ll start with analyzing the MUS image, so stay tuned for all the drama, first and only on The Swanepoel Method.
Side note_It is still early days. Don’t go burning (or buying) any dongles after this post alone. The proof will be in the analysis capabilities of these tools, so check back next week.
I’ve long been wanting to publish comparisons between some of the big commercial Digital Forensic tools. After recently playing around with triage ideas with the MUS2018 CTF image compiled by Dave and Matt, I thought now is as good a time as any.
As we dig in, allow me to introduce you to hypothetical Jack. (Don’t worry, Jack is not a real person, but a photo generated by some funky algorithms on https://thispersondoesnotexist.com)
Jack would like to start his own Digital Forensic and Incident Response company in sunny South Africa. We’ll refer to this hypothetical company as DFIRJack Inc. DFIRJack Inc will focus on Windows Forensics for now. Following some Googling, Jack has come to a shortlist of commercial Digital Forensic tools that he wants to put through some tests. This is to aid him in making a final decision on where he should spend his hard earned cash.
Access Data FTKv7.0.0 (Date Released: Nov 2018)
BlackBag BlackLightv2018 R4 (Date Released: Dec 2018)
Magnet Forensics Axiomv2.9 (Date Released: Jan 2019)
Opentext EnCase v8.08 (Date Released: Nov 2018)
Side note 1_ Jack always thought that BlackLight was predominantly a Mac forensics tool, but after seeing posts on Twitter by one of their new training guys punting its Windows Forensic capabilities, he thought it couldn’t hurt to give it a shot.
Side note 2_ In the midst of writing this, Magnet released Axiom v2.10. By the time that I hit publish on this post, v2.11 will most likely be uploading for release. I’ll stick with version v2.9 for now. If you work for Magnet and want to persuade me with some swag to use v2.10 in this series going forward (or whatever version you’re going to be on next week Tuesday), send me a DM to negotiate.
Jack’s research has brought him to the conclusion that a single user license (the standard license for computer analysis, no cloud or mobile extras) will cost more or less the same for either FTK, Axiom or EnCase. Interestingly enough, he can buy two BlackLight licenses for the price of one of the other three.
After making some South African market related comparisons, Jack realized that he can either buy one of the aforementioned licenses (two in the case of BlackLight), or a secondhand 1992 Toyota Land Cruiser GX with 350,000km on the clock.
This is the GX:
Jack has long dreamt of buying a GX and taking the fam to the Central Kalahari Game Reserve (CKGR) in Botswana on an overland expedition. But that’ll have to wait, as it looks like he’ll be spending that money on a license dongle. What will it be? A GX or pure forensic joy? (Jack did find it odd that the only place where he can buy the licenses for these tools were from the same companies that he’ll be competing against with DFIRJack Inc. Kind of like the Bulls only being allowed to buy their Rugby kit from the Stormers.)
In order for Jack to decide which license dongle will take the place of his GX, he opted to put these tools through some head-to-head tests.
We’ll call it Forensic Mania
Forensic Mania will run for an undefined number of rounds or blog posts. (Undefined, yes, but most likely until I lose interest and move on to a new blog idea…)
For the first series, we’ll use the MUS2018 CTF image of Max Powers to run the tests. Why this image?
There are write ups available online of the answers, so you can run and verify your answers (here and here)
It’s small enough (50GB) to throw the kitchen sink at it, and all the tools should be able to swim.
It’s a Windows 10 image. Windows 10 was released in July 2015 and brought lots of new forensic artifacts with it. Almost four years later, I’d expect that the big forensic tools should be able to exploit this.
It’s my blog, so I make the rules. Get off my lawn.
Bias alert: The forensic image was created for a CTF set to run specifically at MUS2018. Did Matt & Dave design the CTF image to benefit Axiom? Maybe. But we’ll try and be as objective as possible.
Following this series, I’m planning to run similar style tests against more real world images to see how the tools hold up.
Having seen Eric Zimmerman’s release of Kape (Or Kale as Ovie Carol calls it) I thought it could be insightful to play around with the Triage idea some more.
Basic premise for this post was this:
For an Incident Response type case, how many answers can you get to by just grabbing and analyzing selective data (triage) versus full disk images?
With remote acquisition, acquiring only a few GB of data instead of full images can, in some cases, make a difference of a few hours – depending on network speed. The same calculation applies when it comes to processing the data.
To run this exercise, I dusted off the evidence files from the 2018 Vegas Magnet User Summit CTF. I managed to win the live CTF on the day, but didn’t get a full score. Oleg Skulkin and Igor Mikhaylov, however, did a write-up of the full CTF that we’re going to use.
For this test, I created a quick and dirty condition in EnCase that only targets specific data. Things like Registry files, Event logs, Browser Artifacts, File System Artifacts etc. A good place to start with a Triage list is to have a look at the Sans Windows Forensics “Evidence Of…” poster for areas of interest.
A condition in EnCase is basically a fancy filter, allowing you to filter for files with specific names, paths, sizes etc. Not that it matters, but I named my condition Wildehond, which is the Afrikaans name for Wild Dog or Painted Wolf. Wild dogs are known to devour their prey while it’s still alive, and that’s what we’re trying to do here… (You can Youtube it at your own risk).
Running my Wildehond condition in EnCase on the Max Powers hard drive image, resulted in 2,279 files totaling 2.5GB. The mock image of Max Powers, the victim in the CTF, was originally 50GB. After running the condition I created a Logical Evidence File of the filtered triage files.
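For anyone wanting to reproduce the idea without EnCase, the same sort of ‘condition’ can be roughed out in a few lines of Python against a mounted image. The patterns below are illustrative only, a small subset inspired by the SANS poster rather than my full Wildehond list:

```python
import fnmatch
import os

# A rough Python take on a triage 'condition': path patterns for registry
# hives, event logs and browser artifacts. Illustrative, not exhaustive.
TRIAGE_PATTERNS = [
    "*/Windows/System32/config/SAM",
    "*/Windows/System32/config/SYSTEM",
    "*/Windows/System32/config/SOFTWARE",
    "*/Windows/System32/winevt/Logs/*.evtx",
    "*/Users/*/NTUSER.DAT",
    "*/Users/*/AppData/Local/Google/Chrome/User Data/Default/History",
    "*/Users/*/AppData/Roaming/Mozilla/Firefox/Profiles/*/places.sqlite",
    "*/$MFT",
]

def triage_files(mount_root):
    # Walk a mounted image and keep only files matching the triage patterns
    hits = []
    for dirpath, _dirs, files in os.walk(mount_root):
        for name in files:
            full = os.path.join(dirpath, name).replace("\\", "/")
            if any(fnmatch.fnmatch(full, pat) for pat in TRIAGE_PATTERNS):
                hits.append(full)
    return hits
```

Note this plain file walk won’t grab locked or filesystem-internal items like $MFT on a live system; for that you need raw disk access, which is exactly why tools like EnCase conditions or KAPE targets exist.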
So, the question is, can you get a full score for the CTF from processing and analyzing 5% of the data?
First off, I processed the ‘full’ image in Axiom v2.9:
And selected all available artifacts to be included:
Processing ran for around 45 minutes, with another 15 minutes to build connections. That’s a round 60 minutes.
The processing resulted in about 727,000 artifacts:
Next up, I used the exact same processing settings on the 2.5GB Triage image I created with EnCase and Wildehond.
Processing took 13 minutes, with another minute to complete the connections. A cool 14 minutes in total. This left us with around 290,000 artifacts for analysis:
So yes, as expected, there is a large difference (45 minutes) in processing 2.5GB instead of 50GB. (This difference will be a lot bigger between a real world 500GB drive and a 2.5GB triage set.)
But this doesn’t mean anything if we can’t get to the answers, so let’s go.
After running the processing, I did a side-by-side comparison between the two sets of data, and worked through the CTF questions on each side.
All of the questions were answerable on the full image processed with Axiom 2.9, except for three questions relating to the $MFT, where a tool like Eric Zimmerman’s MFTECmd would do the trick.
This is how the two images did in providing answers:
So, with the Triage set of 2.5GB, we could answer 23 of the 28 Questions (82%… which is more than what I got for C++ at University).
However, real world incidents can differ quite a bit from question and answer style exercises, especially if you don’t know what exactly you are looking for.
For the 5 questions that could not be answered from the Triage set, here are the reasons why:
Wiped file names:
Strangely enough, the UsnJrnl did not parse in my Triage image.
From the full image:
However, nothing from my Triage set.
I confirmed that the file was present in my image:
So, to troubleshoot, I used Joachim Schicht’s UsnJrnl2Csv to try and parse the UsnJrnl that was in my Triage image.
And… It liked my UsnJrnl exported from the Triage image:
So… for some odd reason Axiom doesn’t recognize the $UsnJrnl:$J file when contained in my Triage LX01 image. Will do some more troubleshooting to figure out why this is the case.
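For what it’s worth, USN records are simple enough to parse by hand, which makes troubleshooting like this easier. A sketch of a USN_RECORD_V2 parser following Microsoft’s documented layout (60-byte fixed header, then a UTF-16LE file name at the stated offset):

```python
import struct
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)
USN_REASON_FILE_DELETE = 0x00000100

def parse_usn_v2(buf, offset=0):
    # Parse a single USN_RECORD_V2: RecordLength, Major/MinorVersion,
    # File/ParentFileReferenceNumber, Usn, TimeStamp (FILETIME), Reason,
    # SourceInfo, SecurityId, FileAttributes, FileNameLength, FileNameOffset
    (rec_len, major, minor, frn, parent_frn, usn, ts,
     reason, source, sec_id, attrs, name_len, name_off) = struct.unpack_from(
        "<IHHQQQQIIIIHH", buf, offset)
    name = buf[offset + name_off: offset + name_off + name_len].decode("utf-16-le")
    # FILETIME is 100-nanosecond intervals since 1601-01-01 UTC
    when = FILETIME_EPOCH + timedelta(microseconds=ts / 10)
    return {"usn": usn, "name": name, "reason": reason, "time": when,
            "deleted": bool(reason & USN_REASON_FILE_DELETE),
            "length": rec_len}
```

Walking an extracted $J with this (stepping forward by each record’s length, skipping zero padding) gives you an independent read of the journal when a suite silently drops it.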
Browser to download Dropbox:
From the full image, the answer was quite clear: Maxthon
Yes, my Triage image contains lots of artifacts referencing Maxthon and Dropbox separately, but no immediate obvious link that Maxthon was used to download Dropbox. The main reason for this is that I did not capture Maxthon web histories (i.e. mxundo.dat) in my Triage image.
The last two questions where my Triage image came up short related to Email. As no email was targeted with my Triage, this was to be expected.
So, there you have it. In this case, you could do a pretty good job at getting a handle on your case by only using Triage data.
Will full disk imaging and analysis not provide you with better context? Yes, perhaps… but given the time savings Triage offers, it’s worth exploring first.
– InfoSec stories scavenged for you from across the internet –
Three new stories this week:
Two Nigerians Visit Kuala Lumpur (and Hack 20 US Universities)
Phishing for iPhones (Breaking into iCloud-Locked phones)
A Bad Week At Eskom (Malware, data leakage and a breakup)
1_ Two Nigerians Visit Kuala Lumpur
Back in 2014, two Nigerian chaps (sorry folks, you’re not helping the stigma) were living with expired Visas in Kuala Lumpur.
Instead of using their newfound freedom to enjoy the sights of say, the Petronas Twin Towers, they launched phishing campaigns. These campaigns were targeted at employees at 140 educational institutes across the United States. Once usernames and passwords were obtained via their phishing emails, Olayinka and Damilola acquainted themselves with the financial systems of said institutes.
Their end game was to change the banking details of employees in order to reroute salary payments to accounts they (or their more unscrupulous friends) controlled. These phishing attacks were successful at 20 schools; however, when Georgia Tech personnel didn’t get their Thanksgiving paychecks, they caught wind of what was going on and called the Feds.
After some proper investigation and cooperation with the Malaysian authorities, Olayinka and Damilola were given silver arm bracelets and extradited to the US to face trial. Olayinka got six years behind bars, with Damilola receiving three.
In addition to their prison sentences, the judge also ordered them to pay restitution of $56,175.44 each (about ₦20,358,214). Back in Lagos, this can buy them around 76,000 heads of lettuce, each.
Joseph Cox and Jason Koebler over at Motherboard wrote a detailed piece titled: “How Hackers and Scammers Break into iCloud-Locked iPhones“. In this piece they delved into the world of thugs stealing iPhones and what goes into getting them unlocked.
If you are planning to not read their article, at least know this:
If your iPhone / iPad is stolen, the thug typically can’t do anything with it unless they have your unlock code or iCloud password. (Read the full piece to see why I say ‘typically’). This means they can’t factory reset it to sell it on.
However, there is a fairly good chance that the thief might target you with phishing or other social engineering attacks. Reason: to get you to give up your device lock code or your iCloud account details.
And if you’re thinking: ‘Ah, first world problems, won’t affect us down South’ Think again… same attacks have been running here for the last few years already.
Eskom, our local (South African) electricity provider, is having an interesting week.
First, a guy on Twitter claimed to have found an online database of Eskom that’s exposing customer details. Following attempts to responsibly disclose this, he voiced his concerns in a tweet. However, Eskom has come back stating that the database he identified is not theirs, but they are investigating if the data is…
Second, another guy on Twitter claimed to have identified an Eskom computer which was infected by a RAT. It does not seem like this is a critical system (i.e. SCADA stuff) but rather the computer of a Tannie who shops for Bernina sewing supplies at Makro (based on her desktop icons). But, nevertheless, still not where you want to be.
Finally, our President just announced that Eskom is being split into three separate entities (generation, transmission and distribution). This is in an attempt to prevent the corruption-ridden entity from dragging the entire country’s economy down the pooper. Not that it has anything to do with points one and two, but now you know.
And lastly… I’ll leave you with some wise electricity related words:
If you can’t fix it with a hammer, it’s an electrical fault.
Inspired by Timothy Ferriss’ book Tribe of Mentors, Marcus compiled a list of the fourteen most common questions he gets asked about cybersecurity. These questions were then posed to seventy notable InfoSec practitioners, with their responses recorded across more than four hundred pages in Tribe of Hackers.
Question number two caught my eye:
“What is one of the biggest bang-for-the-buck actions that an organization can take to improve their cybersecurity posture?”
Assuming the 70 have seen some stuff over the years, I thought this would be good advice to follow for most companies. I was also interested to see if there would be any commonalities between the answers, so I read through the seventy responses and compiled a Top 7 list of common responses.
Again, go get the book, the proceeds are going to charity after all.
So, here we go:
The Top 7 Bang-For-Your-Buck Actions To Improve Your Security Posture.
For each of the Top 7 Bang-For-Your-Buck responses, I’ve quoted some comments from the answers. However, read the book for the full responses and more in-depth reasoning.
Number 7_ Conduct Risk and Threat Assessments (4 mentions) “Once an organization identifies and quantifies risks and the assets associated with their key function(s), it becomes inherently easier to identify potential causes of a critically impactful incident.” – Lesley Carhart
Number 6_ Hire Good People (6 mentions) “Hire good people. You will never spend money on something more effective within this domain than talented people.” – Ben Donnelly
Number 5_ Asset Management (7 mentions) “You can’t protect it if you can’t find it” – Cheryl Biswas
Number 4_ Least Privilege | Limit Administrative Access (8 mentions) “Get users out of the local administrators group” – Jake Williams
Number 3_ Do The Basics (9 mentions) “There’s a lot of talk about the basics. If the basics were easy, everybody would be doing them. But I think they’re still worth calling out, even though they are difficult.” – Wendy Nather
Number 2_ Security Culture (11 mentions) “Culture change impacts behavior, incentive models, accountability, and transparency — and myriad other critical enablers that help to mature and improve cybersecurity programs. Until organizational culture — comprised of values and behaviors—is substantially reformed, cybersecurity failures will continue to abound.” – Ben Tomhave
Number 1_ Security Awareness Training (14 mentions) “I have gotten the best return on investment from security awareness training.” – Brad Schaufenbuel “Invest in educating employees. Awareness goes a long way in a world where lying and “social engineering” are the key to most doors.” – Edward Prevost