Purple Team: About Beacons

As the Director of Offensive Security Services at Critical Informatics, it is my job to understand and emulate the adversaries that threaten our clients. Adversary sophistication varies broadly: some adversaries are incredibly advanced nation-states, while others are attackers of opportunity with less sophisticated tools, skillsets, and organization. Further confusing the matter, some highly advanced adversaries will “dumb down” their tradecraft for less sophisticated targets, something we’ve seen nation-states do in order to preserve their more advanced tactics, techniques, and procedures should a capable incident response team get involved. Custom RATs are expensive, and there is no sense wasting your Cadillac toolkit on a low-priority target.

To demonstrate one of the common and sometimes sophisticated attack characteristics, beaconing capabilities and indicators, I’ll be using Empire, a PowerShell- and Python-based Remote Access Tool (RAT). Empire utilizes native PowerShell on Windows systems and Python on macOS (yes, there is malware for Mac).

The purpose of this post is to investigate common Command & Control (C2) network traffic signatures, as well as to identify methods for evading blue team (network defender) pattern analysis. This will not be an exhaustive list of tactics, techniques, and procedures (TTPs), but rather a small sample for education and training purposes.

What is beaconing?

Beaconing is when malware communicates with a C2 server on some predetermined interval, asking for instructions or exfiltrating collected data (an asynchronous model, as opposed to a persistent interactive session). The C2 server hosts instructions for the malware, which are executed on the infected machine after the malware checks in. How frequently the malware checks in, and what methods it uses for this communication, are typically configured by the attacker.
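The check-in loop at the heart of most beacons is simple. Here is a minimal, defanged sketch of the idea in Python; the transport and tasking functions are injected stubs for illustration, not Empire’s actual implementation:

```python
import time

def beacon_loop(fetch_tasking, execute, interval, max_beacons=None):
    """Core beacon loop: check in with the C2, run any tasking, sleep, repeat."""
    count = 0
    while max_beacons is None or count < max_beacons:
        tasking = fetch_tasking()   # "check in": ask the C2 server for instructions
        if tasking is not None:
            execute(tasking)        # run whatever the operator queued for us
        count += 1
        time.sleep(interval)        # wait out the configured check-in interval

# Stub transport so the sketch runs without generating any real network traffic:
queued = ["whoami"]
results = []
beacon_loop(fetch_tasking=lambda: queued.pop() if queued else None,
            execute=results.append,
            interval=0, max_beacons=3)
# results now holds the single queued command; the other check-ins were empty.
```

In a real implant, `fetch_tasking` would be an HTTP request, a DNS lookup, or a cloud-API call, which is what produces the periodic traffic discussed below.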

There are numerous communication protocols that can be used for C2. A few examples include HTTP/S, SSH, DNS, SMTP, and cloud services like Dropbox, Google Sheets, Gmail, and Twitter. While beaconing may use common services like Twitter or Dropbox, it is NOT using those services as intended: the malware reaches out to pre-configured accounts for new instructions, perhaps a special file on Dropbox or new tweets from a Twitter account. Each protocol has its own advantages and disadvantages. The important thing to note here is that common whitelisting and blacklisting techniques often fail against a sufficiently advanced adversary, precisely because the C2 channel rides on widely used services like Twitter that are rarely blocked. An attacker will determine through reconnaissance and intelligence gathering which methods are likely to work, and pre-configure their malware payloads to utilize those which will bypass common firewall rulesets.
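To make the “special file or tweet” pattern concrete, here is a toy dead-drop decoder. The convention, that the operator appends a base64-encoded command as the resource’s last line, is purely an assumption for illustration; real implants use whatever scheme their operator configured:

```python
import base64

def decode_dead_drop(raw):
    """Pull tasking out of an innocuous-looking shared resource.

    Assumed convention (for illustration only): the operator posts a
    base64-encoded command as the last line of the file or post.
    """
    stripped = raw.strip()
    if not stripped:
        return None
    last_line = stripped.splitlines()[-1]
    try:
        return base64.b64decode(last_line, validate=True).decode()
    except Exception:
        return None  # nothing queued, or the last line is just normal content

# The operator edits a shared file (or posts a tweet); the implant polls it:
posted = "meeting notes\n" + base64.b64encode(b"whoami").decode()
```

To a firewall, the implant’s poll is indistinguishable from a legitimate client fetching the same file, which is why service-level whitelisting fails here.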

How is beaconing used?

The check-in interval for beacons varies, and is dependent on the expected sophistication of the target and the goals of the attacker. Current tradecraft suggests using at least one beacon in “long haul” mode, with check-ins several hours to days or weeks apart. Long haul beacons are specifically configured to evade Digital Forensics and Incident Response (DFIR). The other, more common configuration for an active attack is a 1-5 minute check-in interval for a quicker operational tempo (a 5-day engagement vs. a 5-month engagement). Empire defaults to 5 seconds for its HTTP/S beacons, and 60 seconds for its Dropbox channel.

Beacons configured in a “long haul” mode can be the most difficult to discover. Operated by a sufficiently advanced attacker, they will be using separate C2 channels and domains, and will be operating on machines with very little evidence of compromise. A favorite method is to use an in-memory-only beacon on a critical server that rarely gets rebooted. The beaconing software is not written to disk, and disappears on reboot. Other methods include adding a trojan to startup items, using Windows’ own task scheduler, or hijacking common applications’ Dynamic-Link Libraries (DLLs). These have a nasty habit of allowing the attacker back into the network after an incident response takes place.

What the traffic looks like

What this traffic looks like on the wire is largely dependent on how advanced the adversary is, or wants to appear. Frankly, any cloud service that allows read and write access to a resource can be used for C2.

The C2 channel that is used will ultimately determine what the traffic looks like on the wire; however, there are certain abnormal patterns that can occur. For instance, in a recent blog post about using Dropbox as a C2 channel, I was able to see the extremely consistent traffic produced by the Empire beacon vs. the normal Dropbox desktop client.

Below is normal Dropbox desktop client check-in activity. The client is presumably checking Dropbox.com for any changes to files, and updating Dropbox.com to reflect any local changes (there were none).

This appears as a pseudorandom spread across the fifteen minutes of capture with some unexplained variation in packet sizes.

Now, let’s take a look at what the Empire Dropbox C2 channel looks like. This is using the Dropbox API in PowerShell on a Windows system. No tasks or data were transferred; this is simply beacon activity from initial launch through 15 minutes of beaconing at a 60s interval.

What sticks out to me is an extremely consistent pattern, where the beacon interval is exactly 60s apart and the packet sizes are also almost exactly the same.

Below is an HTTP beacon with the default check-in time of 5 seconds. This ran for about 10 minutes before launching “bypassuac”, which launched a second beacon with high-integrity (admin) privileges. We can see the denser traffic graph at this time. The first beacon was then killed and only the high-integrity beacon remained. The remaining beacon was changed to check in every 60 seconds, and then every 60 seconds with 50% jitter.
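Jitter randomizes each sleep around the base interval. A sketch of the common ±percentage scheme follows; the exact math is an assumption for illustration and may differ from Empire’s implementation:

```python
import random

def next_delay(base_interval, jitter_pct):
    """Return a sleep time of base_interval +/- up to jitter_pct percent."""
    spread = base_interval * (jitter_pct / 100.0)
    return base_interval + random.uniform(-spread, spread)

# A 60s interval with 50% jitter sleeps anywhere from 30s to 90s,
# while 0% jitter always sleeps exactly 60s:
samples = [next_delay(60, 50) for _ in range(5)]
```

Each check-in draws a fresh delay, so the rigid 60-second spacing in the capture dissolves into an irregular spread.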

Again, we can see the highly regular patterns in the default behavior of an Empire beacon. Adding “jitter”, randomizing the check-in interval by a configured percentage, can help, but we’re still seeing packets of exactly the same size.
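Defenders can turn that regularity into a detection by computing simple statistics over the inter-arrival times of flows to a given destination; near-zero variance relative to the mean is suspicious. A rough sketch, with thresholds that are illustrative rather than tuned:

```python
import statistics

def looks_like_beacon(timestamps, max_cv=0.1, min_events=5):
    """Flag a flow whose inter-arrival times are suspiciously regular.

    max_cv is the highest coefficient of variation (stdev / mean) we will
    tolerate; machine-driven check-ins cluster near zero, while human or
    well-jittered traffic scores much higher.
    """
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean <= 0:
        return False
    return statistics.pstdev(gaps) / mean <= max_cv

# A 60s beacon with small network delay vs. irregular user-driven traffic:
beacon = [i * 60 + 0.2 * (i % 3) for i in range(10)]
human  = [0, 4, 95, 120, 400, 410, 900]
```

Pairing this with a check for repeated identical packet sizes would catch the jittered-but-uniform traffic described above.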

Furthermore, looking at the packet capture (pcap), there are indicators of default Empire settings. While Empire is useful for Red Teams, and not necessarily a malicious tool, I’d encourage Blue Teams to become familiar with it and use it for writing rules and looking for indicators of compromise (IOCs). Understanding open-source and off-the-shelf tools will only help when trying to understand malware in a broader context. As operation “Cobalt Kitty” demonstrated, Cobalt Strike, a Commercial Off The Shelf (COTS) RAT, was used in an industrial espionage campaign.

Manipulating default traffic indicators

ThreatExpress has an excellent post on modifying Empire to better obfuscate its behavior. The offending default settings that make the activity more likely to be caught include things like an inconsistent user agent string, odd page requests for an IIS server, and unusual responses for the stated server type. Whether or not you do this will depend on your engagement objectives; there will of course be cases when you want, or need, to get caught in order to properly train your blue team.

Part of a true Red Team engagement includes adversary simulation. Empire is perhaps not as flexible as Cobalt Strike, but it does provide the ability to make modifications. Knowing existing attacker TTPs will allow a good Red Team to emulate those TTPs for the Blue Team and provide them with quality, real-world training.

Learn patience

Finally, from either a Red or Blue perspective, a good amount of patience is important. From a Red Team perspective, your long haul beacon may be the thing that gets you back in after a sustained DFIR cycle. From a Blue Team perspective, it’s important to remember that if your network has been compromised, you must assume all assets are breached unless you can somehow verify that each machine has NOT been compromised. Proving a negative is incredibly difficult and frustrating work. Strong role-based access control and network segmentation can go a long way towards keeping attackers from moving laterally from a compromised asset to critical systems.

Jeremy Johnson
Offensive cybersecurity specialist and author of the blog: https://bneg.io/