Between an Incident Response and a Break-up

Long time again… Sometimes I feel like I am gathering inspiration for too long, and it starts dissipating after a while.

There is a perfect timing – a sweet spot – for writing a poem or a Python package. If you miss it, that’s it, you missed it… You need to gather your inspiration all over again…

It’s been almost a year since my first post (Dating as a form of Penetration Testing). It is time for a break-up parallel. Here we go!


Incident Response as a form of a Break-up

The Setup

Sometimes bad things happen. Those bad things vary in type, but a security incident in a company can be a very bad thing. A Bad-like-Jesse-James thing. A company can lose thousands of $ because of a spear-phishing campaign, or a compromised account on the database server.

A break-up, on the other hand, is a more straightforward thing. You gotta get separated from someone or something beloved (I won’t forget the moment I gave up my ThinkPad for my corporate machine).

For a guy the beloved thing is his girlfriend (or even his boyfriend); for a sysadmin it’s the rootkit‘d File Server (he spent days and mojo building).

And you gotta get separated for sure… The Relationship/Server is no good anymore. It actually does more harm than good.

Going Deeper

Technically speaking, there are several phases, both on an incident response, and on a break-up. And if you think of it hard enough, they seem to be the same phases…

SANS documents the Incident Response Phases in the GCIH cert material as follows:

  • Identification
  • Containment
  • Eradication (Cleaning Up)
  • Recovery
  • Lessons Learned

Hell, doesn’t this sound awfully familiar already?

So, let’s shine! Our star tonight: the Separated & Hacked SysAdmin

Identification



I don’t actually feel the same way I used to with her. I feel nothing when I touch her… I don’t care if I will be seeing her tonight or not

# ps aux
root 1 0.0 0.0 19356 648 ? Ss May20 0:02 /sbin/init
root 2 0.0 0.0 0 0 ? S May20 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S May20 0:03 [migration/0]
root 4108 0.0 0.0 11716 548 ? S May20 0:02 /usr/sbin/.httpd
apache 37008 0.0 0.0 213148 11328 ? S May21 0:04 /usr/sbin/httpd

It is this time! The shivers you get. The mindblow. The spine that tingles a bit. The urge to cry… The realization that you have no tears…

It happened on a Monday. It had to be about 19:something. He’d stayed at work late and had to go to her place for the night. The moment he was typing the ps command (for no reason, like every Linux guy who is bored in front of the #), he was thinking of his girlfriend…

Stayed late just to have some time alone. Didn’t have work to do. Didn’t need to be at work at all. But, for some reason, he couldn’t get his head around the forthcoming sleepover. It’d be the same thing again. The same meal, the same sex, the same “Calvin and Hobbes” comic in the WC… He wouldn’t do it.

Then he grasped the ps output. He couldn’t actually believe what he was staring at. At first he was like “Hey, my httpd ain’t running as root. I fixed the config two days ago“. And then he went: “what the fuck is this bro?” (he loves PC Principal from South Park).

After the shock he had two phone calls to make. More like three. The first one was to his girlfriend. Talked about the incident. He couldn’t come over… He was somewhat hopeful for this. Somewhat… Sometimes digital forensics are better than sex. But that night… That night anything was better than sex!

And then he called the Incident Response team, and his best friends. Ordered pizza from the office to his home (that makes it four phone calls). Arranged a meeting for the next morning with the Incident Response guys and headed home, where his friends were also heading after the Bromance Alarm.

He had to figure out both issues the following day. How come he couldn’t see her face in his mind? How was this rogue process planted? He had to get his shit together…

Containment



SANS got me on this again! SANS explains the Containment Phase as “stop the bleeding“. The SANS guys must be really experienced with break-ups, apart from sleuthkit.

So, the sys-admin guy broke up the next morning. He was the smart type of guy – he didn’t suppress his feelings. He felt like it, so he broke up!

Ironically, he did so over the phone, while the Incident Response guys were unplugging the Ethernet from his server (after gathering live memory dumps of course)…

The containment phase began the same evening. He gathered the gang again and went to a pub. Got almost wasted on just beers and nachos. Then he introduced himself to a stranger, while his friends kept provoking him like high-schoolers (do we –men– ever actually escape the high-school age?). He told her everything, constantly burping like Rick from Rick and Morty.

He knew nothing about what was going to happen next… He knew nothing about the Nightingale Syndrome the woman was under. To make a long story short, after almost crying in her arms, they got pre-laid in the pub’s WC and completely laid at her place…

She became his “rebound girl” for a while. He slept over at her house for almost a week, going home only to pick up things like a toothbrush and clothes. Too many memories in there… It was time for…

Eradication (Cleaning Up)



The Friday night was a bummer. All day Friday at work he was trying to remove the malware from the compromised server. There were also crontabs, services, even a kernel module found by the forensics team…

He kept removing shit and more shit kept spawning. Rogue binaries in /root/bin/ and bogus entries in lsmod output… All day Friday he was removing malware…

Then he got home. A Friday evening. His friends were all busy and he kind of missed his ex.

What follows is often seen in movies and teenage video-clips. He got his Zippo and went to the bathroom with all the pictures he had from the vacations they had taken together. He set them on fire in the bathtub. Later he brought presents and all the romance-shit postcards and letters from the Erasmus era. Her old sunglasses, her toothbrush, 3 pairs of socks… He kept burning stuff all night and more kept spawning…



He took days burning stuff and drinking Mountain Dew or Dr Pepper. He also made a new friend – the pizza boy. Gained some weight, stopped going out, missed the NBA Finals. He was a mess for some time…

Recovery


Then, suddenly, one morning he woke up motivated! Went for a walk by himself before work. Did some push-ups before putting his jeans on. It was time to finally recover…

Went to work and rebuilt the whole server. LDAP authentication, public-key-only authentication on SSH, remote syslogging, etc.

Then he got home. Cooked a meal for himself, after a long time. Brewed some coffee. Checked out Hacker News from his Android, like he used to, even before he met his ex.

Later that day, he went to the pub, alone. Got some beer and sat by the window, alone. Nothing happened. No one talked to him and he talked to no one. He did some thinking, all by himself.

Before going to bed, ’round midnight, he remembered he loved Kerouac and Burroughs. Their books had been on the shelf collecting dust for too long. It had been years since he last read his favorite books. Goddammit, what had happened to him…

Fell asleep while reading Junky, feeling nostalgic. It was the first day of the rest of his life…


Lessons Learned

He went to work earlier that morning. Determined. Whatever fucked that server up wouldn’t happen again. At least not without him noticing.

Utilized syslog everywhere. Everywhere! Even on the coffee machine. Spent countless hours setting up Kibana, added Suricata to the firewall appliance and FINALLY created VLANs!

The thing went personal. This was not just his company’s network, it was his personal fortress.


He stayed up late at work, and when he got home, he got a beer and did some more thinking. Why did he abandon everything while being with his ex-girlfriend? Why did he give up all the music he used to like? His favorite books? His friends? His role as linux-guru sys-admin?

Felt a bit desperate about why he had left everything for her. Couldn’t understand why he lost the best bits of himself just by being with her… He wouldn’t do that again.

The thing went personal. He was not just a body and soul looking forward to mating again; he was his personal fortress.




Highly inspired by “How To Break Up” – Tales Of Mere Existence – and my life.

Thanks for reading my 12th article.




Reinventing the Wheel for the last time. The “covertutils” package.

A colleague reviewed my article and found it hopeless. I couldn’t really blame him; he is a real rock when it comes to reporting, teaching and lecturing about security topics.

So I revised my article, according to his remarks, which mainly were: “You are describing a damn protocol – add some PICTURES goddammit!”. Enjoy…


The motivation

Those last months I came across several GitHub projects with RAT utilities, reverse shells, DNS shells, ICMP shells, anti-DLP mechanisms, covert channels and more. Reading other people’s code gave me the ideas below:

Those things have to support at least an encryption scheme, some way of chunking and reassembling data, maybe compression, networking, error recovery (not to mention working-hours operation – Empire agent, certificate pinning – meterpreter, and unit identification – pupyRAT).

And they all do! Their authors spent days trying to recreate the chunking for the AES Scheme, find a way to parse the Domain name from the exfiltrating DNS request, recalculate IP packet checksums and pack them back in place, etc…

And then it got me. A breeze of productivity. That crazy train of creation stopped just before my toenails. The door opened…

What about a framework that would handle all those by itself?

A framework that would be configurable enough to create everything from a TCP reverse shell, to a Pozzo & Lucky implementation.

A framework without even the most stable external dependencies – one that uses only Python built-ins.

And all those without even thinking of encryption, message identification, channel password protections and that stuff we hate to code.

Then I started coding. Easter found me coding. Then Easter ended and I was still coding. Then I didn’t like my repo and deleted it altogether. I recreated it and did some more coding. Spent a day trying to support Python 3 and gave up after 10 hours of frustrating coding.

And finally it started working. The “covertutils” package was born. A proud python package! And here it is for your amusement:

And here are the docs:

Let’s get to it…


Basic Terminology of a backdoor

So let’s break down a common backdoor payload. In a backdoor scheme we have mainly two sides: the one that is backdoored and the one that uses the backdoor.

The host that is backdoored typically runs a process that gives unauthorized access to something (typically OS shell). This process and the executable (binary or shellcode) that started it is the “Agent“.

The host that takes control of the backdoored machine typically does so using a program that interacts with the Agent in a specific way. This program is the “Handler” (from exploit/multi/handler anyone?)

Those two have to be completely compatible for the backdoor to work. Notice how Metasploit’s exploit/multi/handler asks for the payload that has been run on the remote host, just to know how to treat the incoming connection. Is it a reverse_tcp VNC? A stageless reverse_tcp_meterpreter?

Examining the similarities of those two (agents and handlers) helped me structure a python API, that is abstract, easy to learn, and configurable.


The covertutils API

All inner mechanics of the package end up in 2 major entities:

  • Handlers
    Which are abstract classes that model Backdoor Agent’s and Handler’s behavior (beaconing, silent execution, connect-back, etc).

    Attention passengers: The Handler classes are used to create both Agents and Handlers.

  • Orchestrators
    Which prepare the data that has to travel around. Encryption, chunking, steganography, are handled here.

With a proper combination of those two, a very-wide range of Backdoor Agents can be created. Everything from simple bind shells, to reverse HTTPS shells, and from ICMP shells to Pozzo & Lucky and other stego shells.


The data that is transferred is also modeled in three entities:

  • Messages
    Which are the exact things that an agent has to say to a handler and vice-versa.
  • Streams
    Arbitrary names, which are tags that inform the receiver of a specific meaning of the message. Think of them almost like meterpreter channels, with the only difference that they are permanent.
  • Chunks
    Which are segmented data. They retain their Stream information though. When reassembled (using a Chunker instance) they return a (Stream, Message) tuple.

The Orchestrator

Orchestrators can be described as the “objects that decide what is gonna fly through the channel“. They transform messages and streams to raw data chunks. Generally they operate as follows:


The chunks can then be decoded to the original message and stream by a compatible Orchestrator instance. They are designed to produce no duplicate output! Meaning that all bytes exported from this operation seem random to an observer (one that doesn’t have a compatible Orchestrator instance available). This feature was developed to avoid any kind of signature creation upon the created backdoors, when their data travels around networks…

The code that actually is needed for all this magic is the following:

>>> message = "find / -perm -4000 2>/dev/null"
>>> sorch = SimpleOrchestrator("Pa55w0rd!", streams = ['main'])
>>> chunks = sorch.readyMessage( message, 'main' )
>>> for chunk in chunks :
...     print chunk.encode('hex')

And to decode all this:

>>> sorch2 = SimpleOrchestrator("Pa55w0rd!", streams = ['main'], reverse = True)
>>> for c in chunks :
...     stream, message = sorch2.depositChunk( c )
>>> stream, message
('main', 'find / -perm -4000 2>/dev/null')
  • Note the reverse = True argument! It is used to create the compatible Orchestrator. Identical objects are not compatible, due to the duplex OTP encryption channel.


The Handler

Handler‘s basic stuff is declared in an Abstract Base Class, called BaseHandler. There, 3 abstract functions are declared, to be implemented in every non-abstract subclass:

  • onMessage
  • onChunk
  • onNotRecognised

When data arrives at a Handler object, it uses the passed Orchestrator object (Handlers get initialized with an Orchestrator object) to try to translate it to a chunk. If it succeeds, the onChunk(stream, message) method is run. If the received data can’t be translated to a chunk, then onNotRecognised() runs.
Finally, if the raw data is successfully translated, the Orchestrator creates the actual message when its last chunk is received. The onMessage(stream, message) method is run when a message is fully assembled.
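To make the dispatch above concrete, here is a minimal sketch of the flow (the class names and the toy “chunk” format below are mine, for illustration only – this is not covertutils’ actual code, just the onChunk/onMessage/onNotRecognised dance it describes):

```python
# Toy stand-in for an Orchestrator: "chunks" are just 'stream:message'
# strings, and every message fits in a single chunk. Illustration only.
class SketchOrchestrator:
    def depositChunk(self, raw):
        if ":" not in raw:
            return None  # data not recognised
        stream, message = raw.split(":", 1)
        return stream, message


class SketchHandler:
    """Sketch of the dispatch logic: translate raw data via the
    Orchestrator, then fire the matching callback."""

    def __init__(self, orchestrator):
        self.orch = orchestrator
        self.log = []

    def dataArrived(self, raw):
        result = self.orch.depositChunk(raw)
        if result is None:
            self.onNotRecognised()
            return
        stream, message = result
        self.onChunk(stream, message)
        # Single-chunk messages in this toy, so the message is complete
        self.onMessage(stream, message)

    def onChunk(self, stream, message):
        self.log.append(("chunk", stream))

    def onMessage(self, stream, message):
        self.log.append(("message", stream, message))

    def onNotRecognised(self):
        self.log.append(("unknown",))


h = SketchHandler(SketchOrchestrator())
h.dataArrived("main:uname -a")   # recognised -> onChunk + onMessage
h.dataArrived("random noise")    # not recognised -> onNotRecognised
```

The real package does the chunk reassembly and crypto inside the Orchestrator, of course; the sketch only shows which callback fires when.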

The combined idea of a backdoor can be seen in the following image (fullscreen might be needed):



The Internals

How Streams are implemented

The Idea

Data needs to be tagged with a constant for the handler to understand that it is meant to consume it, as a handler may receive data that is irrelevant, not sent from the agent, etc…

The problems in this idea are several. Bypassing them created the concept of the stream.

First of all, the constant has to be in a specific location in the data, for the handler to know where to search for it. That brings us to the second thing:

If a constant is located at a specific data offset, it defines a pattern. And a pattern can be identified. Then escalated to analysts. Then blacklisted. Then publicly reported and blocked by public anti-virus products.

So for the tagging idea to work well, we mustn’t use a constant. Yet the handler has to understand a pattern (that can’t be understood by analysts). Considering that both the Agent and Handler share a secret (for encryption), the solution is a Cycling Algorithm!

The StreamIdentifier Class

When sharing a secret, infinite secrets are shared. If the secret is pa55phra53 then we share SHA512(“pa55phra53“) too. And MD5(“pa55phra53“). And SHA512(SHA512(“pa55phra53“)). And MD5(SHA512(“pa55phra53“+”1”)). You get the idea.

So the StreamIdentifier uses this concept to create tags that are non-repetitive and non-guessable. It uses the shared secret as seed to generate a hash (the StandardCyclingAlgorithm is used by default, a homebrew, non-secure hasher) and returns the first few bytes as the tag.

When those bytes have to be recognized by a handler, the StreamIdentifier object of the handler will create the same hash, and do the comparison.

The catch is that when another data chunk has to be sent, the StreamIdentifier object will use the last created hash as seed to produce the new tag bytes. That makes the data-tag a variable value, as it is always produced from the previous tag used plus the secret.

A sequence of such tags is called a Stream.
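The cycling idea can be sketched in a few lines. Note the hedge: MD5 below is purely for illustration (covertutils defaults to its own homebrew StandardCyclingAlgorithm, not MD5), and the function name is mine:

```python
import hashlib

SECRET = b"pa55phra53"

def next_tag(previous, secret=SECRET, tag_len=2):
    """Derive the next tag from the last tag plus the shared secret.
    MD5 here is illustrative only -- the package's default hasher is
    the StandardCyclingAlgorithm, not MD5."""
    return hashlib.md5(previous + secret).digest()[:tag_len]

# Agent side: each tag is seeded by the previously used tag
tag1 = next_tag(SECRET)
tag2 = next_tag(tag1)

# Handler side: same secret, same sequence -- so the tags are
# recognisable to the handler but look random to an observer
assert tag1 == next_tag(SECRET)
assert tag2 == next_tag(tag1)
```

Since every tag depends on the previous one plus the secret, there is no constant byte pattern for an analyst to signature.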

Multiple Streams

Nothing stops the implementation from having multiple streams (in fact there is a probability pitfall, explained below…)! So instead of starting from “pa55phra53” and generating a single sequence of, let’s say, 2-byte tags, we can start from “pa55phra531”, “pa55phra532”, “pa55phra533” … and create several such sequences (streams).

The StreamIdentifier will not only identify that the data is consumable, but will also identify whether a tag has been produced from “pa55phra531” or “pa55phra533”. This can be used to add context to the data. Say:

  • Everything produced from “pa55phra531” will be for Agent Operation Control (killswitch, mute, crypto rekeying, etc)
  • Everything produced from “pa55phra532” will be run on an OS shell
  • Everything produced from “pa55phra533” will be shellcode that has to be forked and run
  • Goes on and on…

Now the messages themselves do not need to follow a specific protocol, like:

shell:uname -a

they can be raw (saving bytes on the way), relying on the stream to deliver the context (when writing a RAT’y agent, several features have to be implemented; streams come in handy with this).

The streams are named with user-defined strings (e.g “shell”, “control”, etc) to help the developer.


The Pitfall

Tags have to be small. They shouldn’t eat too much of the bandwidth. They are like protocol headers in a way. Not so small that they can be guessed or randomly generated by a non-agent, not so big that the raw data becomes a small part of each chunk.

When implementing a ton of features using streams (say 8 features), using a 2-byte tag (the default) creates a small chance of collision. Specifically, a 1/2341 chance (still more probable than finding a shiny pokemon in Pokemon Silver – 1/8192).
And to make things worse: this chance is not for the whole session, but per sent chunk (as tags are cycling for every chunk), so it is quite high!
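The 1/2341 figure checks out as a birthday-style collision among the 8 per-stream tags in the 2-byte tag space. The arithmetic below is my own back-of-the-envelope check of that reading, not code from the package:

```python
from math import comb

# Back-of-the-envelope check of the 1/2341 pitfall figure, read as a
# birthday-style collision between any two of the 8 stream tags.
tag_space = 256 ** 2        # 2-byte tags -> 65536 possible values
streams = 8                 # say, 8 features, one stream each

pairs = comb(streams, 2)    # 28 pairs of streams that could clash
collision_chance = pairs / tag_space

print(round(1 / collision_chance))   # -> 2341, per sent chunk
```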

The Solution

Well, maths got us down. For so many features, an extra byte (3-byte tags) will minimize the chances tremendously. There is also an option to make the tags constant. This way the above chance counts for the whole session, making a collision quite hard.


Handler Types

At the time of writing, there are several Handler classes implemented, each modelling a specific backdoor behavior.

  • BaseHandler
    This is the Base Class that exposes all abstract functions to the sub-class.
  • FunctionDictHandler
    Gets a (stream -> function) dict and for every message that arrives from stream x, the corresponding function is called with message as argument.
  • InterrogatingHandler
    This handler sends a constant message across to query for data. This is the way the classic reverse_http/s agents work. They periodically query the handler for commands, that are returned as responses. Couples with the ResponseOnlyHandler.
  • ResettableHandler
    This Handler accepts a constant value to reset all resettable components to their initial state: the One Time Pad key, the stream seeds, the chunker’s buffer, etc.
  • ResponseOnlyHandler
    This is the reverse of the InterrogatingHandler. It sits and waits for data. It sends data back only as responses to received data. Never Ad-Hoc.
  • StageableHandler
    This is a FunctionDictHandler that can be extended at runtime. It accepts serialized functions in special format from a dedicated stream, to add another tuple in the function-dict, extending functionality.
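To ground the FunctionDictHandler idea, here is a tiny sketch of the (stream -> function) dispatch it describes. The function names and dict below are mine, for illustration – not covertutils’ actual code:

```python
import subprocess

# Illustrative (stream -> function) dispatch, in the spirit of
# FunctionDictHandler -- names and wiring are hypothetical.
def control(message):
    return "control: got %r" % message

def shell(message):
    # The 'shell' stream carries raw OS commands, per the article
    return subprocess.check_output(message, shell=True).decode()

function_dict = {"control": control, "shell": shell}

def on_message(stream, message):
    """For every message arriving on stream X, call the mapped function
    with the message as its argument."""
    return function_dict[stream](message)

print(on_message("shell", "echo hello"))
```

Because the context rides on the stream tag, the messages themselves stay raw – no `shell:` prefix protocol needed.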



The objects that handle the raw-data-to-(stream, message) conversion are the Orchestrators.

They have some basic functionality of chunking, compression, stream tagging and encryption. They provide 2 methods, the readyMessage(message, stream) and the depositChunk(raw_data). The first one returns a list of data that are ready to be sent across (tagged, encrypted, etc), and the second one makes the Orchestrator try to consume data received and returns the (stream, message) tuple.


End of Part 1

The whole package includes several features that are not even mentioned in this article (Steganography, Data Mangling – the StegoInjector and DataTransformer classes –, etc.) that, while implemented, aren’t properly documented yet, so their internals may change.

They will be the subject of another post, along with a Pozzo & Lucky implementation using only coverutils and Raw Sockets.


In the meantime, there are some Example Programs for you to play around with!

Feedback is always appreciated…


A Git Tutorial of Human Psychology

In Image and in Likeness

Catching Paragraph
that uses several seemingly irrelevant pieces of information to hook the reader.

The Bible says that [G|g]od created humans “In Image and in Likeness“. While I am not that huge a fan of the Bible, I do believe that some things are not randomly written in this book.

“In Image and in Likeness” is the only way to structure, design, and create something. No wonder God created people “In Image and in Likeness” of himself. He couldn’t do it any other way…


Git is no Exception

Creating Git version control was also a miracle (thanks again, Linus). And it was created to resemble human nature and psychology as well. I don’t claim that the author and developers had this in mind when they started their codebase, but I do believe that they couldn’t help it.
Humans are doomed to duplicate themselves. In more than one way…


Today’s Proof of Concept

All Git operations have human-side equivalents. Equivalents that resemble life choices and personal mind tricks. Branching, committing, rebasing, all are ways a person feels and acts about things.


The Childhood

Git init

Let there be light” (this is the last biblical reference, promise).
We can parallelize a person with a git repo. So here is what happens when a person is born:

God@Earth# NEW_PERSON="person-$(date +%s)-3"
God@Earth# mkdir $NEW_PERSON; cd $NEW_PERSON;
God@Earth# chroot . start_life $NEW_PERSON &

(the start_life executable starts by setting UID != 0, to avoid creating a new god.
This was the bug that created the Titans, Pantheon, Egyptian Gods and more, in the early years of development)

Because god runs Linux, and that’s for sure…

Then the person has its own process… It is alive! And this is what happens…

$ ls
$ ls -a
. ..
$ git init
Initialized empty Git repository in /.git

Here, we have a new proje… person! All initialized and ready to fulfill its life goals…


Git add

As a new project, at first, a person adds everything inside the directory into the repo. And this isn’t always good…

$ ls
mother.tongue    mother.bad_habits    father.drinking_problem
$ git add *
$ git status
On branch master

Initial commit

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

    new file:   mother.bad_habits
    new file:   father.drinking_problem 
    new file:
    new file:
    new file:
    new file:   mother.tongue

A child sucks everything in its environment to slowly develop a personality. And carries all added things with it. But a personality isn’t actually created before the…

Initial Commit

And here we have the end of Childhood… A child with a distinct personality is a teenager. Almost not a child anymore…
And here is the line that differentiates the two:

$ git commit -m "Built personality PoC"
[master (root-commit) 46ae33f] Built personality PoC

 6 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644
 create mode 100644
 create mode 100644 mother.tongue
 create mode 100644
 create mode 100644 father.drinking_problem 
 create mode 100644 mother.bad_habits

The Early Years

git commit

A commit happens every time a personal decision is made. As the commit is the most common command in git, it is also the most common mental condition in a person’s life. You commit every time the consequences of your actions will affect you as a person. Just like a commit in git, it is a command that defines a state of you.

The .gitignore file!

A teenager starts to be more selective when adding things to his/her life. Tries to evaluate whether something is crucial for its development, or not.
A typical example of this is the following:

$ echo "mother.*" > .gitignore

This way a teenager permanently ignores all changes in its mother’s behavior, effectively carving its own way. One can add things to the .gitignore file as experience comes:

$ echo "*.assholes" >> .gitignore

Here we added the line to ignore all assholes, and prevent them from changing our life.


git branch

There are circumstances that you have to treat like a whole new person. There are events that need a fresh you when you first get into them, like relationships or hobbies. Events where every change they make to you won’t affect the other aspects of your life.

Let’s say that the teenager we left off is a boy and is now ready to meet his first love, “Lily”:

$ git status
On branch master
Untracked files:
  (use "git add <file>..." to include in what will be committed)

He is going to be a boyfriend, trying to leave the rest of his life intact. He has to create a new life branch.

$ git branch boyfriend_of_lily

git checkout

Now, every time he is with Lily he can just:

$ git checkout boyfriend_of_lily

and develop his relationship with her. Adding some Lily-specific files, or changing some already created ones. Also all commits done when with Lily, will affect their relationship only, not the rest of his life (hopefully).

People have countless branches. Think about hobbies, jobs and people that need a specific version of us to operate as expected… I do not treat my colleagues like my parents, and I don’t cook with the same attitude I play basketball with.

Sometimes a hobby, a person, a general condition becomes so vital to us that it is not “yet another thing” we do. It is something special, something really important to us… When this happens for one of our branches, we have to…

git merge

Here is why git shines. When we have a great hobby, that really means a lot to us, we have to merge it into our master branch.

$ git checkout master
$ git merge hobby_that_defines_you

after that, our hobby is included in the master branch, meaning that it is an essential part of ourselves.

Problems start when some branch of ours that we want to merge into master has changed our inner selves in a way that contradicts our personality.
When this happens we have the most serious first world problem:


The Merge Conflict…

Let’s say that:
as a person you are cheerful and generally happy, but then you met that goth girl, who hates smiling and always wears that ring with the skull on it that gives you the creeps.

You are yourself when out with your friends, and you checkout to your Emo branch when with your goth girl! Great, that’s what branches are all about. But then you have to go to a party, where both your friends and your girlfriend will be there. Trying to merge those two branches raises the issue:

$ git checkout party_with_friends
$ git merge goth_girlfriend
Auto-merging attitude
CONFLICT (add/add): Merge conflict in attitude
Automatic merge failed; fix conflicts and then commit the result.
$ cat attitude
<<<<<<< HEAD
Happy and ready for the party!
=======
Look like I hate myself.
>>>>>>> goth_girlfriend

This issue has to be resolved. The way to resolve it is to get to that file and remove anything that doesn’t really belong to you.

I believe that all psychological problems start with such conflicts. When merging incompatible branches of our egos back into master… This is because the heavy development has to be done in the master branch. The heavy development and committing has to be in ourselves. While gaining experience we learn when to merge. We also learn when to…


git rebase

When a huge event like a marriage, a job, a loss or a break-up happens, our whole life is then defined by it. Our personal history can be split into before-the-event and after-the-event periods. We can remember being completely different before the event.

But now that the event has happened and we have plenty of commits on its branch, it is easier to rebase our master branch onto the branch of the new event than to checkout our old and dusty self – the master branch – again.

This is when a rebase happens. When we need to redefine ourselves on top of another event. Notice the difference with the merge. Merge puts some additional things to our master. Rebase redefines our master to include the additional things historically.



And the list goes on!

  • git cherry-pick, when we try to keep only the good stuff from a situation of ours,
  • git blame when we try to find when we made the wrong choice and what went wrong,
  • git tag when we accomplish something memorable.

The next time I come across someone who believes that computer science is far away from human nature (there is such an argument), I’ll answer with 2 words (kinda):

$ git --help

*mic drop*


Information Gathering is not enough. Information storing and sharing is better. Meet GatherOS …

I’ve been absent for a while; switching jobs and analyzing personal goals couldn’t be postponed any longer. Now I am back to the grid! And I got a new tool too!


Why Gathering is a hell of a job…

Information about the target is what keeps the wheel spinning. More info, more attacks, more successful attacks, more shells, moar powah.

That applies perfectly to Vulnhub VMs and 3-4 hour CTFs, but the problem is obvious with assessments that require a team. The scalability isn’t exactly great when Information Gathering has to be done for a network and several hosts. Actually it’s a pain. If you have been there you know; if you haven’t, here are several examples:

- Hey, I started a minus A nmap for the slash 24
- Shit man, I am at 56 percent on the nmap.
- Ok, I control-c'd, what switches?
- Top 100, finished, come to see the results.
(Whole team leans towards a single monitor)
- OK OK, I GOT A SHELL (yelling)
- Great, what user? (yelling)
- wou wou data (www-data)
- What kernel?  (yelling)
- 3.8
- Distro?
- Ubuntu 14.04
- Check SUIDs!!!   (yelling)
- Hey buddy, stop yelling, I know what to do!
- I am not your buddy, pal
- I am not your pal, guy
- I am not your guy, friend
- I am not your...
- I uploaded the file!
- Good, what is the name?
- not *under* a *under* backdoor *dot* php (not_a_backdoor.php)
- Are you a moron? This is a Tomcat shit, why php?
- Who told me that?
- I told you before. (yelling)
- Oh, fuck you! You said that it 's Apache (yelling)
- Yes, it 's "Apache Tomcat" (yelling)

Generally, info-gathering with a team is a mess. Been there several times, yelled like this, got slapped twice, been to jail for ten long years because I killed a guy who misinterpreted 1/* for */1 in a crontab file, and then the whole team spent an hour on Facebook as we failed to start our handler on time.

Tools have tried to bridge the gap. Most of them fail badly with inexperienced teams, as they need a certain amount of seriousness to work. Dradis falls flat under this category. It is great, but you have to learn to use it. Who has time for that shit? Life is short. People still use metasploit.


GatherOS: not the nasty shit you’re waiting for…

Two things are even more essential than the gathering itself: sharing and storing. GatherOS handles them both neatly.

The idea is simple. You've got a Reverse/Bind Shell, SSH, or physical access to a system (be it Linux or –for the love of god– Windows). There is some basic stuff you have to run on the shell to understand what kind of machine you semi-pwned.

If you like the keyboard, you remember the commands (cat /etc/passwd, cat /etc/*release, crontab -l, etc.), but you will miss at least one (uname -a).

If you once liked the keyboard, you have a script with nice and dandy output.
So you python -m SimpleHTTPServer 8080 the script and then you go for the download from the pwned machine:
wget: command not found.
OK, cool. You netcat to your machine and start typing the HTTP request by hand:

GET / HTTP/1.1

404 Not Found

Mistyped the script name…

You whisper something in the classic "fuck" pentester's dialect and open the script with gedit, copy-pasting all the commands into the reverse shell. Hating yourself.

After half an hour a colleague asks you: “what was the MAC for the ?“.
You have no idea, you are still copy-pasting…


What GatherOS does…

First things first. GatherOS resides here:
and has been through about 2 rewrites. It is also available via pip.
Just pip install gatheros and the commands will be in your PATH (like magic)!

Now the juicy stuff!


The heart of the package!

It's a simple python module that takes a specially formatted JSON file containing OS commands as input and runs them against a shell (be it reverse/bind/SSH/local). Then it stores the output in another JSON file.
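The actual scenario schema isn't shown here, so treat the following as a rough sketch only: a hypothetical executor that reads a plain list of command strings from a JSON file, runs them against the local shell, and dumps the outputs back to JSON. The real gatheros-exec surely does more (shell types, dependencies, and so on):

```python
import json
import subprocess

def run_scenario(scenario_path, output_path):
    """Run every command listed in a JSON scenario file against the
    local shell and store {command: output} pairs as JSON.

    The scenario format here (a plain list of command strings) is an
    assumption for illustration, not the real GatherOS schema.
    """
    with open(scenario_path) as f:
        commands = json.load(f)

    results = {}
    for cmd in commands:
        try:
            proc = subprocess.run(cmd, shell=True, capture_output=True,
                                  text=True, timeout=30)
            results[cmd] = proc.stdout + proc.stderr
        except subprocess.TimeoutExpired:
            results[cmd] = "<timed out>"

    # Store everything for later sharing -- the whole point of GatherOS.
    with open(output_path, "w") as f:
        json.dump(results, f, indent=2)
    return results
```

The stored JSON is what makes the "sharing and storing" part work: anyone on the team can open the file instead of yelling across the room.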



The reason GatherOS exists

This module consumes the JSON files created by gatheros-exec and fires up a Flask web application, nicely presenting the command outputs for everyone to see and admire!
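Minus the Flask plumbing, the core of the presentation side is just turning the stored JSON into HTML. A minimal, self-contained sketch of that idea (the function name and page layout are mine, not gatheros-show's):

```python
import html
import json

def render_results(json_path):
    """Render a {command: output} JSON file as a bare-bones HTML page.

    Illustration only: the real gatheros-show serves a much nicer
    Flask application.
    """
    with open(json_path) as f:
        results = json.load(f)

    sections = []
    for cmd, output in results.items():
        # Escape everything -- command output from a pwned box is not
        # something you want rendered as raw HTML.
        sections.append("<h3><code>%s</code></h3>\n<pre>%s</pre>"
                        % (html.escape(cmd), html.escape(output)))
    return "<html><body>\n%s\n</body></html>" % "\n".join(sections)
```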


A showcase!

$ gatheros-exec -o /tmp/$(uname -r).json local
[waiting less than a minute...]
$ ls -lh /tmp
total 1
-rw-r--r-- 1 unused unused 110K Feb 6 13:10 4.9.0-kali1-amd64.json

And done! GatherOS ran the default InfoGathering scenario (built-in) against the local machine. For SSH on port 1022 it would be:

$ gatheros-exec -o /tmp/$(uname -r).json ssh uname@localhost -p1022


Now that there is a GatherOS file, we can present it with gatheros-show on port 8086 (the default is 8085):

$ gatheros-show /tmp/4.9.0-kali1-amd64.json -p8086

Woah! A Firefox window spawned, presenting the results.

Let’s see the MAC now!


As you may have recognized, the default Information Gathering scenario is heavily based on rebootuser's Cheatsheet, which I believe is the most complete cheatsheet out there! I can't but thank this site, as well as its references, for providing so many useful commands for eager privilege escalators!

A Windows Scenario will also be ready in a later release!


Storing the Info!

Just zip the JSON files for later use!
gatheros-show will always serve whichever JSONs you feed it.


Why “Information Gathering scenarios” ?

Well, those JSONs aren't just lists of grouped commands. They contain a whole logic about which commands should run when others fail, based on a dependency-oriented model.
This aspect of GatherOS can be used to automatically launch local-root exploits and other goodies as well, and it will be explained in a later post, once some more development has taken place!
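To give a feel for what a dependency-oriented model means here, a toy sketch (the 'cmd'/'fallback' schema is invented for this example, not GatherOS's actual format): each entry runs a command and, if it fails, falls back to an alternative:

```python
import subprocess

def run_with_fallbacks(entries):
    """Run scenario entries that carry their own failure logic.

    Each entry is a dict with a 'cmd' and an optional 'fallback' to
    try when 'cmd' exits non-zero (or doesn't exist). This schema is
    hypothetical -- a toy version of GatherOS's dependency model.
    """
    results = {}
    for entry in entries:
        for cmd in (entry["cmd"], entry.get("fallback")):
            if cmd is None:
                continue
            proc = subprocess.run(cmd, shell=True,
                                  capture_output=True, text=True)
            if proc.returncode == 0:
                results[entry["cmd"]] = proc.stdout
                break
        else:
            results[entry["cmd"]] = None  # both attempts failed
    return results
```

The same mechanism generalizes to "run this local-root exploit only if that kernel check succeeded", which is exactly the direction the post hints at.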

Stay tuned, it's gonna be huge!



In the Twisted mind of Upper Management

I have written before about how compliance fucks up security; that's common ground by now. This post isn't about that, as the topic has been all used up in several conferences, blog posts, drunk Red Team meet-ups and so on.

This post will talk about the nonsense that takes place inside the average Security company itself. It is really astonishing how the most absurd situations all tend to get together and find their home in Security companies.

But this whole post isn't about that either. While some fucked-up situations will serve as our case studies, the post will try to suggest the reason all this shit doesn't happen in companies that create refrigerators or condoms.


The Model

A typical security company consists of about 3 departments. There can be 4 or 6, but they can be simplified down to these 3:

  • A Red Team / Penetration Testing Team
  • A Development Team
  • A Monitoring/Operations Team

And they all suck for different reasons…


Stating what sucks…

The Red Team

Well, the Red Team doesn't suck. Most of the time it is a bunch of folks that really know their shit, deeply and all. What does suck is that they have to report things. And those guys, most of the time, can barely talk. Imagine how painful it must be for them to write stuff. For 1 hour of enjoyment (meterpreter dances, pizza breaks, pentesting vending machines and such hilarious stuff…) they pay a total of 7 hours of hating themselves in front of a Word document, or similar text editor. At least they do what they love 1/8 of the time…

The Developers

If you take a bunch o' monkeys and leave them in a cage with enough pot, they will eventually write a Security Product for Internal Use. This is the development department. The classic UML faggotry, Java nonsense, and similar clichés all apply.

And every company has its product, which of course isn't ready yet, but soon will be. And good Lord, it is gonna kick ass when it is…

Their reason for existence is simple. There can be no "Computer Company" without "Program Making". It is well known that this is what computers are all about: "Creating Programs" (in the twisted mind of upper management).



The Monitoring Team

What sucks the most is the Monitoring part. And it sucks a lot. On a whole new, existential level.

If you ask any guy in there, they all wanted to be pentesters. Worse than that, they now don't know what exactly they are. And this agnostic mentality flows through the whole department. They are not sure whether they maintain a Network Operations Center, a Security Operations Center, an Incident Response Center, a Log Storage Service, a Behavioral Analytics Service, or a Hard Rock Cafe, whatever.

They are so clueless about their existence that they need lengthy meetings to decide whether they are capable of servicing a customer that needs a very specific service. They are not sure they support such a service, but they go "Fuck it" and onboard him anyway.

The only group of people that knows exactly what kind of fruit the Monitoring department is, is Upper Management (spoiler alert: it's a money and a cow, what is it?)…



Why does everything suck?

Meet the Beast: Upper Management

They couldn't last a day in any department of the company. Most of the time they have no clue what the company is about. If you ask them "Tell me what your company provides without using the word 'Security'", they may get an epileptic seizure the next instant.

So Security companies are fucked up because their bosses are collecting butterflies when they could at least be studying the thing they are bosses of.

I mean, my idea of the Boss role (the Platonic Idea, say) is the person that does the same work as you, but way better. If someone hires me and demands that I make chickas, but can't make them himself, I can very well make chickos instead and he will barely notice (chickas and chickos are words I just invented, don't google them).

But how can this work?

Spoiler Alert: It doesn't… Have you ever heard of failure?
This situation can very well define failure. And this failure bleeds money, really slowly, until the company goes bankrupt.

This happens for a number of reasons. Someone needs to pay all those folks working on maintaining the illusion of security for the customers.
The company gets its annual money from contracts and projects, but instead of spending some of it on education, it hires more developers. Because more developers means less time for the product to come out (in the twisted mind of upper management)! And when it comes out it's gonna kick ass and stop hacking worldwide! And EVERYONE is gonna buy it at a huge price anyway…
But, unfortunately, when too many developers get together, nothing ever gets finished, so they could very well play Minecraft at LAN parties, or Dungeons & Dragons, or beat piñatas, and end up more productive than when actually coding for the project. Because development is something nearly impossible to do right (it takes a lot more than coding), and most of the time it becomes a black hole that sucks money.

In the meantime, one of the pentesters is stuck with the free version of BurpSuite and the Monitoring Center has an under-spec'd server or two…

And how it stops (not) working…

This nonsense actually has two ways to stop:

  1. Developers suck up all the money and release no product.
  2. Developers release the product.

The first scenario is simple yet amazing. A bunch of people bring a whole company down by not doing what they were supposed to, while working 9:00-17:00 every day (sometimes even on weekends). I find this scenario amusing! It is the college project failure scaled all the way up!

But the second scenario is the one closer to reality.
When the development department proudly presents a product that has the same functionality as some Chinese guy's forgotten project on Github, and Upper Management realises that they can't sell this stuff because no one in the security industry really needs something like it (which is also the reason the Chinese guy abandoned his project back in 2014), the company breaks down. And it does, because it depended on sales that should have been tremendous!


Why those tragedies do not happen in companies that manufacture refrigerators or condoms…

Upper management can very well be non-technical in refrigerator or condom companies. But the big difference between Security companies and condom companies is the following:

Upper Management people use (or have used) condoms and will never use security products (not even nmap)!

In condom companies, the people that hold meetings and make choices about the company do not need to be consulted by a specialist about how a condom works, or why one should use a condom.

In security companies, on the other hand, the non-technical upper management has no fucking clue, and will never understand whether something is worth spending on or not. They completely lack common sense regarding their own service or product.

This can be very well understood with 3 examples:

Condom makers
[The Condom Designer]: Hey boss, I believe we need to make condoms with WiFi. The budget we'll need is 150.000$.

[The Condom Boss]: You are fired.

This boss figured out from his* experience that condoms with WiFi are useless as fuck. This Designer got sacked and he deserved it, because he lost hours trying to budget condoms with WiFi. Fuck him.

*: or rather “her“, I prefer female bosses.

Refrigerator makers
[The Refrigerator Designer]: Hey boss, I believe that we need to make refrigerators with microphones, cameras and TCP/IP stack to ensure good quality of service (?). The budget for this is 180.000$.

[The Refrigerator Boss]: You are fired. (hopefully)

Here the boss didn’t see the opportunity of the IoT circus. But he fired the ignorant bastard just to be on the safe side…

Security providers
[The Security Designer]: Hey boss, I believe that we need to develop a tool that can compromise every operating system, platform and network.
We're gonna write this in Java, as it is cross platform (?), and the budget for this will be 300.000$.

[The Security Boss]: This is a great idea! We are gonna invest on this!

This boss has no idea about Cobalt Strike, Metasploit, etc. Tools that have been developed for years and are the de facto standard for the industry. He has no experience in "compromising" things.
If he knew what Java is all about, he would burst into tears of laughter before the Designer could finish the proposition. (For people that don't know, Java is even worse than Ruby nowadays.)
Plus, "compromise everything" sounds too bad-ass to be cheaper than 300.000$…


For me the last conversation has one more line:

[God]: Hey guys, come see those two faggots! They are gonna write metasploit again, from scratch! (laughter)
In Java! (laughter)
(...laughter echoes in paradise...)

I hand you a recipe for failure! Please stop cooking it…