Friday, August 18, 2017

Trust Issues

A constant refrain when working with non-security focused IT groups is that anything behind the internal firewall is “trusted”.  This is a pet peeve of mine, and it sets back organizational security.  Depending on how much of the business’s ear the non-security focused IT group has, it can undermine the initiatives of the security group, because the business tends to take the path of least resistance.  If multiple parties are assuring the business that Infosec is “being paranoid”, Infosec is likely to lose out to keeping up-front costs low, to accusations of hindering innovation (possibly a topic for another time), and to being overruled by the “trusted” mindset.  Unless a dedicated compliance model is enforcing segmentation, encryption, and so on, this can be a losing argument.

This is where Infosec teams need to take a more hands-on approach to their security and their internal threat model.  The internal threat model should assume malicious intent (or mistakes) by employees, along with attackers already living on the inside.  When threat modeling, consider bringing in and working with the non-security IT groups.  Plan a scenario where an attacker has compromised an internal system with full privilege on that system, then step through what areas are now exposed by that system and what lateral movement can be achieved.  We have to do a better job of showing why things can’t be trusted.

For organizations with internal red teams (yay money!), this is easy.  Show the defenders the tools, the concepts, and the IOC-avoidance techniques, and let the blue / infrastructure groups look at the tools and the information being gleaned from them.  For those that aren’t so lucky, where it’s a defender-only environment, the blue team is going to have to do a little more work.  External tests are normally tightly scoped and don’t give a full representation to fall back on.  Start making the assumption of compromise, and at the very least get familiar with attack methodologies that can demonstrate the theory.  Simply talking about it is not enough, because the non-security focused IT groups may not understand the concepts or theories.

This is part of why this blog has been so quiet.  When afforded time to research, most of my free time has been spent on attack methodologies that don’t require an installation and make use of nothing more than what already exists within the org.  This allows the ease of those attack methodologies to actually be demonstrated to the non-security focused IT groups.  Demonstrating practical, simplified attack methods can turn into a game changer.  In the long run, Infosec is going to need these groups working on its side if the business culture is going to change and the term “trusted internal system” is going to be put to bed.

Tuesday, June 20, 2017

My Imposter Battle

Two weeks ago, I learned a new PowerShell technique through experimenting with the profile.  It worked really well, at least for what I wanted to do, but my excitement was short-lived.  Something that feels this basic shouldn’t be this cool, and surely someone else had already uncovered it.  Any time I get started on something and get stuck, I start looking online for documentation on a specific class, and then I find out another researcher already did this work three years ago.

The second talk I gave last year was in and around PowerShell. I wanted to do something cool for the audience, and most security people (myself included) get excited about new ways to create reverse shells. In the first talk I did, I used one of the Invoke-TCP shells found in the Nishang toolkit.  This time around, I wanted to show how to use PowerShell to do the same thing against a Linux box.  My thought process was showing the possibilities of pentesting from a Windows-based system more than anything.  I converted one of the scripts used in an Offensive Python OWASP training over to PowerShell (with permission), and thought that was great.  Although I was really excited it worked, and couldn’t wait to show it off, all I had really done was port someone else’s script into my talk.

I’ve been told repeatedly by my peers (not just during my time in InfoSec) that confidence is an area where I’m lacking. For most of my career, I’ve gotten great reviews and excelled at getting my projects accomplished while diving into anything I don’t know.  That helps me gain confidence, but it’s also a never-ending battle.  I will be confident if I get a degree.  I will be confident if I get a graduate degree.  If I work in X field, my confidence will improve.  None of it takes away the fear of making a mistake, because if I make a mistake people will find out I have no clue what I’m doing.  This fear becomes magnified in a toxic environment, where peers are waiting to pounce on mistakes and point out the failure as a means to prop themselves up.  When that happens, I start to discredit all of my educational and career accomplishments.  The two examples I’ve given are just a small sample of the hundreds of times I’ve had these issues swirling around in my head, making me feel like an imposter.


How do we combat the insecurities around imposter syndrome?  The more advanced the career field, the greater the odds we will work with things we don’t understand right away, and the more it will cause doubts (at least in the beginning).  I’ve spent the past year or two (time is flying) pushing my boundaries to work on those kinds of projects.  This is how I’m choosing to combat it.  It works the same way as some of the anxiety issues I’ve fought to overcome, where I have to turn into the skid of the anxiety.  If I don’t face it head on, I will stay where I’m comfortable, and that will never allow for further improvement.  I will try to teach others what I’m learning along the way, to improve their process.  If someone else has already done something similar to what I’m trying to accomplish, it doesn’t mean I need to give up on what I’m doing, because in order to talk about it, I need to understand it. I can look into their process and improve it for my environments and goals.  Remember, learning is the end goal, and everyone has to start somewhere.  Gaining experience in an area where someone is otherwise inexperienced doesn’t make them an imposter.  It makes them a lifetime student.

Saturday, February 18, 2017

Using PowerShell to Work w/ Tenable Restful Web APIs (IRM FTW)

This one feels like it’s been a long time coming.  With the Mrs. out of town last week, I ended up not being able to both keep up with regular duties and put together a new post.  It doesn’t help that I’m still somewhat new to forcing out new topics every week, but enough of the excuses.  I present the latest entry: working with Tenable’s API using PowerShell.


A little backstory on how this came to happen.  Having implemented a new vulnerability program, I wanted a good metric pulling actual counts of vulnerabilities based upon their severity.  Nessus does a nice job of rolling up its reports, but the counts are summarized in the reporting rollup.  I needed something that would give totals without being summarized by system.  Initially, I was going to write some long-winded functions that would require the following (a rough sketch of that approach follows the list):
  1. Manually save a detailed CSV export.
  2. Parse the columns in the CSV to get a summary.
  3. Count each one by host.
  4. Repeat.
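
For context, that abandoned route would have looked something like the sketch below.  This is hypothetical: the file path is a placeholder, and the column names (Host, Risk) are how I remember the detailed Nessus CSV export being laid out.

    # Hypothetical sketch of the manual CSV route; path and column names are assumptions
    $report = Import-Csv -Path 'C:\Reports\nessus_detailed.csv'

    # Summarize the findings per host and per severity
    $report | Group-Object -Property Host, Risk | Select-Object Name, Count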


With that process, there would have been a lot of things running, but it would still have been a manual process, which is not a good reason to be scripting something out.  Enter the Nessus API documentation.  The mapping can be found at https://[nessus_server]:[portNo]/api; if working with the cloud API, it can be found at https://cloud.tenable.com/api. I updated the script, shared in full at the end, to test for this when no port is specified.
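
As a rough illustration of that port check, the logic looks something like this.  The parameter names ($NessusServer, $Port) are ones I’m using here for the sketch, not necessarily the ones in the script.

    # Illustrative base-URI selection depending on whether a port was supplied
    param (
        [string]$NessusServer,
        [int]$Port
    )

    if ($Port) {
        # Self-hosted scanner, e.g. https://nessus01:8834
        $baseUri = "https://${NessusServer}:${Port}"
    }
    else {
        # No port specified, so fall back to the cloud API endpoint
        $baseUri = 'https://cloud.tenable.com'
    }

    Write-Verbose "API documentation: $baseUri/api"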


Before I could work with the scans, I needed to authenticate to the application. After browsing to session -> create, I found that if I POST to /session with a username and password, I would get the session token back.  Initially this was giving me fits, but at the bottom of the API documentation there is a form to test the method.

I was having some issues keeping the session open, and @maendarb recommended I take a look at Posh-Nessus.  I reviewed its session handling and was able to get the credential passed in the same manner.  I was receiving my access token with the following command:
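
The original screenshot of that command didn’t survive, but it was along these lines.  This is a sketch only: $baseUri is assumed to point at the scanner (as in the earlier snippet), and the variable names are mine rather than the exact script’s.

    # Prompt for the Nessus credentials as a PSCredential
    $cred = Get-Credential

    # POST the username/password to /session as JSON
    $body = @{
        username = $cred.UserName
        password = $cred.GetNetworkCredential().Password
    } | ConvertTo-Json

    $session = Invoke-RestMethod -Uri "$baseUri/session" -Method Post -Body $body -ContentType 'application/json'

    # The access token comes back in the response and is reused on every later call
    $token = $session.token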

I was receiving errors whenever I tested without passing the credentials in as a PSCredential, although in later modified runs the JSON converted with no issues.


Next I needed to see how the session was passed, so I watched a scan retrieval flow through OWASP ZAP using the same test technique as with the authentication credential.  The token was passed using the X-Cookie header.  Adding this to Invoke-RestMethod’s -Headers parameter, as part of the command below, got me back a JSON object of all my scans.
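
That command’s screenshot didn’t make it over either; a sketch of the call, reusing $baseUri and the $token from the session step above:

    # Pass the session token back on every request via the X-Cookie header
    $headers = @{ 'X-Cookie' = "token=$token" }

    # Returns a JSON object whose .scans property lists all of my scans
    $scanList = Invoke-RestMethod -Uri "$baseUri/scans" -Method Get -Headers $headers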

The only thing left to do was create a loop through the last active scans, by scan ID, and obtain the vulnerability count by host based upon criticality.  I still had to pull the scan details, which are outlined in the API documentation, but it was as simple as looping through the scans and passing each scan id through one last Invoke-RestMethod.
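
Again as a sketch, building on the previous snippets; the .hosts collection and its per-severity count fields are how I remember the scan details response being laid out, so treat the property names as assumptions.

    # Loop through the scans and pull each scan's details by id
    $results = foreach ($scan in $scanList.scans) {

        $detail = Invoke-RestMethod -Uri "$baseUri/scans/$($scan.id)" -Method Get -Headers $headers

        # Each entry under .hosts carries per-severity counts for that host
        foreach ($hostEntry in $detail.hosts) {
            [pscustomobject]@{
                Scan     = $scan.name
                Host     = $hostEntry.hostname
                Critical = $hostEntry.critical
                High     = $hostEntry.high
                Medium   = $hostEntry.medium
                Low      = $hostEntry.low
            }
        }
    }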

Lastly, the script just needed to add the counts up by host to give me the real total number of vulnerabilities found by classification.  The full script can be found here.
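
Rolling those per-host counts into grand totals is then one Measure-Object call per severity.  A sketch, reusing $results from the previous snippet:

    # Sum the per-host counts into a grand total per severity
    foreach ($severity in 'Critical', 'High', 'Medium', 'Low') {
        [pscustomobject]@{
            Severity = $severity
            Total    = ($results | Measure-Object -Property $severity -Sum).Sum
        }
    }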


A few things learned within this process…

  1. Working with APIs in PowerShell can be amazingly simple with Invoke-RestMethod.
  2. Once the initial session creation against Tenable’s API was worked out, it became easy to use PowerShell to automate a lot of my old manual reporting functions.
  3. Getting comfortable with a good web proxy makes it a lot easier to figure out how to format the commands you pass.

Friday, February 3, 2017

Getting Started (An Introduction to PowerShell)

This is my first run at making sure I’m writing at least one entry per week.  Before I could really talk about how I go about doing things, I thought I’d share how I started using PowerShell…

When first starting out with PowerShell, I had a feeling it was going to be an incredibly useful tool.  My first script wasn’t very good, but it was a way for me to uninstall an application from all the remote systems on my work network within a matter of a couple minutes.  It was meant to save time, which it did, but it was horrible in its execution.  I’m not even clear on everything from memory, but it was calling VBScript and cmd.exe remotely through PsExec.  Looking back, there are so many different ways I could have made that work (and probably taken out too).  That’s for a different time though.

I determined after working on that little project that a different way of doing things was available, and I saw the writing on the wall: Microsoft was going to be doing a lot more to integrate PowerShell into its systems management.  Now, I’ve known myself long enough to know that unless I force myself to do something, I’m going to continue doing things my own way, and I knew I would need a push in the right direction.  This is where I decided to replace my startup environment to no longer use Explorer.  From there, if I needed to do something, I figured I’d better learn how to do it quickly, because no one was around to ask for help when I was working by myself on a night shift and a system was down.  In Windows 7 there’s a way to replace the default logon shell: under HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon, change the Shell entry to “powershell.exe”.
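
For anyone wanting to try the same experiment, something along these lines flips the value.  A sketch, not a recommendation: run it elevated, and set the value back to explorer.exe when you’re done.

    # Swap the Windows logon shell from Explorer to PowerShell (HKLM affects all users)
    $winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'

    # Check the current value first (normally explorer.exe)
    Get-ItemProperty -Path $winlogon -Name Shell

    # Set it to PowerShell; change it back to explorer.exe to undo the experiment
    Set-ItemProperty -Path $winlogon -Name Shell -Value 'powershell.exe'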

Once done, I rebooted.  Luckily, I knew some of the DOS command line utilities as a backup, and there were still thick client applications the business relied on that worked, but my rules were simple (a few example commands follow the list):
  1. Anything I did to interact with the operating system had to be done with PowerShell.
  2. I could not use the Internet to determine how to do anything.
  3. Everything I did had to use the corresponding cmdlet.  That meant I couldn’t just type explorer.exe; I actually had to run “Start-Process -FilePath c:\windows\explorer.exe”.
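
Since rule 2 ruled out the Internet, the built-in help and discovery cmdlets carried the whole experiment.  A few illustrative commands, none of them from the original experiment, just the kind of thing the rules forced:

    # Figure out what cmdlets exist, then read their built-in help (no Internet allowed)
    Get-Command -Noun Process, Item, Service
    Get-Help Start-Process -Examples

    # Navigating the file system the cmdlet way instead of clicking around Explorer
    Set-Location -Path C:\Users
    Get-ChildItem -Path . -Recurse -Filter *.log

    # Rule 3 in action: launching an application through Start-Process
    Start-Process -FilePath C:\Windows\explorer.exe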

While this probably wasn’t the best way to go about things, I learned that the help structure was incredibly simple to work with, and just typing Get-Help would tell me everything I needed to know. I ended up using PowerShell as my sole way of navigating my system, and after about three weeks I was beginning to feel really comfortable getting around.  I did eventually give up on the experiment because I liked having my desktop around.  I started doing as much reading on the topic as I could, and slowly began learning more about how to interact with .NET from PowerShell without needing to compile anything.  Once that little nugget of knowledge was dumped on me, and I could put my .NET development training to use without working in Visual Studio, things took off from there.  It would end up being a couple more years before I was able to start putting that knowledge to use and writing some more in-depth scripts.  Beginning next week, I’ll start looking more in depth at different commands and getting more into the nitty gritty (as they say).