Episode 136

From Paul's Security Weekly

Sponsors

  • Tenable Network Security - This episode is sponsored by Tenable Network Security. Tenable is a developer of enterprise vulnerability, compliance, and log management software, but most notably the creator of Nessus, the world's best vulnerability scanner. Tenable's Security Center software extends the power of Nessus through reporting, remediation workflow, IDS event correlation, and much more. Tenable also offers a Nessus Professional Feed to detect vulnerabilities in your network today! Tenable – Unified Security Monitoring!
  • Core Security - This episode is also sponsored by Core Security Technologies, helping you penetrate your network. Now version 10.0 with WiFi-fu good to go! Rock out with your 'sploit out! Listen to this podcast and qualify to receive a 10% discount on Core Impact, the world's best penetration testing tool.
  • Trustwave SpiderLabs - Providing advanced information security services to planet Earth. Visit them online at trustwave.com/spiderlabs!

Announcements & Shameless Plugs

Welcome to PaulDotCom Security Weekly, Episode 136, for January 15th, 2009. A show for security professionals, by security professionals, given in the most entertaining way possible.

Episode Media

mp3 pt 1

mp3 pt 2

Interview: Dr. Eric Cole

Dr. Cole [1] spent more than five years working in information security for the CIA (Central Intelligence Agency), where he led a team in designing and deploying secure communications systems.

  • Tell us how you got your start in information security
  • How did you get hooked up with SANS, and how long have you been teaching for them?

  • You put in a lot of hours in the classroom; what is the most frequently asked question from your students (infosec related, other than why the PaulDotCom crew is so damn sexy)?

Tech Segment: From a Picture of the President to Exploiting the Photographer

Here's an interesting application of some of the content from my metadata paper:

President-elect Obama released his official photo, the first portrait of a president taken with a digital camera. The photographer is the new official White House photographer, Pete Souza. Take a look here:

http://change.gov/newsroom/entry/new_official_portrait_released/

Now, let's analyze the photo with exiftool. First, let's see if any interesting cropping has happened. Maybe he's holding his beloved BlackBerry? Let's extract the thumbnail image:

exiftool -b -ThumbnailImage obama-officialportrait.jpg > thumb.jpg 

How about the Preview image as well:

exiftool -b -PreviewImage obama-officialportrait.jpg > preview.jpg 

Unfortunately, nothing is revealed here; the thumbnail exists but matches the full image, and the preview doesn't exist.
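If you'd rather check that match with something more rigorous than eyeballing, here's a minimal sketch, assuming ImageMagick is installed. The 160x120 geometry is an assumption; substitute the actual dimensions the first command reports, since compare needs matching sizes:

# find the embedded thumbnail's dimensions by piping it back through exiftool
exiftool -b -ThumbnailImage obama-officialportrait.jpg | exiftool -ImageWidth -ImageHeight -

# force-scale the full image to that size, then compare the two
convert obama-officialportrait.jpg -resize '160x120!' full-scaled.jpg
compare -metric RMSE thumb.jpg full-scaled.jpg null:

A low RMSE means the thumbnail was regenerated from the final crop; a thumbnail that differs visibly from the published image is where the interesting leaks happen.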

So let's look deeper. If we examine the rest of the metadata, we encounter other good info. Here's the command:

exiftool -a -u -g1 -b obama-officialportrait.jpg 

And here is some of the output (shortened for readability):

---- ExifTool ----
ExifTool Version Number         : 7.23
---- File ----
File Name                       : obama-officialportrait.jpg
Directory                       : .
File Size                       : 785 kB
File Modification Date/Time     : 2009:01:15 10:12:02
File Type                       : JPEG
MIME Type                       : image/jpeg
Exif Byte Order                 : Big-endian (Motorola, MM)
Image Width                     : 1916
Image Height                    : 2608
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:4:4 (1 1)
---- IFD0 ----
Image Description               : Official portrait of President-elect Barack Obama on Jan. 13, 2009...(Photo by Pete Souza)..
Make                            : Canon
Camera Model Name               : Canon EOS 5D Mark II
Orientation                     : Horizontal (normal)
X Resolution                    : 300
Y Resolution                    : 300
Resolution Unit                 : inches
Software                        : Adobe Photoshop CS3 Macintosh
Modify Date                     : 2009:01:13 19:35:18
Artist                          : Pete Souza
White Point                     : 0.313 0.329
Primary Chromaticities          : 0.64 0.33 0.3 0.6 0.15 0.06
Copyright                       : © 2008 Pete Souza
---- ExifIFD ----
Exposure Time                   : 1/125
F Number                        : 10.0
Exposure Program                : Manual
ISO                             : 100
Exif Version                    : 0221
Date/Time Original              : 2009:01:13 17:38:39
Create Date                     : 2009:01:13 17:38:39
...
---- Photoshop ----
Photoshop 0x0425                : Ó\¯ıG›%œrè.ë+finº
XML Data                        : (Binary data 6160 bytes, use -b option to extract)
...
---- XMP-xmpMM ----
Instance ID                     : uuid:1B3097C0FCDADD11A476FD2238D714AD
Document ID                     : uuid:1A3097C0FCDADD11A476FD2238D714AD
Derived From                    : 
...
---- ICC-header ----
Profile CMM Type                : ADBE
Profile Version                 : 2.1.0
Profile Class                   : Display Device Profile
Color Space Data                : RGB
Profile Connection Space        : XYZ
Profile Date Time               : 1999:06:03 00:00:00
Profile File Signature          : acsp
Primary Platform                : Apple Computer Inc.
CMM Flags                       : Not Embedded, Independent

Now we have some interesting data! Date and time of creation and modification (about two days from shoot through selection, proofing, and retouching to the final version, the 13th to the 15th). An inappropriate 2008 copyright declaration on an item created in 2009? How about creation with Photoshop CS3 on a Mac? Camera type (and the potentially associated "connect" software)? That looks like a couple of avenues for client-side exploits right there.

How about a few exploits for CS3? (Windows, but what about a port to the Mac?) Try here and here.

There are a few other goodies here that bear investigating, such as the unique UUIDs and the XML data from Photoshop (use the -b flag for exiftool).
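For instance, here's a minimal sketch of pulling that XMP packet out for review (same file name as above; the output file name is arbitrary):

# dump the embedded XMP block to a file you can read in any text editor
exiftool -b -xmp obama-officialportrait.jpg > xmp.xml

The XMP packet typically records the document's derivation chain (those DocumentID/InstanceID values) and edit timestamps, which can tell you more about the workflow than the summary tags do.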

So how do we deliver an attack?

The data reveals the photographer (but we already knew that), and we know he's the new official photographer. A Google search for "pete souza obama" gives you his website, and the Contact Info page gives you an e-mail address. Now we have a potential delivery method.

What do you think folks will be e-mailing him about over, say, the next four years? That history-making photo? Chances are good.

What are the chances, too, that the photographer will have his potentially compromised computer gear attached to networks with interesting information on them over the next four years? Sure, the information on those networks is no doubt secure and segregated, but it only takes one person to make a mistake. We all know that mistakes happen.

Maybe this evolution to the digital White House is a good thing. I think it will take a little bit of time before the new technology catches up with some of the older rules. The government already does a good job of redacting sensitive information from documents; I think that in the coming years they will need to look deeper.

Speaking of looking deeper, what about looking for (and possibly exploiting) the photographer via social networks? Poor Pete has accounts here, here, and here. I'm not sure how long this contact info will be valid, though.

We enter interesting times for us all. Be careful out there. You too Mr. Souza.

Stories For Discussion

SANS 25 Most Dangerous Programming Errors - And How to Fix Them - [PaulDotCom] - This is perhaps one of the most profound and important projects I've seen in my infosec career. It strikes right at the heart of the software security problem, which is probably the largest elephant in the room in this industry. It raises the question, "Why do people buy crappy, insecure software?" Because, well, maybe we don't know any better; maybe consumers and merchants and corporations don't care; and on and on. The fact is, things won't get better until software gets better and more secure. That starts with the school systems and moves right on through the corporate world. Changing culture is not easy, but maybe with the top 25 most dangerous programming errors defined and documented, we can begin to enact some much needed change.

Attacking Brazilian Routers - [PaulDotCom] - While this has "bob" story written all over it, it points out that many ISPs do not secure the DSL routers giving their customers access to the Internet. To me, this is a nice way for attackers to slip in some firmware, or make changes, that work to their advantage. The possibilities for what you can do are endless; however, I think the barrier to entry for attacking SOHO routers is the wide range of platforms. You'd have to select small subsets, as described in the article, attack them, then move on to the next group. You could automate this process to a point, provided you have a database of fingerprints and associated malware to accompany each platform (a rough sketch of the fingerprinting step follows below). Some of the routers appeared to be DSL-500s, according to "bob" that is...
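Here's a minimal sketch of that fingerprinting step, assuming curl is available; targets.txt is a hypothetical list of candidate addresses. Many SOHO routers identify their platform in the HTTP Basic auth realm of their web interface:

# pull the WWW-Authenticate header from each box; the realm string
# (often a model name) is frequently enough to bucket devices by platform
while read ip; do
  realm=$(curl -s --max-time 5 -I "http://$ip/" | grep -i '^WWW-Authenticate')
  echo "$ip ${realm:-no-auth-header}"
done < targets.txt

From there you'd map each realm to the matching default credentials or firmware attack. And obviously: only against networks you're authorized to test.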

  • Random thought: the news was on in the background talking about "Facebook hacks". They recommended firewalls and anti-virus software. Really, come on now! Why can't the mainstream press pay a little more attention to the details and maybe give people better tips than that? Of course, other than being a smart user, what else is there for today's end user in the way of protection?

MS09-001 Scary Remote Code Execution - [PaulDotCom] - So, we all know remotes like this exist, so why are they scarier now than ever before? The scary part for me is that MS06-063 was released a couple of years ago. That was a patch for a remote in SMB that required authentication. MS08-063 fixed a similar problem in SMB, again years later. Now, in 2009, the SMB code has a remote that does not require authentication. How can code so closely scrutinized still contain vulnerabilities? This makes me feel like we're all doomed, yet happy at the same time, because I have a new Windows remote to play with :)
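While you wait for patches to roll out, a minimal first step is simply knowing where SMB is exposed. A quick sketch with nmap (the subnet is a placeholder):

# find hosts on the local subnet exposing the NetBIOS/SMB session ports
nmap -p 139,445 --open 192.168.1.0/24

Anything that answers here and isn't patched for MS09-001 is reachable by an unauthenticated attacker on that segment.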

Interview With A Malware, er, Adware Author - [PaulDotCom] - This is a fantastic interview with a malware, er, adware author (I actually did not intend to type "malware", it just came out after reading this article; no really, it did, I swear on the beer I'm drinking!). There are many lessons to learn from this. First, adware is just evil, essentially borrowing time and resources on your computer so that someone else can make money. That's just sick; if anyone is going to profit from my computer, it should be me. Second, the techniques for installation and hiding are, well, evil, and really just a short step from a kernel-level rootkit. While some of it sounded almost semi-legit, I know where evil attackers are going with all this, and let's just say the lines are blurry. My favorite quote (there were many):

S: In your professional opinion, how can people avoid adware?

M: Um, run UNIX.

S: [ laughs]

M: We did actually get the ad client working under Wine on Linux.

Taking Memory Dumps - [PaulDotCom] - This article features some super cool ways to grab interesting stuff from RAM, including ways to image memory with ManTech's mdd and ways to dump Yahoo! and Gmail passwords from RAM. I'm thinking I want to do this on hosts post-exploitation, pull the dump file back, and analyze it for interesting information such as usernames, passwords, or encryption keys. Maybe a full RAM image isn't the place to do this, but done at the process level it may prove useful, especially on systems that have been hardened.
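As a rough sketch of that workflow (the output file names are placeholders, and mdd must run on the Windows target with administrative rights):

# on the target: image physical memory with ManTech's mdd
mdd_1.3.exe -o memory.dd

# back on your box: mine the image for printable credential strings
strings memory.dd | grep -iE 'pass(word|wd)?' > hits.txt

Crude, but a surprising amount falls out of a memory image with nothing fancier than strings and grep.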

Too Much Automation? - [PaulDotCom] - While I agree automation can have a negative effect on risk identification, it's a vital part of every penetration test. Much of the post talked about how automated tools "don't have the love" and "you need TLC". That's all well and good, but how do you show risk when you've got a Meterpreter shell on 30 hosts? What are those hosts? Do you spend three weeks of your time and the customer's money to demonstrate risk? No, you automate the mundane tasks and the pieces and parts that can be automated. This does not make you any less skilled, or a script kiddie; in fact, it makes you more of a master. You know how to run "ipconfig /all", so why not automate running it on all hosts and analyze the results? Maybe you then identify a backup network which gives you further access into the network and the organization's data. CG states several reasons against automation:

1. That much output really saves you no time if you go back and actually go through and validate the results. - Wrong; collecting information and using custom scripts and/or bash commands to parse out the useful information is where you want to be.

2. Seems like no one knows how to enumerate, and certainly no one teaches it. Automating all the steps between scan and exploit doesn't help the lack of enumeration either. - I automate enumeration, especially of Windows hosts. I don't put it between scan and exploit; I jump right to enumeration and review the results (sometimes with more automated scripts). For example, I enumerate open shares on the entire subnet, then pull down all .doc documents, then search them for interesting information from the recon phase (i.e., the name of the CFO) - see the sketch after this list.

3. There is no "test"; you just ran a bunch of tools, and the script did all the "work." - And thank God for that. Now, very important point: before you automate something, do it manually first and be absolutely certain you know exactly how it works. Again, automation is for repeating tasks that you already know how to do; that way, if something goes wrong, you can troubleshoot it and/or tweak the automated process to be more effective.

4. There is no personal experience or tester analysis if you just run a script. - Wrong; the experienced tester is running the script with knowledge of how it works and what it is doing. And this saves the experienced tester a ton of time, letting them focus on the more important portions of the penetration test, such as identifying risk.

5. What about stealth? What about tactics? What about proper footprinting? What about emulating anything besides a script-kiddie attacker? - Sometimes it's important to be stealthy, and this is one case in which you may want to ditch a lot of your automated tasks, because they are not stealthy. However, this is a conversation to have with your customer. If it is imperative to be stealthy when identifying risk in the organization, then make sure you communicate the extra time that will take. It's about a 50/50 split: some customers will say "yes, spend the extra time"; others will say "no, be as loud as you want and save time". Both have value, but let the customer decide.

6. Where is the fun and challenge in having the script do all the work for you? - The fun and challenge come with the other parts of the test, such as post-exploitation, social engineering, analyzing the data, and several other things that just can't be automated.

7. There's no TLC with autohack; for the amount of cash you paid for a "real" pentest, there should be some love and work from your tester - that Nessus report just ain't cutting it. - Automation is not issuing the Nessus report; it's running Nessus, manually testing the vulnerabilities, mining the data for interesting targets and vulnerabilities, etc. Customers are not paying for "love" (go to Nevada if you're into that sort of thing), they are paying you to identify risk and recommend remediation.
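And here's the share-enumeration sketch promised above - a minimal example assuming smbclient is installed, with the subnet, loot directory, and search term all placeholders:

# sweep the subnet for anonymously listable Windows shares
for i in $(seq 1 254); do
  host="192.168.1.$i"
  smbclient -N -L "//$host" 2>/dev/null | grep -q 'Disk' && echo "$host: open shares"
done

# after pulling .doc files into ./loot, mine them for the recon keyword
for f in loot/*.doc; do
  strings "$f" | grep -qi 'CFO' && echo "$f mentions the CFO"
done

Do it by hand once so you know what the output means; then let the loop do the other 253 hosts.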


1.1M Machines in 24 hours - [Larry] - Wow, Downadup/Conficker is on the move. Social networking tricks, autorun from USB, password brute forcing, and the MS08-067 Server Service exploit seem to be doing it nicely. Later, we hear that 3.5M machines are compromised. There's some more info on the functionality here.

Maybe it is sinking in? - [Larry] - A nice read in the SANS Reading Room on wireless threats in the healthcare environment. It includes an analysis of a specific device, but also covers the possible threat of a rogue AP and/or custom firmware delivering attacks.

Zero TOR coding bugs - [Larry] - By utilizing Coverity's free services, the TOR project was able to reduce the number of bugs found by the Coverity scans from 171, to 15, to 0. To me, this is a testament to source code analysis...

Physical Security - [Larry] - Physical security being run by IT professionals? This seems to be an interesting twist. I think the IT folks may have a little better touch with some of the physical security issues than most traditional physical security functions. My concern? How many psychological and physical aspects will be missing if the geeks take over the physical side? Paul, Irongeek - looks like your other skills are valuable after all.

Netgear SNMP - [Larry] - So, the Netgear WG102 leaks the SNMP write password when queried with the SNMP read password. So much wrong here, even if 1992 wants its exploit back.
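For the curious, this class of bug is trivially checkable with the stock net-snmp tools; a hedged sketch (the community string and address are placeholders, and the specific leaking OID isn't reproduced here):

# walk the device using only the read-only community; on a vulnerable box,
# the write community string shows up among the readable values
snmpwalk -v1 -c public 192.168.1.1

If the write password appears in read-only output, anyone who can reach UDP 161 owns the device's configuration.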

WiFi enforcer - [Larry] - Indian police will soon be searching for unsecured WiFi and enforcing WiFi security on all networks - the e-mails in the recent Mumbai attacks were sent through unsecured WiFi networks. Interesting approach, and it gives new meaning to outsourcing your wireless security to India.

Zero Wine - [John] - A better malware analysis environment? But why? We will cover VM detection and why some environments are better than others.

WITOOL - [John] - Not that Wii. Yet another SQL injection tool. Why would you need more than one?

Christmas Shopping for Larry - [John] - Easy-to-use, small electronic systems? You bet. How can we use them as pen testers?

More Cyber Warfare - [John] - Another example of the battlefields extending to the internet.

MLB Pitching Malware again - [John] - This is not a repeat from 2008.