Thursday, 21 August 2014

XSSYA Tool Usage

XSSYA is a Cross Site Scripting scanner/confirmation tool, written in Python, which aims to find XSS
vulnerabilities. It allows a penetration tester to scan a website without using a browser and confirm whether the website is vulnerable to XSS (Cross Site Scripting) by injecting and executing around 28 encoded payloads against the specified URL. In general, when we scan a website it often reports false positive results/vulnerabilities, sometimes because other scanners are scanning the website or executing payloads at the same time. If a request returns status 200, the tool reports the site as vulnerable; however, the defect found may not be a real defect, in which case the penetration tester has to test and confirm it manually.

What is False Positive ?

A false positive occurs when a scanner reports that a specific vulnerability exists in the code when it actually does not; many security scanners return such results after test execution. False positives often stem from the weak static checks a security scanner relies on. For example, to detect a vulnerability a scanner may use an algorithm that looks for one or more predefined signature patterns (i.e. its check logic) within an HTTP response; when that matching goes wrong, the scanner deduces that the vulnerability exists (when it actually doesn't) and reports it accordingly.

XSSYA - How it Works ?

Written in Python, XSSYA works by executing its library of encoded payloads to bypass WAFs
(Web Application Firewalls). This is Method 1, which checks whether the site is vulnerable or not. If the HTTP response returns status 200, the tool attempts Method 2, which searches for the decoded payload in the web page's HTML code; if that is confirmed, it moves to the last step and executes document.cookie to get the cookie.
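The two-step flow described above can be sketched in a few lines of Python. This is not XSSYA's actual code; the payload list and helper names are invented for illustration, and the real tool carries far more encoded payloads:

```python
import urllib.parse

# Invented sample payloads for demonstration; XSSYA ships ~28 encoded ones.
PAYLOADS = [
    "<script>alert(1)</script>",
    "%3Cscript%3Ealert(1)%3C%2Fscript%3E",  # URL-encoded variant
]

def build_test_url(base_url, payload):
    """Method 1 (sketch): append an encoded payload to the target URL."""
    return base_url + urllib.parse.quote(payload, safe="")

def payload_reflected(html, payload):
    """Method 2 (sketch): check whether the decoded payload appears in the
    response body, which suggests the input is reflected unescaped."""
    decoded = urllib.parse.unquote(payload)
    return decoded in html

# A response that echoes the search term back unescaped:
html = "<p>Results for <script>alert(1)</script></p>"
print(payload_reflected(html, PAYLOADS[1]))  # True -> likely reflected XSS
```

A status-200 response alone is weak evidence (hence the false positives discussed above); it is the Method 2 reflection check that gives real confirmation.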

XSSYA Features :
  • Supports both Windows & Linux environments
  • Supports HTTP & HTTPS
  • Identifies 3 types of WAF (mod_security, WebKnight & F5 BIG-IP)
  • Contains a library of encoded payloads to bypass WAFs (Web Application Firewalls)
  • Supports saving the web page's HTML code before executing the payload
  • Supports viewing the web page's HTML code on the screen/terminal
  • After confirmation, executes a payload to get cookies
Prerequisite:

The only module that needs to be downloaded and used is colorama-0.2.7 - https://pypi.python.org/pypi/colorama

OR

Alternatively, install it directly from a command prompt using easy_install colorama, provided Python 2.5 or a later version is installed on your machine.
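Before launching the tool you can verify that the one dependency is importable. A minimal sketch, using only the standard library (the helper name is mine, not part of XSSYA):

```python
import importlib.util

def has_module(name):
    """Return True if the named module can be imported."""
    return importlib.util.find_spec(name) is not None

# Check XSSYA's single dependency before running xssya.py
if not has_module("colorama"):
    print("colorama is missing - install it first (easy_install colorama)")
```

If the check fails, install colorama by either of the two routes above and re-run it.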


Download & Installation Procedure:

  • You can download XSSYA HERE. Click the Download ZIP button as shown in the screenshot below:

  • Once the file is downloaded, extract all the files to any local drive on your machine. See the screenshot below.


Now we are all set to run and execute tests using XSSYA.

Test Execution:
  • For executing a test, open a command prompt and change to the directory where you
extracted the ZIP files (look for the directory that contains the xssya.py file). See the
screenshot below.

  • Now, to initiate your test, enter python xssya.py and hit Enter.


  • Enter a vulnerable website link and hit Enter. (For demonstration purposes, I am using the following link, which is vulnerable to XSS and which I found in the training video added below: " http://demo.testfire.net/search.aspx?txtsearch= ".)
Note: Make sure to choose a vulnerable link that ends with [ / or = or ? ]



  • As mentioned above, in the next step we need to choose 1 or 2, i.e. select Method 1 or Method 2.
Method 1 - Checks whether the link is vulnerable or not.
Method 2 - If Method 1 returns success, i.e. confirms the link is vulnerable, it starts executing the encoded payloads (injected at the end of the specified URL) and searches for the same payload in the web page's HTML code to get the cookie information.



  • At the end of the test execution, the tool also allows you to save the web page's HTML
code and print it. See the screenshot below.
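The note earlier about the target URL needing to end with / or = or ? can be turned into a quick pre-check before feeding a link to the tool. A tiny sketch (the function name is mine, not XSSYA's):

```python
def looks_injectable(url):
    """Heuristic from the note above: XSSYA expects a target URL ending
    with '/', '=' or '?' so a payload can be appended directly."""
    return url.endswith(("/", "=", "?"))

print(looks_injectable("http://demo.testfire.net/search.aspx?txtsearch="))  # True
print(looks_injectable("http://demo.testfire.net/about.aspx"))              # False
```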




My Learning Material / References:

Website Address:

http://www.secure-edf.com/xssya.html

https://github.com/yehia-mamdouh/XSSYA

Video Tutorial:




Happy Hunting !! :)

Monday, 11 August 2014

Web Application GUI Checklist

Testing the user interface of a web application ensures that tasks within the application are user-friendly for all users and that the application meets specific graphical user interface standards. In short, this is the area of testing concerned with compliance with standards and conventions. The checklist below may be helpful for software testers and testing teams performing UI testing, as it comprises GUI components that can be used as a reference for checking in a systematic way.

CONTENT

  • Make sure all page titles and page header content are correct and left-aligned.
  • Verify all error messages on the screen and make sure they contain no spelling mistakes.
  • Ensure the fonts applied to the content match the requirement specification.
  • Check that page content stays intact when you navigate to another page and move back.
  • Verify that all content and labels are properly aligned.
  • Check for and rectify spelling errors in the content.
  • Verify that all text and content for the fields is correct and matches the requirement specification.
  • Check that all screen prompts use the correct screen font as specified in the specification.
  • Check that content uses lower and upper case correctly throughout.

NAVIGATION

  • Verify that all links in the sitemap lead to the actual page/section specified in the requirement specification, and check for broken links (if any).
  • Ensure all screens accessible via buttons are accessed properly.
  • Make sure a scroll bar appears when the page/dialog content is long.
  • When an error message appears in a new window, make sure focus moves to the button the user can use to cancel/close it.
  • Make sure there is a link to the home page on every single page.
  • Any operation that opens a page/section in another browser tab should move focus to the first editable field.
  • Verify that the tab order on the screen moves in sequence from top left to bottom right. [Note: This should be the default behavior unless another behavior is specified in the requirement specification.]
  • Check that all disabled and read-only fields are skipped in the tab sequence.
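The broken-link item above is easy to semi-automate: collect every href on a page, then request each one and flag non-200 responses. A minimal sketch of the collection step using only the standard library (the class name and sample markup are mine):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets so each can later be requested and any
    non-200 responses flagged as broken links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<a href="/home">Home</a> <a href="/sitemap">Sitemap</a>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/home', '/sitemap']
```

In practice you would feed this the fetched HTML of each page and follow up each collected link with an HTTP request.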

IMAGES

  • Check whether all images and graphics are properly aligned.
  • Make sure text wraps properly around pictures/graphics.
  • Verify there are no broken images anywhere on the website.
  • Check the size of the graphics used or uploaded.
  • Make sure the buttons are all of similar size and shape and use the same font and font size.
  • Check all banners, i.e. that banner style, size and display match the existing window.
  • Is the site visually consistent even without graphics?

COLORS

  • Check whether the colors of links and hyperlinks are standard.
  • Test the background color of all pages on the site.
  • Are all buttons on the site of standard format, color and size?
  • Verify that screen and field colors are adjusted correctly for non-editable mode.
  • Verify that the color of warning messages is appropriate, as per the specs.
  • Make sure the background (colors) of every section/page is distraction-free.

INSTRUCTIONS

  • Make sure all error messages and tooltip text displayed on the site have no spelling errors and are shown correctly on the screen.
  • Verify that all tooltip content/text for every enabled field and button matches the requirement specification.
  • Check the progress messages on load of tabbed/active screens.

OTHER ITEMS RELATED TO USABILITY

  • Verify that the site is accessible and looks good at different screen resolutions - 640 x 480, 800 x 600, 1366 x 768, etc.
  • Make sure the site has a consistent, clearly recognizable look & feel.
  • Verify that all pages print legibly without cutting off text.
  • Verify that all terminology used on the site is clearly understandable by all site users.
  • Make sure the names on buttons and option boxes are not abbreviations, unless required/specified in the specs.
  • Ensure the fonts used on the site are not too large or too small to read.
  • Does the site convey a clear sense and meaningful information to its intended audience?
  • Check whether the site provides or facilitates customer service (info, details, etc.), i.e. is responsive, helpful and accurate for all site users.
  • Verify the site's accessibility on all supported devices and that it meets the relevant standards and conventions there.

Happy Testing !! :)

Wednesday, 6 August 2014

Things to keep in mind when limited time is available for test cycle

In the field of software testing, you must have come across a situation where you were asked to test an application in a very short span of time and deliver a report within a few days. If you have not faced this situation, then either you are lucky or you have not worked on enough projects. Properly skilled testing requires a lot of planning and effort, and a substantial amount of time to work with the product. If the project gets delayed, the testing phase ultimately takes the hit, shrinking the time available for the test cycle.
To deal with the limited time frame given for testing, you should make the best use of the time and resources available. Begin testing with the assumption: "Test all important features/items by prioritizing them (high to low) within the allowed time frame". From an economic standpoint, it is always advisable to spend less time on areas of the application where the chance of finding defects/bugs is low. As a rule of thumb, always prioritize the items, i.e. decide what to test first and in which sequence, so that you avoid spending your limited time on areas that really don't matter or are low priority. Performing this kind of task requires some analysis, a strategy, and intuition based on your experience. A risk analysis of the items/features will help you identify the areas or functions where the risk involved is high and which will be used most by customers/end users.


While testing, it is best to use a checklist that helps you identify the key areas. Here is a checklist that I often use when I get limited time for testing an application:

  • Check the functionality that will be used most frequently by end users. Ask yourself, "Which functionality is most visible to the end users?"
  • Try interacting with all project stakeholders (mostly the client and, if possible, end users), gather from them what they think is most important, and include it in your list of test items.
  • Test functionality that acts as an interface with external systems, i.e. third-party software. These are classic areas for finding integration-level defects.
  • Check the most complex functionality, which you expect can easily be misunderstood and misinterpreted. Look for the parts of the code that are most complex, and thus most prone to errors.
  • Don't forget that newly added functionality is often the least unit-tested. Make sure to include it in your test item list.
  • Identify functionality in similar past projects that caused problems, mostly in terms of defects raised by customers. Correlate it with the current project if possible and use it to your advantage.
  • Identify functionality in similar past projects that incurred large maintenance expenses. Correlate those cases/scenarios with the current application and use them to your advantage.
  • Test functionality where recent modifications were made to the code, including areas with bug fixes.
  • Check functionality tied to parts of the requirements and design that are unclear or poorly thought out.
  • Gather information on areas/sections where development was done in a rush, in panic mode, or under extreme time pressure, and include those areas in your test item list.
  • Test sections/areas that demand a consistent level of performance and reflect complex business logic.
  • Test the riskiest areas of the application with the largest safety impact, which, if broken, could bring down the entire application. Talking with the developers, if possible, to gather information or suggestions is probably a good idea here.
  • Check parts of the application where many programmers/developers have worked.
  • Check functionality built with new tools, a new architecture, or new technology.
  • Test the areas of the application with the largest financial impact on end users and project stakeholders.
  • Devise tests that cover multiple features at the same time and give the highest risk coverage in the minimum time.
  • Try to find and test functionality that, if it returned incorrect output, could result in bad publicity.
  • Try to find and test functionality that could cause the most customer support complaints.
Note: This may not be a complete list of things to check/test under a tight schedule, though it covers quite a lot of important areas that usually need attention. As a software tester, I am well aware that using a checklist allows you to think and test more effectively within a jam-packed schedule.
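The risk-based ordering the checklist builds toward can be made concrete with a simple scoring scheme: rate each feature's impact and likelihood of failure, multiply, and test in descending order until time runs out. A hypothetical illustration (the feature names and ratings are invented):

```python
# (name, impact 1-5, likelihood of failure 1-5) - invented example data
features = [
    ("checkout/payment", 5, 4),   # high financial impact
    ("search", 4, 3),             # most visible to end users
    ("help pages", 1, 2),         # low risk, low usage
    ("new report export", 3, 5),  # new code: least unit-tested
]

# Highest risk score first; stop testing when the clock runs out.
prioritized = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, impact, likelihood in prioritized:
    print(f"{name}: risk score {impact * likelihood}")
```

The exact scale matters less than the discipline: the ranking makes the "what to skip when time runs out" decision explicit instead of accidental.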

Happy Testing :)

Tuesday, 5 August 2014

Software Tester/Quality Assurance Engineer - Can we call them Quality Cops !! :)

Some people in the software industry whom I have interacted with perceive the Software Tester/Quality Assurance Engineer as a Quality Cop for the products/projects they work on and test.

What do I mean by a Quality Cop?

The answer and reason I got from a few people seemed a little weird: I came to learn that software testers/QAs ask too many questions. A few are listed below:
  • Are the requirements in place for the product/project?
  • Have we developed the test plan and test procedures that we need to use?
  • Are the test specification metrics and other related documents, like the SRS, design docs and use cases, in place?
  • Did we derive test cases for every module of the application based on the specs we received?
  • Did the programmers/developers perform unit tests before sending the build to the quality department?
  • Are we supposed to perform testing when the build was promoted to production a few days/hours ago without quality department approval?

This list goes on…


So I guess these are the responsibilities of every software tester/quality assurance engineer: to check, validate and assure the quality of the software product being released. So I call them Quality Cops, responsible for approving and directing whether the product is ready to be delivered/released to the market, and whether the product is suitable for users to use.

Software testers/quality assurance engineers (Quality Cops) are responsible for increasing the value and quality of the product being delivered and used by users all over the world, as any problem in the product may affect the lives of the people using it.
Now let's come to what we mean by Quality.


Wikipedia describes quality assurance as follows


Quality Assurance is the activity of providing evidence needed to establish confidence among all concerned, that the quality-related activities are being performed effectively. All those planned or systematic actions necessary to provide adequate confidence that a product or service will satisfy given requirements for quality. Quality Assurance is a part and consistent pair of quality management proving fact-based external confidence to customers and other stakeholders that product meets needs, expectations, and other requirements. QA (quality assurance) assures the existence and effectiveness of procedures that attempt to make sure – in advance – that the expected levels of quality will be reached.


Read more on Quality Assurance from Wikipedia


So are we (Software tester/Quality Assurance Engineer) responsible for the Product Quality ?


Software testers/quality assurance engineers test the product and report bugs. Some of these bugs get fixed for the release and a few may not, based on factors like time limitations, the project schedule, improper specifications, etc. Assuring quality in products under development is the responsibility of all members of the quality team working on the project, but sometimes, due to external/internal factors, the product is released to market with known bugs after the problems/risks/issues have been communicated to the project client, who in turn decides whether to plan fixes for the next release cycle or ship them as a 'hot fix'/'patch' for the current release version.


The role of the software tester/quality assurance engineer is to test the software, find bugs and report them so that they can be fixed. Bug reports should be clear and easy to reproduce, reduce debugging time for developers, and motivate the developers to fix the issue ASAP. The software tester/QA should focus on the software product itself and gather important information about what it does and doesn't do. This gathering process should include all the teams associated with the product: talk to all project stakeholders, be it the project client, project manager, test manager, sales/support people or the development team, and gather their expectations for the release.


Finally, I would say that the role of the tester is to provide qualitative information about the product to all project stakeholders so they can make better decisions. The big challenge here is to provide accurate, comprehensive and timely information about the product under development.

Context Driven Information About Bug Report

Dear Blog Reader,

Greetings,


In this post, I would like to explain and explore the different stages of a bug's life cycle, from its inception to its closure. Once a bug has been found and logged into the BTS (Bug Tracking System), that is not the end: the bug has to go through many stages until it dies. It is always good to capture context-driven information in the bug report. My experience with bug reports in the initial days of my career taught me many lessons to improve upon.
Bug reports are generally used to capture information about a system failure or a defect that exists in a system, but most people spend very little time capturing all the required details, and there are many reasons for that.
In our day-to-day work, these are some of the reasons/excuses people may quote:



  • The system functionality and its features are too complex and tough for a novice user to understand the bug or its report.
  • I can reproduce it on my machine if the developer needs it.
  • We get very little time to test, so we need to test more and spend less time capturing and writing information in bug reports.
  • You know, capturing all the info is process-driven and may not be worth the effort, or it may sometimes just be boring to collate the info and push it. :)

And the list goes on...
I hope you have come across this situation at least once in your career, but the mission of your bug report is to provide the details and context of the problem and convey its importance with user-driven stories. A comprehensive report with accurate data always helps the programmer/developer locate or reproduce the error/bug in order to make a fix. In short, your bug report must be the voice of the customer, playing the role of an advocate against the problem.
Please note that Bug Advocacy by Cem Kaner is an excellent source to begin with. If your report cannot convey the context of the bug, then it is better to avoid writing such a report. Also, in some cases it may not be feasible for other users of the system to explore and analyze the bug in that fashion. Another context associated with bug reports, in relation to each stakeholder of the project, is that the Bug Tracking System must show the right trends and identify the hot spots. Testers must capture the right kind of data to derive valuable metrics from the bug repository.
You should take care with a few parameters while capturing a bug/failure/error/defect, which are listed below:



  • The test environment should be a replica of the production environment, so while reporting a bug make sure to add all the test environment details.
  • Be clear about the severity and priority of the bug.
  • Add a detailed classification of the feature, and also classify as many sub-features/components of the system as possible; in short, the detailed steps where the bug was found (STEPS TO REPRODUCE).
  • Add info about the bug type (i.e. Functional, Performance, Usability, Security, etc.).
  • Versions and build numbers also play an important role in a bug report; they help the programmer deploy the right build and reproduce the issue. The programmer should likewise add the build/version number containing the fix, which helps the testing department retest once the fix is available.
  • Bug classification is useful for recording whether the bug lies in the requirements, design or implementation work, etc. (optional).
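The parameters above can be captured as a simple record structure. A minimal sketch; the field names are illustrative, not a real tracker's schema:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """Sketch of the fields discussed above (names are invented)."""
    title: str
    steps_to_reproduce: list  # detailed steps where the bug was found
    environment: str          # replica of the production environment
    severity: str             # e.g. Critical / Major / Minor
    priority: str             # e.g. High / Medium / Low
    bug_type: str             # Functional, Performance, Usability, Security, ...
    build_number: str         # helps the developer reproduce, and QA retest

report = BugReport(
    title="Search page reflects unescaped input",
    steps_to_reproduce=["Open the search page", "Submit <script> in the box"],
    environment="Windows Server / IIS / staging",
    severity="Major",
    priority="High",
    bug_type="Security",
    build_number="2.4.1-b137",
)
print(report.severity, report.build_number)
```

Forcing every report through a structure like this is what later makes trend metrics over the bug repository possible.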

A bug, once pushed into the BTS, may go through various stages (a simple case is explained below):
When a bug/failure/error/defect is found, it is always advisable to log it in the Bug Tracking System. Initially it is treated as a NEW bug in the system; once the test manager reviews it, it is ASSIGNED to the concerned developer/programmer for resolution. The developer then INVESTIGATES the possibilities for resolution and, based on the information provided, takes a call to FIX or DEFER the ticket and assigns it back to the reporter/tester. The tester then validates (retests) the resolved issue in the build, checks the regression scenarios around the fix and, if the bug/failure/error/defect is resolved, moves it into the CLOSED state and then to the archive; otherwise the ticket is RE-OPENED/RE-ASSIGNED back to the developer if the bug still exists in the system. Note that this cycle continues until the defect moves into the Closed state. Finally, check back and push the entire context-driven information to the bug repository so that the trends and risks associated with the release cycle can be identified in case the same bug comes back in a future release cycle.
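The life cycle described above is essentially a small state machine, and can be sketched as a transition table. The state names follow the text; a real tracker's workflow will differ in detail:

```python
# Allowed state changes for the simple life cycle described above.
TRANSITIONS = {
    "NEW": {"ASSIGNED"},
    "ASSIGNED": {"FIXED", "DEFERRED"},
    "FIXED": {"CLOSED", "REOPENED"},
    "DEFERRED": {"ASSIGNED"},
    "REOPENED": {"ASSIGNED"},
    "CLOSED": set(),  # terminal: archived
}

def move(state, new_state):
    """Validate a state change against the workflow above."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A bug that fails its first retest, is reassigned, and finally closes:
state = "NEW"
for step in ("ASSIGNED", "FIXED", "REOPENED", "ASSIGNED", "FIXED", "CLOSED"):
    state = move(state, step)
print(state)  # CLOSED
```

Modeling the workflow this explicitly is what lets a BTS reject nonsensical updates (e.g. closing a bug that was never assigned) and report clean state-transition trends.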
I hope the above info helps testers/QAs who are new to or pursuing a career in software testing. It may also help in identifying trends in bugs and their cycles, to focus on unstable components/environments.


Happy testing :)