
CDOT Wiki β

Firefox Performance Testing : A Python framework for Windows

Revision as of 14:12, 24 November 2006 by Mylau (talk | contribs) (Comments on the Documentation)

Project Name

Firefox Performance Testing : A Python framework for Windows


Project Description

The goal of this project is to:

  • get the current framework up and running to help work with others
  • get the framework running in an automated fashion
  • help with the creation and execution of new tests
  • work to upgrade the framework to work with a mozilla graph server
  • work with the mozilla community and contribute to an open source project


From this project, you will:

  • learn python
  • learn about white box testing methodologies
  • work with an open source community
  • more generally learn about the functioning of QA in an open source community


This will benefit you in the future: when presented with a new program, you'll be able to outline how to approach testing it, give adequate coverage, and provide some metric of program stability and functionality.


Note: This is NOT the typical mundane black box testing


Project Leader(s)

Project Contributor(s)

Ben Hearsum (bhearsum)

  • Set up the VM for performance testing
  • Helped with the debugging process for report.py, run_tests.py and ts.py

Tom Aratyn (mystic)

  • Introduced Closures in Python
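For readers new to the idea, a closure is a function that captures variables from its enclosing scope and remembers them between calls. A minimal Python 3 sketch (the Python 2 of the era lacks `nonlocal`, so one would use a mutable container instead):

```python
def make_counter(start=0):
    """Return a function that remembers `count` between calls (a closure)."""
    count = start

    def increment():
        nonlocal count  # rebind the enclosing function's variable
        count += 1
        return count

    return increment

counter = make_counter()
counter()  # -> 1
counter()  # -> 2
```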

Alice Nodelman

  • Discussion on the things that need to be fixed to improve and strengthen the framework

Michael Lau (mylau)

Project Details

Details

This is different from Tinderbox. Two major differences are:

  • First, it doesn't build, it just runs the performance test given a path to the executable. This is helpful if you're testing the performance of an extension or a build from another server. (You could build on a fast server, and then run performance tests on a machine with low memory).
  • Second, it measures performance characteristics while it's running the pageload tests--you can track cpu speed, memory, or any of the other counters listed here.
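The second point, sampling counters while the pageload tests run, can be sketched as a polling loop like the one below. This is not the framework's code: `read_counter` is a hypothetical stand-in for whatever probe reads a Windows performance counter (CPU, memory, ...) for a process, and the usage example just samples elapsed wall-clock time.

```python
import subprocess
import sys
import time

def run_and_sample(cmd, read_counter, interval=0.1):
    """Launch `cmd` and call `read_counter(pid)` repeatedly until it exits.

    `read_counter` stands in for a real performance-counter probe.
    Returns the list of samples collected while the child was running.
    """
    child = subprocess.Popen(cmd)
    samples = []
    while child.poll() is None:      # child still running
        samples.append(read_counter(child.pid))
        time.sleep(interval)
    child.wait()
    return samples

# Example: the "counter" here is just elapsed wall-clock time.
start = time.time()
samples = run_and_sample(
    [sys.executable, "-c", "import time; time.sleep(1)"],
    read_counter=lambda pid: time.time() - start,
)
```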

Progress

Each task below lists its details, priority, contributors, and status.

Performance Testing Setup Configuration Documentation

Details:
  • The current setup configuration documentation is spread across text files and is very hard to follow.
  • From my own experience, I missed a few configuration steps because the documents were scattered and a tad confusing.

Priority: High

Status: 40% completed
  • Improving the current documentation so that it's easier to follow
  • Making sure that all the configuration documents are in one place
  • This is being done along with my code base work
Study performance testing framework

Details: The framework has to be strengthened and improved. A discussion with Alice Nodelman is planned to talk over what could be done to make the framework stronger.

Priority: High

Contributors:
  • Liz Chak
  • Alice Nodelman
    • Discussion on what needs to be done with the framework
  • Ben Hearsum
    • Set up the VM for performance testing
    • Helped with the debugging process for report.py, run_tests.py and ts.py

Status: 100% completed. Tested the framework and went through its code. Made a list of the framework's weaknesses and planned various resolutions.

We have established that the following has to be done:

  • ease configuration of the Python framework
    • too many config files to edit
      • have to know the whole framework to configure it
      • not flexible
    • tedious
    • too many directories to create
    • too many extra libraries to load
      • a lot of dependencies!
    • things have to be copied to special directories
    • bad configurations don't cause errors!
Configuration checker

Details: The configuration checker will verify that all configuration is in place before the performance testing runs. The checker lives in run_tests.py and entails:

  • a yaml file validator
  • a paths.py validator

This can only be done once the yaml file validator and paths.py validator are completed.

Priority: High

Status: 0% completed
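A checker along these lines might aggregate both validators and fail fast with readable messages before any test runs. The function names, key names, and directory lists below are hypothetical placeholders, not the framework's actual configuration:

```python
import os
import sys

def check_yaml_keys(config, required_keys):
    """Return a problem for every required key that is missing or empty.

    `config` is the already-parsed contents of config.yml (a dict).
    """
    return ["config key '%s' is missing or empty" % k
            for k in required_keys
            if config.get(k) in (None, "")]

def check_paths(required_dirs):
    """Return a problem message for every directory that does not exist."""
    return ["directory does not exist: %s" % d
            for d in required_dirs if not os.path.isdir(d)]

def run_checks(config, required_keys, required_dirs):
    """Run both validators; print every problem at once and abort on failure."""
    problems = check_yaml_keys(config, required_keys) + check_paths(required_dirs)
    if problems:
        for p in problems:
            print("configuration error: " + p)
        sys.exit(1)
```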
yaml file validator

Details: In run_tests.py, the yaml file validator is weak. It only checks for certain items in the file and will crash if those items are absent or have no value. It doesn't check for unexpected values and doesn't give the user a clue that their yaml file has a problem.

Priority: High

Status: 90% completed. Changed the validator to check that items exist before storing their values. If one of the items doesn't exist, the program terminates and tells the user that the yaml file has to be fixed.
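The strengthened validator could look roughly like the sketch below, assuming the config file has already been parsed into a dict (e.g. with PyYAML's `yaml.safe_load`). It reports missing items, empty values, and unexpected items instead of crashing; the item names in the test are made up for illustration:

```python
def validate_yaml_items(config, known_items):
    """Validate a parsed config.yml dict against the items run_tests.py expects.

    Flags missing items, empty values, and unexpected (unknown) items so the
    user gets a clue about what is wrong instead of a crash later on.
    Returns a list of human-readable problems; empty means the file is usable.
    """
    problems = []
    for name in known_items:
        if name not in config:
            problems.append("missing item: %s" % name)
        elif config[name] in (None, ""):
            problems.append("item has no value: %s" % name)
    for name in config:
        if name not in known_items:
            problems.append("unexpected item: %s" % name)
    return problems
```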
paths.py validator

Details: Currently run_tests.py doesn't validate the paths.py file. If the user misses a path or misconfigures a directory, the program crashes with this error:

Traceback (most recent call last):
  File "C:\proj\mozilla\testing\performance\win32\run_tests.py", line 129, in ?
    test_file(sys.argv[i])
  File "C:\proj\mozilla\testing\performance\win32\run_tests.py", line 122, in test_file
    TP_RESOLUTION)
  File "C:\proj\mozilla\testing\performance\win32\report.py", line 152, in GenerateReport
    mean = mean / len(ts_times[i])
ZeroDivisionError: integer division or modulo by zero

The following has to be done in run_tests.py to validate the paths.py file:

  • check each path for existence
  • notify the user if a path doesn't exist and ask whether they want it created
  • check whether the directories have contents

Priority: High

Status: 60% completed
  • I have fixed run_tests.py to check whether the user's directories exist on their system, and it prompts them to create the missing ones.
  • I'm currently working on checking whether the following directories exist:
    • extension_perf_reports
      • The graphs and results will be generated in this folder
    • extension_perf_testing directory and base_profile directory
      • There are several levels in the directory. Here is the basic outline of the directory structure:

extension_perf_testing (dir)
    |
    |__ base_profile (dir)
          |
          |__ bookmarkbackups (dir)
          |         |
          |         |__ .html files
          |
          |__ Cache (dir)
          |
          |__ .bak, .html, .ini, .dat, .txt, .rdf, .mfl files
              (most important file - perf.js)

  • I haven't gone through a thorough discussion with Alice on which files should be validated in the base_profile dir. From what I've gathered from our other discussions, a missing perf.js file will crash the program.
  • I have fixed the program to check for the existence of the base_profile dir, and it also checks whether the bookmarkbackups and Cache dirs and the perf.js file exist.
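The existence check and create prompt could look roughly like this sketch. The paths would come from paths.py; the `ask` parameter is injectable purely so the behaviour can be exercised without a live prompt, and is an illustration device rather than the framework's actual interface:

```python
import os

def ensure_dirs(paths, ask=input):
    """Check each path for existence; if one is missing, offer to create it.

    Aborts with a clear message when the user declines, instead of letting
    the test run crash later with an unrelated traceback.
    """
    for path in paths:
        if os.path.isdir(path):
            continue
        answer = ask("%s does not exist. Create it? [y/n] " % path)
        if answer.strip().lower().startswith("y"):
            os.makedirs(path)  # creates intermediate directories too
        else:
            raise SystemExit("cannot continue without %s" % path)
```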
Get all the configuration in one place

Details: The framework is currently very confusing and the configuration is all over the place! This has to be fixed, but it's not the main priority:

  • paths.py, config.yml, constants
  • have to run both ts and tp at the same time

Priority: Medium

Status: 0% completed
Get a class to test out the Performance Testing framework

Details: The improved framework has to be tested to get constructive feedback from the users.

Priority: Low

Status: 0% completed


Comments on the Documentation

Draft ... will be fixed

  • The README.txt file is hard to read in Notepad. There's vertical scrolling when opening the file.
  • The pre-requisites in the README file are missing the amount of disk space needed to complete the install.
  • Following the procedures for installation was difficult. The document should number the procedures and include sample outputs so the user can confirm they are on the right track.
  • Some of the procedures are not
  • Not sure what to do at the beginning
  • config.yaml
  • base profile
  • should have samples
  • The README file is hard to read. Initially I opened it with Notepad and there was horizontal scrolling.

Project News

Saturday, September 23, 2006

Performance tests didn't run successfully.

  • There weren't any results generated in the extension_perf_testing\base_profile and extension_perf_reports folders.
  • Output after the performance tests were run:
Traceback (most recent call last):
  File "C:\proj\mozilla\testing\performance\win32\run_tests.py", line 129, in ?
    test_file(sys.argv[i])
  File "C:\proj\mozilla\testing\performance\win32\run_tests.py", line 122, in test_file
    TP_RESOLUTION)
  File "C:\proj\mozilla\testing\performance\win32\report.py", line 152, in GenerateReport
    mean = mean / len(ts_times[i])
ZeroDivisionError: integer division or modulo by zero

Sunday, September 24, 2006

Understand further the approach to testing with the Python framework


Monday, September 25, 2006

elichak will be working on a resolution with alice to get the results generated in the extension_perf_testing\base_profile and extension_perf_reports folders.


Friday, September 29, 2006

elichak re-configured the environment of the machine to run the tests again. Cleaned up old files to do a clean test. Reinstalled Cygwin (replaced Make 3.80 with Make 3.81) and updated the testing files through CVS.


Sunday, October 1, 2006

Alice has successfully run the tests. The Zero Division Error didn't occur again after she updated her test files, and results were generated in the extension_perf_testing\base_profile and extension_perf_reports folders. elichak attempted to run the test with Alice's code, but the Zero Division Error still occurred on her machine.


Wednesday, October 4, 2006

Elichak consulted Robcee about the Zero Division Error and he suggested a few things, like debugging the script. Elichak found out that the value of ts_time in the report.py file is empty but couldn't find out why the value of ts_time isn't assigned. According to alice, she didn't debug the scripts and only had to update the files to make them work.


Friday, October 6, 2006

Ben set up the VM for elichak to run her performance testing in that environment.


Wednesday, October 11, 2006

  • elichak configured the environment in the VM for her testing. The tests still gave the same results as before:
  • Zero Division Error at lines 122 and 129 in run_tests.py and line 153 in report.py
  • 2 files in the extension_perf_reports dir are generated but there are no graphs
  • elichak also changed the TS_NUM_RUNS, TP_NUM_CYCLES, TP_RESOLUTION values to 1 in run_tests.py to shorten the cycles of the performance testing for the purpose of debugging the scripts.
  • The error occurs in report.py because ts_time is empty, therefore, this fails:
for ts_time in ts_times[i]:
  mean += float(ts_time)
mean = mean / len(ts_times[i])
  • We speculate that what prevents ts_time values from being generated is in ffprocess.py: RunProcessAndWaitForOutput always returns None at line 232
    return (None, True)
  • Further debugging by elichak is in process
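Independently of the root cause, the failing division in report.py could be guarded so that an empty sample list produces a clear signal instead of a crash. A minimal sketch (the function name is ours, not the framework's):

```python
def safe_mean(times):
    """Mean of a list of timing samples, or None when no samples exist.

    Guards the `mean = mean / len(ts_times[i])` pattern from report.py:
    an empty list no longer raises ZeroDivisionError.
    """
    if not times:
        return None  # caller can warn the user that no samples were collected
    total = 0.0
    for t in times:
        total += float(t)  # samples may arrive as strings from the log
    return total / len(times)
```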


Thursday, October 12, 2006

Work completed

The Zero Division Error is solved. It turns out it was just a configuration problem: the environment-setup documentation glosses over this step and needs a rework.

Solution

Contents in the C:\proj\mozilla\testing\performance\win32\base_profile should also be in C:\extension_perf_testing\base_profile dir.
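A sketch of automating that copy (the two paths are the ones quoted above; `shutil.copytree` refuses to overwrite an existing destination, so a stale copy is removed first):

```python
import os
import shutil

# Paths from the project notes above.
SRC = r"C:\proj\mozilla\testing\performance\win32\base_profile"
DST = r"C:\extension_perf_testing\base_profile"

def mirror_profile(src=SRC, dst=DST):
    """Copy the checked-out base_profile into the directory the tests read."""
    if os.path.isdir(dst):
        shutil.rmtree(dst)      # start from a clean copy
    shutil.copytree(src, dst)   # copies bookmarkbackups, Cache, perf.js, ...
```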

All work for this project is done on the VM, hera.senecac.on.ca

Work in progress

elichak

  • Trying out a few things in the framework to find out which direction I would like to take: building new tests, improving existing ones, strengthening the framework itself, or porting it to other OSes
  • Revise the Firefox Performance Testing documentation

Friday, 20 Oct 2006

Last week, elichak decided to work on automating the setup of the environment and the performance testing. The environment and performance-testing setup is currently all over the place and is tedious for a developer to complete.

The automation will entail:

Generating directories, dropping files in directories, installation of libraries, options to configure the performance testings etc.
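A first cut at the directory-generation part of that automation might look like this; the layout list mirrors the directory outline in the progress chart and is an assumption for illustration, not the framework's own setup code:

```python
import os

# Directory layout taken from the progress chart on this page.
LAYOUT = [
    r"C:\extension_perf_reports",
    r"C:\extension_perf_testing\base_profile\bookmarkbackups",
    r"C:\extension_perf_testing\base_profile\Cache",
]

def create_layout(dirs=LAYOUT):
    """Create every directory the framework expects, parents included."""
    for d in dirs:
        if not os.path.isdir(d):
            os.makedirs(d)
```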

Tuesday, 31 Oct 2006

Alice and Liz had a meeting and established the key things that need to be done:

  • ease configuration of the Python framework
    • too many config files to edit
      • have to know the whole framework to configure it
      • not flexible
    • tedious
    • too many directories to create
    • too many extra libraries to load
      • a lot of dependencies!
    • things have to be copied to special directories
    • bad configurations don't cause errors!

How do we fix this?

  • configuration checker
    • yaml file validator
    • paths.py validator
      • checking the paths for existence
      • notifying the user if a path doesn't exist and asking whether they want it created
      • checking if the directories have contents
  • next steps
    • get all the configuration in one place!
      • paths.py, config.yml, constants
      • have to run both ts and tp at the same time

Tuesday, 21 Nov 2006

Refer to the Performance Testing Framework progress chart above.

Project References

Project Events

Bon Echo Community Test Day

Friday, October 06, 2006, from 7am - 5pm PDT
Mozilla QA Community:BonEcho 2.0RC1 prerelease Community Test Day