=======================
Test Writing Guidelines
=======================

This document describes guidelines and is intended for anybody who wants to
write or modify a test case. It is not a definitive guide and it is not, by any
means, a substitute for common sense.

General Rules
=============

1. Simplicity
-------------

It is worth keeping test cases as simple as possible.

2. Code duplication
-------------------

Whenever you are about to copy a large part of the code from one test case to
another, consider whether it is possible to move it into a library to reduce
code duplication and the cost of maintenance.

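As an illustrative sketch (the file and function names below are hypothetical,
not part of this repository), shared helpers can live in a single library file
that every test case sources instead of copying the code around:

```shell
#!/bin/sh
# Create a small shared library (in a real repository it would already exist).
mkdir -p ./lib
cat > ./lib/demo-lib.sh <<'EOF'
info_msg() {
    echo "INFO: $*"
}
EOF

# A test case sources the library instead of duplicating the function.
. ./lib/demo-lib.sh
info_msg "helper reused from library"
```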
3. Coding style
---------------

Use common sense and BE CONSISTENT.

If you are editing code, take a few minutes to look at the code around you and
determine its style.

The point of having style guidelines is to have a common vocabulary of coding so
people can concentrate on what you are saying, rather than on how you are saying
it.

3.1 Shell coding style
~~~~~~~~~~~~~~~~~~~~~~
When writing test cases in shell, write in *portable shell* only.

You can either run the test cases on Debian, which has '/bin/sh' pointing to
'dash' by default, or install 'dash' on your favorite distribution and use it
to run the tests.

Ref: `Shell Style Guide <https://google.github.io/styleguide/shell.xml>`_

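For instance, the snippet below uses only POSIX constructs (a single '=' inside
'[ ]' rather than bash's '==' and '[[ ]]') and behaves the same under dash and
bash:

```shell
#!/bin/sh
status="pass"
# POSIX string comparison: '[' with '=' works in dash, bash and busybox sh.
if [ "${status}" = "pass" ]; then
    echo "ok"
fi
```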
3.2 Python coding style
~~~~~~~~~~~~~~~~~~~~~~~
Please follow the PEP 8 style guide whenever possible.

Ref: `PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_
An easy-to-read version of PEP 8 is available at `pep8.org <http://pep8.org>`_

4. Commenting code
------------------

Use useful comments in your program to explain:

* assumptions
* important decisions
* important details
* problems you're trying to solve
* problems you're trying to overcome in your program, etc.

Code tells you how, comments should tell you why.

5. License
----------
Code contributed to this repository should be licensed under GPLv2+ (GNU GPL
version 2 or any later version).

Writing a test case
===================

Linux
-----

1. Structure
~~~~~~~~~~~~

Tests are generally placed under the 'linux/' directory. Everything related to
a test goes into a single folder named after the test case.

Define a 'linux/test-case-name/output' folder in the test case to save test
output and results. Using a dedicated folder makes it easy to distinguish
between test scripts and test output.

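A minimal sketch of that layout (the test name 'smoke' here is only an
example):

```shell
#!/bin/sh
# Keep everything the test produces in a dedicated 'output' directory
# next to the test script, separate from the script itself.
OUTPUT="./output"
mkdir -p "${OUTPUT}"
echo "smoke pass" > "${OUTPUT}/result.txt"
```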
2. Installing dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~

The same test case should support Debian/Ubuntu, Fedora/CentOS and OE builds.

When using a package management tool like apt or yum/dnf to install
dependencies, package names may vary between the distributions you want to
support, so you will need to define the dependent packages per distribution.
The dist_name and install_deps functions provided in 'lib/sh-test-lib' can be
used to detect the distribution at run time and handle package installation
accordingly.

On OSes built with OpenEmbedded that don't support installing additional
packages, even compiling and installing from source is impossible when a tool
chain isn't available; the required dependencies should be pre-installed. To
run a test case that contains install steps on this kind of OS:

* Define a 'SKIP_INSTALL' variable with 'False' as the default.
* Add a parameter '-s <True|False>', so that the user can modify
  'SKIP_INSTALL'.
* Use "install_deps ${pkgs} ${SKIP_INSTALL}" to install packages. It checks
  the value of 'SKIP_INSTALL' to determine whether to skip the installation.
* When you have customized install steps defined, such as code download,
  compilation and installation, you will need to do the check yourself.

An example::

    dist_name
    case "${dist}" in
      Debian|Ubuntu) pkgs="lsb-release" ;;
      Fedora|CentOS) pkgs="redhat-lsb-core" ;;
    esac
    install_deps "${pkgs}" "${SKIP_INSTALL}"

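When the install steps are customized, the 'SKIP_INSTALL' check has to be done
by hand. A sketch, assuming the '-s <True|False>' parameter convention
described above (the install commands are placeholders, not real steps):

```shell
#!/bin/sh
SKIP_INSTALL="False"
# Let the user override SKIP_INSTALL with '-s True' or '-s False'.
while getopts "s:" opt; do
    case "${opt}" in
        s) SKIP_INSTALL="${OPTARG}" ;;
        *) echo "Usage: $0 [-s <True|False>]"; exit 1 ;;
    esac
done

if [ "${SKIP_INSTALL}" = "True" ] || [ "${SKIP_INSTALL}" = "true" ]; then
    echo "Skipping customized install steps"
else
    # Placeholder for the customized steps: download, compile and install.
    echo "Running customized install steps"
fi
```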
3. Saving output
~~~~~~~~~~~~~~~~

The 'test-case-name/output' directory is recommended for saving test logs and
result files.

4. Parsing results
~~~~~~~~~~~~~~~~~~

Saving parsed results in a consistent format is important for post-processing,
such as sending them to LAVA. The following result format should be followed::

    test-case-id pass/fail/skip
    test-case-id pass/fail/skip measurement units

The 'output/result.txt' file is recommended for saving results.

We encourage test writers to use the functions defined in 'sh-test-lib' to
format test results.

Print "test-case-id pass/fail" by checking the exit code::

    check_return "${test_case_id}"

Add a metric for a performance test::

    add_metric "${test_case_id}" "pass/fail/skip" "${measurement}" "${units}"


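If the library helpers are not used, results can also be written by hand in the
same format (the test names and the measurement below are invented for
illustration):

```shell
#!/bin/sh
mkdir -p ./output
RESULT_FILE="./output/result.txt"
# Functional result: test-case-id pass/fail/skip
echo "smoke-test pass" >> "${RESULT_FILE}"
# Performance result: test-case-id pass/fail/skip measurement units
echo "dd-write pass 104.85 MB/s" >> "${RESULT_FILE}"
```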
5. Running in LAVA
~~~~~~~~~~~~~~~~~~

LAVA is the foundation of test automation in Linaro. It is able to handle image
deployment and boot, and provides a test shell for the test run. To run a test
case in LAVA, a definition file in YAML format is required.

Bear in mind: do all LAVA-specific steps in the test definition file, and do
not use any LAVA-specific steps in the test script; otherwise you may lock
yourself out of your own test case when LAVA isn't available or when the board
you want to test isn't deployed in LAVA.

The test script should handle dependency installation, test execution, result
parsing and other work in a self-contained way, and produce a result.txt file
in a format that can be easily parsed and sent to LAVA. This approach is more
robust: the test case works with or without LAVA and can be tested locally.

A general test definition file should contain the below keywords and steps::

    metadata:
    # Define parameters required by the test case, with default values.
    params:
      SKIP_INSTALL: False
    run:
      # A typical test run in LAVA requires the below steps.
      steps:
        # Enter the directory of the test case.
        - cd ./automated/linux/smoke/
        # Run the test.
        - ./smoke.sh -s "${SKIP_INSTALL}"
        # Send the results in result.txt to LAVA.
        - ../../utils/send-to-lava.sh ./output/result.txt

Android specific
----------------

The above test writing guidelines also apply to Android test cases. The major
difference is that all Android test cases run through adb shell. Compared with
a local run, adb and adb shell enable us to do more, and this model is well
supported by the LAVA V2 LXC protocol.

A typical Android test case can be written with the following steps::

    # Check the adb connection with the initialize_adb function.
    initialize_adb
    # Install binaries and scripts.
    detect_abi
    install "../../bin/${abi}/busybox"
    install "./device-script.sh"
    # Run the test script through adb shell.
    adb -s "${SN}" shell device-script.sh
    # Pull output from the device for parsing.
    pull_output "${DEVICE_OUTPUT}" "${HOST_OUTPUT}"

Test Contribution Checklist
===========================

* When applicable, check test cases with the following tools, with the
  line-length rule relaxed.

  - checkbashisms - check for bashisms in /bin/sh scripts.
  - shellcheck - shell script analysis tool.
  - pep8 - check Python code against the style conventions in PEP 8.
  - pyflakes - simple Python 2 source checker.
  - pylint - code analysis for Python.

* Run test cases on a local system without LAVA.
* Optionally, run test cases in LAVA and provide a job example.