=======================
Test Writing Guidelines
=======================

This document describes guidelines and is intended for anyone who wants to
write or modify a test case. It is not a definitive guide and it is not, by any
means, a substitute for common sense.
General Rules
=============

1. Simplicity
-------------

It is worth keeping test cases as simple as possible.

2. Code duplication
-------------------

Whenever you are about to copy a large part of the code from one test case to
another, consider whether it can be moved to a library to reduce code
duplication and the cost of maintenance.

3. Coding style
---------------

Use common sense and BE CONSISTENT.

If you are editing code, take a few minutes to look at the code around you and
determine its style.

The point of having style guidelines is to have a common vocabulary of coding so
people can concentrate on what you are saying, rather than on how you are saying
it.

3.1 Shell coding style
~~~~~~~~~~~~~~~~~~~~~~
When writing test cases in shell, write in *portable shell* only.

You can either run the test cases on Debian, which has '/bin/sh' pointing to
'dash' by default, or install 'dash' on your favorite distribution and use it
to run the tests.

Ref: `Shell Style Guide <https://google.github.io/styleguide/shell.xml>`_
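
For a concrete sketch of what "portable shell" means, the snippet below avoids
two common bashisms ('==' string comparison and arrays) and runs unchanged
under both dash and bash:

```shell
#!/bin/sh
# Portable: use '=' (not the bashism '==') for string comparison.
shell_name="dash"
if [ "${shell_name}" = "dash" ]; then
    echo "comparison ok"
fi

# Portable: POSIX sh has no arrays; reuse the positional parameters.
set -- one two three
echo "args: $#"
for arg; do
    echo "arg: ${arg}"
done
```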

3.2 Python coding style
~~~~~~~~~~~~~~~~~~~~~~~
Please follow the PEP 8 style guide whenever possible.

Ref: `PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_
An easy-to-read version of PEP 8 is available at `pep8.org <http://pep8.org>`_

4. Commenting code
------------------

Use useful comments in your program to explain:

 * assumptions
 * important decisions
 * important details
 * problems you're trying to solve
 * problems you're trying to overcome in your program, etc.

Code tells you how; comments should tell you why.

5. License
----------
Code contributed to this repository should be licensed under GPLv2+ (GNU GPL
version 2 or any later version).

Writing a test case
===================

Linux
-----

1. Structure
~~~~~~~~~~~~

Tests are generally placed under the 'linux/' directory. Everything related to
a test goes under a folder named after the test case.

Define a 'linux/test-case-name/output' folder in the test case to save test
output and results. Using a dedicated folder helps distinguish the test script
from the test output.

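For a hypothetical test case named 'smoke', the layout would look like this
(file names are illustrative)::

    linux/smoke/
        smoke.sh        # test script
        smoke.yaml      # LAVA test definition
        output/         # created at run time
            result.txt  # parsed results
            smoke.log   # raw test log
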
2. Installing dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~

The same test case should work on Debian/Ubuntu, Fedora/CentOS and OE
distributions whenever possible. This can be achieved with the install_deps
function. The following is a simple example. "${SKIP_INSTALL}" should be set to
'true' on distributions that are not supported by install_deps. Even on
supported distributions, install_deps will skip package installation when
"${SKIP_INSTALL}" is 'true'.

Example 1::

    install_deps "${pkgs}" "${SKIP_INSTALL}"

Package names may vary by distribution. In that case, you will need to handle
package installation with separate lines. The dist_name function detects the
distribution ID at run time so that you can define package names per
distribution. Refer to the following example.

Example 2::

    dist_name
    case "${dist}" in
      Debian|Ubuntu) install_deps "lsb-release" "${SKIP_INSTALL}" ;;
      Fedora|CentOS) install_deps "redhat-lsb-core" "${SKIP_INSTALL}" ;;
      Unknown) warn_msg "Unsupported distro: package install skipped" ;;
    esac

Apart from automated package installation, you may also need to download and
install software manually. If you want to make these steps skippable as well,
here is an example.

Example 3::

    if [ "${SKIP_INSTALL}" != "true" ] && [ "${SKIP_INSTALL}" != "True" ]; then
        dist_name
        case "${dist}" in
          Debian|Ubuntu) install_deps "${pkgs}" ;;
          Fedora|CentOS) install_deps "${pkgs}" ;;
          Unknown) warn_msg "Unsupported distro: package install skipped" ;;
        esac

        # Manual installation steps.
        git clone "${repo}"
        cd "${dir}"
        ./configure && make install
    fi

Hopefully, the above three examples cover most use cases. When writing a test
case, in general:

 * Define a 'SKIP_INSTALL' variable with 'false' as the default.
 * Add a '-s <True|False>' parameter so that users can set 'SKIP_INSTALL'.
 * Try to use the above functions, and give unknown distributions extra care.
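
The 'SKIP_INSTALL' plumbing above can be sketched with POSIX getopts (a minimal
sketch; the option letter and default follow the conventions described here):

```shell
#!/bin/sh
# Default to installing dependencies; '-s True' lets the user skip.
SKIP_INSTALL="false"

while getopts "s:" opt; do
    case "${opt}" in
        s) SKIP_INSTALL="${OPTARG}" ;;
        *) echo "Usage: $0 [-s <True|False>]" >&2
           exit 1 ;;
    esac
done

echo "SKIP_INSTALL=${SKIP_INSTALL}"
```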

3. Saving output
~~~~~~~~~~~~~~~~

The 'test-case-name/output' directory is recommended for saving test logs and
result files.

4. Parsing result
~~~~~~~~~~~~~~~~~

Saving parsed results in a uniform format is important for post-processing,
such as sending them to LAVA. The following result format should be followed::

    test-case-id pass/fail/skip
    test-case-id pass/fail/skip measurement units

The 'output/result.txt' file is recommended for saving results.

We encourage test writers to use the functions defined in 'sh-test-lib' to
format test results.

Print "test-case pass/fail" by checking the exit code::

    check_return "${test_case_id}"

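The behaviour of such a helper can be approximated as below (a simplified
stand-in for illustration only, not the actual sh-test-lib implementation):

```shell
#!/bin/sh
# Simplified stand-in for a check_return-style helper: report
# "test-id pass" or "test-id fail" from the exit code of the
# previous command.
report_result() {
    if [ "$?" -eq 0 ]; then
        echo "$1 pass"
    else
        echo "$1 fail"
    fi
}

true
report_result "example-a"    # prints "example-a pass"
false
report_result "example-b"    # prints "example-b fail"
```
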
Add a metric for a performance test::

    add_metric "${test_case_id}" "pass/fail/skip" "${measurement}" "${units}"

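With these helpers, 'output/result.txt' ends up containing lines such as the
following (test ids and values are hypothetical)::

    linux-smoke pass
    dd-write-speed pass 42.5 MB/s
    optional-feature skip
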
5. Running in LAVA
~~~~~~~~~~~~~~~~~~

LAVA is the foundation of test automation at Linaro. It is able to handle image
deployment and boot, and provides a test shell for test runs. To run a test
case in LAVA, a definition file in YAML format is required.

Bear in mind: do all the LAVA-specific steps in the test definition file, and do
not use any LAVA-specific steps in the test script; otherwise you may lock
yourself out of your own test case when LAVA isn't available or the board you
want to test isn't deployed in LAVA.

The test script should handle dependency installation, test execution, result
parsing and other work in a self-contained way, and produce a result.txt file in
a format that can be easily parsed and sent to LAVA. This approach is more
robust: the test case works with or without LAVA and can be tested locally.
187
188A general test definition file should contain the below keywords and steps::
189
190 metadata:
191 # Define parameters required by test case with default values.
192 params:
193 SKIP_INSTALL: False
194 run:
195 # A typical test run in LAVA requires the below steps.
196 steps:
197 # Enter the directory of the test case.
198 - cd ./automated/linux/smoke/
199 # Run the test.
200 - ./smoke.sh -s "${SKIP_INSTALL}"
201 # Send the results in result.txt to LAVA.
202 - ../../utils/send-to-lava.sh ./output/result.txt
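
To give a rough idea of what the last step does: a send-to-lava.sh-style helper
essentially maps each result.txt line onto LAVA's test shell API
('lava-test-case'). The sketch below is illustrative rather than the actual
script; it inlines sample data and echoes the commands instead of running them:

```shell
#!/bin/sh
# Inline some sample result.txt content so the sketch is self-contained.
result_file=$(mktemp)
printf '%s\n' \
    'smoke-test pass' \
    'dd-write pass 42.5 MB/s' > "${result_file}"

# Map "test-id result [measurement units]" lines to lava-test-case
# invocations; 'echo' stands in for running the real command.
lava_calls=$(while read -r name result measurement units; do
    if [ -n "${measurement}" ]; then
        echo lava-test-case "${name}" --result "${result}" \
            --measurement "${measurement}" --units "${units}"
    else
        echo lava-test-case "${name}" --result "${result}"
    fi
done < "${result_file}")

echo "${lava_calls}"
rm -f "${result_file}"
```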

Android specific
----------------

The above test writing guidelines also apply to Android test cases. The major
difference is that we run all Android test cases through adb shell. Compared
with local runs, adb and adb shell enable us to do more, and this model is well
supported by the LAVA V2 LXC protocol.

A typical Android test case can be written with the following steps::

    # Check the adb connection with the initialize_adb function.
    initialize_adb
    # Install binaries and scripts.
    detect_abi
    install "../../bin/${abi}/busybox"
    install "./device-script.sh"
    # Run the test script through adb shell.
    adb -s "${SN}" shell device-script.sh
    # Pull output from the device for parsing.
    pull_output "${DEVICE_OUTPUT}" "${HOST_OUTPUT}"

Test Contribution Checklist
===========================

* When applicable, check test cases with the following tools, with the line
  length rule relaxed:

  - checkbashisms - check for bashisms in /bin/sh scripts.
  - shellcheck - a shell script analysis tool.
  - pep8 - check Python code against the style conventions in PEP 8.
  - pyflakes - a simple Python source checker.
  - pylint - a code analysis tool for Python.

* Run test cases on a local system without LAVA.
* Optionally, run test cases in LAVA and provide a job example.