=======================
Test Writing Guidelines
=======================

This document describes guidelines and is intended for anyone who wants to write
or modify a test case. It's not a definitive guide and it's not, by any means, a
substitute for common sense.

General Rules
=============

1. Simplicity
-------------

It's worth keeping test cases as simple as possible.

2. Code duplication
-------------------

Whenever you are about to copy a large part of the code from one test case to
another, consider whether it is possible to move it into a library to reduce
code duplication and the cost of maintenance.

3. Coding style
---------------

Use common sense and BE CONSISTENT.

If you are editing code, take a few minutes to look at the code around you and
determine its style.

The point of having style guidelines is to have a common vocabulary of coding so
people can concentrate on what you are saying, rather than on how you are saying
it.

3.1 Shell coding style
~~~~~~~~~~~~~~~~~~~~~~
When writing test cases in shell, write in *portable shell* only.

You can either run the test cases on Debian, which has '/bin/sh' pointing to
'dash' by default, or install 'dash' on your favorite distribution and use it
to run the tests.
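
For instance, a minimal sketch of two common bashisms and their portable
equivalents (the variable and package names are illustrative only)::

    #!/bin/sh
    status="pass"
    # Portable: use '=' instead of the bash-only '==' in tests.
    if [ "${status}" = "pass" ]; then
        echo "test passed"
    fi

    # Portable: POSIX sh has no arrays; use positional parameters instead.
    set -- "lsb-release" "curl"
    for pkg in "$@"; do
        echo "installing ${pkg}"
    done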

Ref: `Shell Style Guide <https://google.github.io/styleguide/shell.xml>`_

3.2 Python coding style
~~~~~~~~~~~~~~~~~~~~~~~
Please follow the PEP 8 style guide whenever possible.

Ref: `PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_
An easy-to-read version of PEP 8 is available at `pep8.org <http://pep8.org>`_

4. Commenting code
------------------

Use useful comments in your program to explain:

 * assumptions
 * important decisions
 * important details
 * problems you're trying to solve
 * problems you're trying to overcome in your program, etc.

Code tells you how; comments should tell you why.

5. License
----------
Code contributed to this repository should be licensed under GPLv2+ (GNU GPL
version 2 or any later version).

Writing a test case
===================

Linux
-----

1. Structure
~~~~~~~~~~~~

Tests are generally placed under the 'linux/' directory. Everything related to
a test goes in the same folder, named after the test case.

Define a 'linux/test-case-name/output' folder in the test case to save test
output and results. Using a dedicated folder helps distinguish the test script
from the test output.
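
As a hypothetical illustration, a test case named 'smoke' might be laid out as
follows::

    linux/
        smoke/
            smoke.sh          # test script
            smoke.yaml        # LAVA test definition
            output/           # created at run time
                result.txt    # parsed results
                smoke.log     # raw test log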

2. Installing dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~

The same test case should support Debian/Ubuntu, Fedora/CentOS and OE builds.

When using a package management tool such as apt or yum/dnf to install
dependencies, package names may vary across the distributions you want to
support, so you will need to define the dependent packages per distribution.
The dist_name and install_deps functions provided in 'lib/sh-test-lib' can be
used to detect the distribution at run time and handle package installation,
respectively.

On OSes built with OpenEmbedded that don't support installing additional
packages, even compiling and installing from source is impossible when a
toolchain isn't available. The required dependencies should be pre-installed.
To run a test case that contains install steps on this kind of OS:

 * Define a 'SKIP_INSTALL' variable with 'False' as the default.
 * Add a parameter '-s <True|False>' so that users can modify 'SKIP_INSTALL';
   see the parsing sketch after the example below.
 * Use "install_deps ${pkgs} ${SKIP_INSTALL}" to install packages. It checks
   the value of 'SKIP_INSTALL' to determine whether to skip the installation.
 * When you have customized install steps defined, such as downloading code,
   compiling and installing, you will need to do the check yourself.

An example::

    dist_name
    case "${dist}" in
        Debian|Ubuntu) pkgs="lsb-release" ;;
        Fedora|CentOS) pkgs="redhat-lsb-core" ;;
    esac
    install_deps "${pkgs}" "${SKIP_INSTALL}"
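
And a minimal sketch of the '-s' parameter handling, using POSIX getopts (the
usage message is illustrative)::

    #!/bin/sh
    # Do not skip installation unless the user asks for it.
    SKIP_INSTALL="False"

    while getopts "s:" opt; do
        case "${opt}" in
            # e.g. "-s True" on an OE build without a package manager.
            s) SKIP_INSTALL="${OPTARG}" ;;
            *) echo "Usage: $0 [-s <True|False>]"; exit 1 ;;
        esac
    done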

3. Saving output
~~~~~~~~~~~~~~~~

The 'test-case-name/output' directory is the recommended place to save test
logs and result files.
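
For instance, a test script could set up its output directory near the top;
the variable names here are a common convention, not a requirement::

    OUTPUT="$(pwd)/output"
    RESULT_FILE="${OUTPUT}/result.txt"
    # Start each run from a clean output directory.
    rm -rf "${OUTPUT}"
    mkdir -p "${OUTPUT}"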

4. Parsing results
~~~~~~~~~~~~~~~~~~

Saving parsed results in a consistent format is important for post-processing,
such as sending them to LAVA. The following result format should be followed::

    test-case-id pass/fail/skip
    test-case-id pass/fail/skip measurement units
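
For example, a hypothetical result.txt covering one functional check and one
measurement might contain::

    kernel-config-check pass
    dd-write-speed pass 128.5 MB/s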

The 'output/result.txt' file is the recommended place to save results.

We encourage test writers to use the functions defined in 'sh-test-lib' to
format test results.

Print "test-case pass/fail" by checking the exit code of the last command::

    check_return "${test_case_id}"

Add a metric for a performance test::

    add_metric "${test_case_id}" "pass/fail/skip" "${measurement}" "${units}"
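
As a rough end-to-end sketch, with 'sh-test-lib' already sourced (the 'dd'
invocation and the output parsing are illustrative only)::

    # Run the command under test; check_return reports pass/fail based on
    # the exit code of the last command.
    output="$(dd if=/dev/zero of=/tmp/test.img bs=1M count=128 2>&1)"
    check_return "dd-write"

    # Extract the throughput figure and record it as a metric.
    speed="$(echo "${output}" | grep -oE '[0-9.]+ MB/s' | cut -d' ' -f1)"
    add_metric "dd-write-speed" "pass" "${speed}" "MB/s"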

5. Running in LAVA
~~~~~~~~~~~~~~~~~~

LAVA is the foundation of test automation in Linaro. It handles image
deployment and boot, and provides a test shell for the test run. To run a test
case in LAVA, a test definition file in YAML format is required.

Bear in mind: do all LAVA-specific steps in the test definition file, and do
not use any LAVA-specific steps in the test script; otherwise you may lock
yourself out of your own test case when LAVA isn't available or the board you
want to test isn't deployed in LAVA.

The test script should handle dependency installation, test execution, result
parsing and other work in a self-contained way, and produce a result.txt file
in a format that can be easily parsed and sent to LAVA. This is the more
robust way: the test case works with or without LAVA and can be tested
locally.

A general test definition file should contain the below keywords and steps::

    metadata:
        # Test case metadata such as name, format and description.
    # Define parameters required by the test case with default values.
    params:
        SKIP_INSTALL: False
    run:
        # A typical test run in LAVA requires the below steps.
        steps:
            # Enter the directory of the test case.
            - cd ./automated/linux/smoke/
            # Run the test.
            - ./smoke.sh -s "${SKIP_INSTALL}"
            # Send the results in result.txt to LAVA.
            - ../../utils/send-to-lava.sh ./output/result.txt

Android specific
----------------

The above test writing guidelines also apply to Android test cases. The major
difference is that we run all Android test cases through adb shell. Compared
with a local run, adb and adb shell enable us to do more, and this model is
well supported by the LAVA V2 LXC protocol.

A typical Android test case can be written with the following steps::

    # Check the adb connection with the initialize_adb function.
    initialize_adb
    # Install binaries and scripts.
    detect_abi
    install "../../bin/${abi}/busybox"
    install "./device-script.sh"
    # Run the test script through adb shell.
    adb -s "${SN}" shell device-script.sh
    # Pull output from the device for parsing.
    pull_output "${DEVICE_OUTPUT}" "${HOST_OUTPUT}"

Test Contribution Checklist
===========================

* When applicable, check test cases with the following tools, with the line
  length rule relaxed (see the example invocations at the end of this
  checklist).

  - checkbashisms - check for bashisms in /bin/sh scripts.
  - shellcheck - shell script analysis tool.
  - pep8 - check Python code against the style conventions in PEP 8.
  - pyflakes - simple Python 2 source checker.
  - pylint - code analysis for Python.

* Run test cases on a local system without LAVA.
* Optionally, run test cases in LAVA and provide a job example.
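
As an illustration, the style checks above might be invoked as follows; the
file paths are hypothetical, and the line-length flags relax the default
limits::

    checkbashisms ./linux/smoke/smoke.sh
    shellcheck ./linux/smoke/smoke.sh
    pep8 --max-line-length=120 ./path/to/script.py
    pyflakes ./path/to/script.py
    pylint --max-line-length=120 ./path/to/script.py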