The test suites work both with and without LAVA. The following two sets of automated tests are supported:
```
automated/linux/
automated/android/
```
For each test case, both the test script and the corresponding test definition file are provided in the same folder and are named after the test case. Test scripts are self-contained and work independently. Test definition files in YAML format are provided for runs with the local test-runner and within LAVA.
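As a sketch of what such a test definition looks like (the metadata values below are illustrative examples, not copied from a real in-tree file; see the YAML files next to each test script for the authoritative fields):

```yaml
# Illustrative test definition sketch; field values are examples only.
metadata:
    format: Lava-Test Test Definition 1.0
    name: smoke
    description: "Basic smoke test"

run:
    steps:
        - cd ./automated/linux/smoke
        - ./smoke.sh
```

The `run: steps:` list is what test-runner converts into a shell script and executes.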
Installing the latest development version:
```
git clone https://github.com/Linaro/test-definitions
cd ./test-definitions
. ./automated/bin/setenv.sh
pip install -r ${REPO_PATH}/automated/utils/requirements.txt
```
If the above succeeds, try:
```
test-runner -h
```
Run a test script standalone:

```
cd ./automated/linux/smoke/
./smoke.sh
```
Skip package installation:
```
./smoke.sh -s true
```
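Internally, the test scripts route this flag through shared install helpers. A simplified sketch of the pattern (the helper name and messages here are illustrative, not the real library code):

```shell
#!/bin/sh
# Simplified sketch of the skip-install pattern used across these test
# scripts. Illustrative only; the real scripts share helpers from a
# common shell library under automated/lib/.
install_deps() {
    pkgs="$1"
    skip="$2"
    if [ "$skip" = "true" ]; then
        # Assume the packages are already present on the target.
        echo "Skipping installation of: $pkgs"
    else
        echo "Installing: $pkgs"
    fi
}

install_deps "curl git" "false"
install_deps "curl git" "true"
```

Skipping installation is useful on locked-down or offline targets where the dependencies are pre-installed in the image.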
```
cd ./automated/android/dd-wr-speed/
./dd-wr-speed.sh
```
Specify SN when more than one device connected:
```
./dd-wr-speed.sh -s "serial_no"
```
Specify other params:
```
./dd-wr-speed.sh -i "10" -p "/dev/block/mmcblk1p1"
```
Run a single test with test-runner:

```
test-runner -d ./automated/linux/smoke/smoke.yaml
```
Skip package installation:

```
test-runner -d ./automated/linux/smoke/smoke.yaml -s
```
Run a set of tests defined in an agenda file:

```
test-runner -p ./plans/linux-example.yaml
```
Apply test plan overlay to skip, amend or add tests:
```
test-runner -p ./plans/linux-example.yaml -O test-plan-overlay-example.yaml
```
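An overlay file lists the changes to apply on top of the plan. A sketch of its general shape (treat the keys and paths below as illustrative; test-plan-overlay-example.yaml in the repo is the authoritative reference):

```yaml
# Illustrative overlay sketch; check test-plan-overlay-example.yaml in
# the repo for the authoritative schema.
skip:
    - path: automated/linux/ltp/ltp.yaml
amend:
    - path: automated/linux/smoke/smoke.yaml
      parameters:
          SKIP_INSTALL: "True"
add:
    - path: automated/linux/meminfo/meminfo.yaml
```

This lets one shared plan be reused across targets that need slightly different test sets or parameters.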
A test script normally writes its test log and parsed results to its own output directory, e.g. `automated/linux/smoke/output`.
test-runner needs a separate directory outside the repo to store test and result files. The directory defaults to $HOME/output and can be changed with -o <dir>. test-runner converts the test definition file to run.sh, executes it, and parses its stdout. Results are saved per test to result.{json,csv}, e.g.:

```
/root/output/smoke_9879e7fd-a8b6-472d-b266-a20b05d52ed1/result.csv
```
When using the same output directory for multiple tests, test-runner combines the results from all tests and saves them to ${OUTPUT}/result.{json,csv}, e.g.:

```
/root/output/result.json
```
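The combining step can be pictured like this (the JSON fields and directory names below are made up for illustration; the real per-test result files contain more detail):

```shell
#!/bin/sh
# Sketch: merge per-test result.json files into one combined file, as
# test-runner does when several tests share an output directory.
# The JSON schema and directory names are illustrative only.
OUTPUT=$(mktemp -d)

# Two hypothetical per-test output directories.
mkdir -p "$OUTPUT/smoke_1111" "$OUTPUT/dd-wr-speed_2222"
echo '[{"name": "smoke", "result": "pass"}]' > "$OUTPUT/smoke_1111/result.json"
echo '[{"name": "dd-wr-speed", "result": "pass"}]' > "$OUTPUT/dd-wr-speed_2222/result.json"

# Concatenate the JSON arrays from every per-test directory into
# ${OUTPUT}/result.json.
python3 - "$OUTPUT" <<'EOF'
import glob
import json
import os
import sys

output = sys.argv[1]
combined = []
for path in sorted(glob.glob(os.path.join(output, '*', 'result.json'))):
    with open(path) as f:
        combined.extend(json.load(f))
with open(os.path.join(output, 'result.json'), 'w') as f:
    json.dump(combined, f, indent=2)
EOF

cat "$OUTPUT/result.json"
```

Because the combined file aggregates everything in the shared directory, reusing one output directory across runs is an easy way to collect a whole session's results in one place.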
Please use GitHub for pull requests: https://github.com/Linaro/test-definitions/pulls
https://git.linaro.org/qa/test-definitions.git is a read-only mirror. New changes in the GitHub repo are pushed to the mirror every 10 minutes.
Refer to the test writing guidelines to modify or add tests.
Changes need to pass the sanity check, which by default checks the files in the most recent commit:

```
./sanity-check.sh
```
To develop locally, there are Dockerfiles in test/ that can be used to simulate target environments. The easiest way to use them is to run `test.sh [debian|centos]`. test.sh runs validate.py, builds the specified Docker environment, runs plans/linux-example.yaml, and then drops into a bash shell inside the container so that artifacts such as /root/output can be inspected. It is not (yet) a pass/fail test; it is merely a development helper and validation environment.