Add MultiNode scripts for running tests on multiple adb TCP/IP devices

With these test shells and functions, test implementations can make use
of multiple devices by connecting to remote devices via adb TCP/IP. On
the MultiNode side, a single master role executes the actual test
shell, while multiple instances of a worker role provide remote access
to their local devices over adb TCP/IP. The following steps are
required:

* Start with completely booted and network attached devices.
* workers: share-local-device-over-adb-tcpip.yaml
    - run `adb tcpip` on their local devices and share their IP address
      with the master
* master: connect-to-remote-adb-tcpip-devices.yaml
    - Connects to all devices shared by the workers via `adb connect`
      (while also using its own local USB attached device)
* workers: wait-and-keep-local-device-accessible.yaml
    - Wait for commands from the master -- mostly wait and do nothing
      else, but may reset their devices or network connections to make
      them accessible again if the connection is lost.
* master: remote-adb-devices-smoke-test.yaml
    - Dummy action standing in for the actual test job. A proper test
      job would execute potentially long-running tests on all available
      devices, instruct the workers to reconnect on connection loss, etc.
* master: release-remote-adb-tcpip-devices.yaml
    - Test executions end with the release command, so that workers exit
      their event loop.
* workers: wait-for-release-and-reset.yaml
    - Final synchronization point between the workers and the master,
      so that the master stays in control while the workers shut down.
      Brings the worker devices back into adb USB mode, so that they
      are usable for regular local test jobs.
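The roles above boil down to a handful of plain adb commands. A minimal
sketch -- the function names, the port, and the echo dry-run are
illustrative, not the contents of the actual scripts:

```shell
#!/bin/sh
# Sketch of the adb commands behind the worker/master roles above.
# ADB is overridable so the sequence can be dry-run without a device.
ADB="${ADB:-adb}"
PORT=5555  # hypothetical TCP port; the actual scripts may differ

# Worker side: switch the local device to TCP/IP mode; its IP address
# is then shared with the master via MultiNode messaging.
worker_share_device() {
    "$ADB" tcpip "$PORT"
}

# Master side: attach a remote device shared by a worker.
master_connect() {
    ip="$1"
    "$ADB" connect "${ip}:${PORT}"
}

# Final step: bring a device back to USB mode for regular local jobs.
worker_reset_usb() {
    "$ADB" usb
}

# Dry run: substitute echo for adb to print the generated commands.
ADB=echo
worker_share_device
master_connect 192.168.0.17
worker_reset_usb
```

With `ADB=echo` this only prints the adb invocations (`tcpip 5555`,
`connect 192.168.0.17:5555`, `usb`); with a real device attached and
`ADB=adb`, the same calls perform the actual mode switches.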

Change-Id: I23f22344b9bd758d3898d4345204157cecd7d624
Depends-On: Icd68e9de5a349880c52ec06229cd3f8bcb8eeecc
Signed-off-by: Karsten Tausche <karsten@fairphone.com>
9 files changed
README.md

Test Definitions

The test suites work with and without LAVA. The following two sets of automated tests are supported:

  • automated/linux/
  • automated/android/

For each test case, both the test script and the corresponding test definition file are provided in the same folder and named after the test case. Test scripts are self-contained and work independently. Test definition files in YAML format are provided for test runs with the local test-runner and within LAVA.

Installation

Installing the latest development version:

git clone https://github.com/Linaro/test-definitions
cd ./test-definitions
. ./automated/bin/setenv.sh
pip install -r ${REPO_PATH}/automated/utils/requirements.txt

If the above succeeds, try:

test-runner -h

Running tests

Running a test script

linux

cd ./automated/linux/smoke/
./smoke.sh

Skip package installation:

./smoke.sh -s true

android

cd ./automated/android/dd-wr-speed/
./dd-wr-speed.sh

Specify the serial number when more than one device is connected:

./dd-wr-speed.sh -s "serial_no"

Specify other params:

./dd-wr-speed.sh -i "10" -p "/dev/block/mmcblk1p1"

Using test-runner

single test run

test-runner -d ./automated/linux/smoke/smoke.yaml

skip package install:

test-runner -d ./automated/linux/smoke/smoke.yaml -s

running test plan

Run a set of tests defined in an agenda file:

test-runner -p ./plans/linux-example.yaml

Apply a test plan overlay to skip, amend, or add tests:

test-runner -p ./plans/linux-example.yaml -O test-plan-overlay-example.yaml

Collecting result

Using test script

A test script normally writes its test log and parsed results to its own output directory, e.g.:

automated/linux/smoke/output

Using test-runner

test-runner needs a separate directory outside the repo to store test and result files. The directory defaults to $HOME/output and can be changed with -o <dir>. test-runner converts the test definition file to run.sh and then parses its stdout. Results are saved per test to results.{json,csv}, e.g.:

/root/output/smoke_9879e7fd-a8b6-472d-b266-a20b05d52ed1/result.csv

When using the same output directory for multiple tests, test-runner combines the results from all tests and saves them to ${OUTPUT}/results.{json,csv}, e.g.:

/root/output/result.json
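Once combined, the CSV can be summarized with standard tools. A sketch,
assuming a simple name,test_case_id,result column layout -- the actual
columns may differ, so inspect your own file first:

```shell
#!/bin/sh
# Hypothetical sample standing in for a combined result.csv.
cat > /tmp/sample-result.csv <<'EOF'
name,test_case_id,result
linux-smoke,pwd,pass
linux-smoke,uname,pass
linux-smoke,lscpu,fail
EOF

# Tally results by the third column, skipping the header line.
awk -F, 'NR > 1 { count[$3]++ } END { for (r in count) print r, count[r] }' \
    /tmp/sample-result.csv | sort
```

For the sample above this prints `fail 1` and `pass 2`; pointed at a
real ${OUTPUT} file it gives a quick pass/fail breakdown across all
combined tests.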

Contributing

Please use GitHub for pull requests: https://github.com/Linaro/test-definitions/pulls

https://git.linaro.org/qa/test-definitions.git is a read-only mirror. New changes in the GitHub repository are pushed to the mirror every 10 minutes.

Refer to the test writing guidelines to add or modify tests.

Changes must pass the sanity check, which by default checks the files in the most recent commit:

./sanity-check.sh

To develop locally, there are Dockerfiles in test/ that can be used to simulate target environments. The easiest way to use them is to run test.sh [debian|centos]. test.sh runs validate.py, builds the specified Docker environment, runs plans/linux-example.yaml, and then drops into a bash shell inside the container so that artifacts such as /root/output can be inspected. It is not (yet) a pass/fail test, merely a development helper and validation environment.