automated: linux: add tcpreplay testing framework

This patch introduces an automated testing setup for tcpreplay-based traffic
replay and analysis. The system consists of:

1. `tcpreplay.py`: A test runner that:
   - Creates and configures a TAP interface
   - Replays PCAP files using `tcpreplay`
   - Tracks results and produces a summary for each test
   - Handles both expected and unexpected results (e.g., xfail -> pass)

2. `generate_pcap.py`: A Scapy-based script to:
   - Generate a suite of PCAP files for functional and edge case testing
   - Include both valid packets and malformed/false-positive scenarios
   - Provide coverage for multiple protocols (TCP, UDP, ICMP, DNS, etc.)
   - Simulate problematic flows like fragmented packets and invalid flags
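The runner's TAP setup and replay steps can be sketched as command builders (a minimal sketch; `tap0`, the helper names, and the sample PCAP path are illustrative assumptions, not code taken from `tcpreplay.py`):

```python
import subprocess

def tap_setup_cmds(tap: str = "tap0"):
    """Commands to create and bring up a TAP interface (requires root)."""
    return [
        ["ip", "tuntap", "add", "dev", tap, "mode", "tap"],
        ["ip", "link", "set", tap, "up"],
    ]

def replay_cmd(pcap: str, tap: str = "tap0"):
    """Command line to replay a capture onto the TAP interface."""
    return ["tcpreplay", "--intf1", tap, pcap]

def run_test(pcap: str, tap: str = "tap0") -> bool:
    """Replay one PCAP and report whether tcpreplay exited cleanly."""
    for cmd in tap_setup_cmds(tap):
        subprocess.run(cmd, check=True)
    return subprocess.run(replay_cmd(pcap, tap)).returncode == 0
```

Keeping the commands as plain lists makes the setup easy to log or dry-run before handing them to `subprocess`.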

Highlights:
- xfail support: tests expected to fail are counted as 'pass' if they do fail
- xpass detection: alerts if a known-broken test unexpectedly succeeds
- Easy extensibility for new PCAPs and expectations
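The xfail/xpass rules above reduce to a small result classifier (a hedged sketch; the function and label names are illustrative, not taken from `tcpreplay.py`):

```python
def classify(passed: bool, expect_fail: bool) -> str:
    """Map a raw test outcome plus its expectation to a reported result."""
    if passed:
        # A known-broken test that suddenly passes deserves attention.
        return "xpass" if expect_fail else "pass"
    # An expected failure that does fail counts as a pass overall.
    return "xfail" if expect_fail else "fail"
```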

Example test cases include:
- TCP lifecycle (`tcp_basic`, `tcp_full_cycle`)
- Bad flag scenarios (`bad_tcp_flags`)
- Noise/overlap (`false_positive_overlap`)
- Fragmentation and malformed headers
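New PCAPs plug in by pairing a capture name with an expectation; a hypothetical table along these lines (the names echo the examples above, but the exact schema in `tcpreplay.py` may differ):

```python
# pcap name -> whether the replay is expected to fail
EXPECTATIONS = {
    "tcp_basic": False,
    "tcp_full_cycle": False,
    "bad_tcp_flags": True,
    "false_positive_overlap": True,
}

def expected_fail(name: str) -> bool:
    # Unknown captures default to "expected to pass".
    return EXPECTATIONS.get(name, False)
```

Adding a test is then a one-line change: drop the new PCAP next to the others and record its expectation here.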

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
3 files changed
tree: 1c00728c622ef3236961b932f12e154b5382fe5d