author    Paul Sokolovsky <paul.sokolovsky@linaro.org>  2014-12-16 00:56:05 +0200
committer Paul Sokolovsky <paul.sokolovsky@linaro.org>  2014-12-16 00:56:05 +0200
commit    31871d2be1b2180896a44452019943757996a3d6 (patch)
tree      f3a4aeb22e947968cd82d7398c5f871d0723b585
parent    c82b9999a51246e7bbb6e9e2169a1d39b4e2b824 (diff)
Remove "new-publish" prototype, superseded by Publishing API.
Change-Id: Iad77837e9da8b271c37729534b1e220934b33f5c
-rw-r--r--  new-publish/README                                     111
-rwxr-xr-x  new-publish/clean-uploads                                6
-rwxr-xr-x  new-publish/integr-test/staging-a-b-login.sh            10
-rwxr-xr-x  new-publish/integr-test/staging-a-b-sftp.sh              8
-rw-r--r--  new-publish/integr-test/testfile                         1
-rwxr-xr-x  new-publish/propagate.py                                95
-rwxr-xr-x  new-publish/publib.py                                  121
-rwxr-xr-x  new-publish/publish                                     80
-rwxr-xr-x  new-publish/setup.sh                                   124
-rw-r--r--  new-publish/test/data/dir1/dir1_file                     0
-rw-r--r--  new-publish/test/data/dir2/dir2_file                     0
-rw-r--r--  new-publish/test/data/dir2/subdir1/dir2_subdir1_file     0
-rw-r--r--  new-publish/test/data/top_file                           1
-rw-r--r--  new-publish/test/test_publish.py                        73
14 files changed, 0 insertions, 630 deletions
diff --git a/new-publish/README b/new-publish/README
deleted file mode 100644
index b9be0c4..0000000
--- a/new-publish/README
+++ /dev/null
@@ -1,111 +0,0 @@
-Prototype of new Publishing API for Linaro CI
-=============================================
-
-Background
----------------------
-Builds of various products and components must finish with publishing
-their artifacts to a central server, hereafter called "snapshots".
-Builds also must be queued for testing in LAVA. All publishing
-should happen in a secure manner, preventing direct system break-ins
-and minimizing exposure to other attacks, such as denial of service.
-
-This prototype tries to establish a consistent external interface reusable
-across a wide variety of Linaro builds, and an initial implementation which
-works with the existing infrastructure and setup in place.
-
-The generalized publishing process is:
-
-Builder -> Snapshots
-
-
-External Interface
-------------------
-Build jobs use the publishing API via shell command calls. To
-perform publishing, a build calls the following script:
-
-publish --token=<token> --type=<build_type> --strip=<strip> <build_id> <glob_pattern>...
-
-<token>
-    Token to authenticate the publishing request. It is expected that the
-    security token is injected into the build process by a top-level
-    scheduler. [Not implemented in the prototype.]
-<build_type>
-    Type of the build from a predefined set, like "android", "kernel",
-    "openembedded", etc. Generally, this selects the target area for
-    publishing, but may influence other parameters, like directory
-    structure, metadata, etc.
-<strip>
-    Strip this number of leading components from paths produced by <glob_pattern>.
-<build_id>
-    Build ID of the form <job_name>/<build_no>. This allows identification
-    of a particular build job and its specific build run. build_id is usually
-    used directly as a path (URL) component to access build artifacts.
-<glob_pattern>
-    Shell glob patterns to capture artifact files. There may be more than one,
-    separated by spaces, or (for compatibility with Jenkins) by commas (in
-    this case no spaces are allowed). Patterns must follow shell syntax, i.e.
-    multi-level matching (**) is not supported.
-
-Example:
-
-$ publish --token=SECRET --type=android --strip=2 panda/10 out/target/*.tar.bz2
-
-With this command, the artifacts can be expected to be found at a URL like
-
-http://snapshots/android/panda/10/*.tar.bz2
-
-Internal Implementation
------------------------
-There's currently no token-based authentication for publishing services;
-SSH auth is used instead. Consequently, for security reasons, the accounts
-used for publishing should be as restricted as possible. In practice we
-use a few accounts, one for each step of the process, each fortified to
-disallow direct shell access. SFTP is used as a transport (for historical
-reasons).
-
-The current publishing process goes as follows:
-
-Builder -> Master -> Snapshots
-
-Publishing starts on the build slave by SFTPing artifact files to the master
-(using one account with chrooted SFTP access), then triggering further
-processing by calling out (over SSH) to a script fixed in the sshd config on
-the master. This script recursively applies the same processing (chrooted
-SFTP, fixed script) to publish the files to snapshots.
-
-Conclusions and Future Work
----------------------------
-The biggest management and security issue with the implementation described
-above is the authentication of publishing clients to the publishing service.
-The implementation is cumbersome to set up and maintain and doesn't adhere
-to the strictest security practices.
-
-To address this problem, implementing publishing as a web service may be
-suggested - this way, authentication handling on the server side is confined
-to a single custom component, a web application. It can thus be very flexible
-and featureful; for example, we can implement "publishing tokens", each
-associated with a set of constraints, like "active not before 30min from
-time of issuance", "active not after 2hr from time of issuance", "can
-be used for publishing type 'android'", "publisher IP should be X.X.X.X",
-etc. However, there still remains the problem of issuing tokens to
-build hosts. Essentially, tokens should be "injected" into builds by
-a trusted party (a kind of build scheduling frontend). We already have
-such a frontend on android-build, but ci.linaro.org presents "raw" Jenkins.
-It might be possible to integrate the needed functionality into Jenkins via a plugin.
-
-But publishing a few moderately-sized files is not the only use case for
-the Publishing Service. For OpenEmbedded builds, we need to publish the used
-sources/cache files, which may be thousands of files totalling gigabytes.
-However, any particular build will likely change only a reasonably small
-subset of these files, and only those need to be actually published.
-This is clearly a use case for rsync, but with rsync, we would need to deal
-with PAM for any custom authentication, and it's still unclear whether it
-would be possible to achieve flexibility similar to the tokens described above.
-
-That's the dichotomy we have - we need an efficient transfer protocol, as
-we potentially deal with many files and large amounts of data, and yet
-we need flexible token/ticket style authentication. It may be possible
-to choose a compromise between the two - implement a web service with a
-rudimentary "file freshness" protocol (which would work on the level of
-entire files, not sub-blocks). Existing system-level ticketing systems
-like Kerberos can also be considered.
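
The path mapping described in the README above (strip leading components from
the artifact path, then prefix the build type and build ID) can be sketched in
a few lines. This is an illustrative sketch only, not part of the prototype;
the function name is made up, and the values mirror the README example
(--strip=2, type android, build panda/10):

    import posixpath

    def published_path(local_path, build_type, build_id, strip):
        # Drop the first <strip> leading components from the local path,
        # then prefix <build_type>/<build_id>, as in the README example.
        tail = "/".join(local_path.strip("/").split("/")[strip:])
        return posixpath.join(build_type, build_id, tail)

    # out/target/system.tar.bz2 with --strip=2 for build panda/10 of type
    # "android" maps to android/panda/10/system.tar.bz2 on the snapshots host.
    print(published_path("out/target/system.tar.bz2", "android", "panda/10", 2))
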
diff --git a/new-publish/clean-uploads b/new-publish/clean-uploads
deleted file mode 100755
index 0852530..0000000
--- a/new-publish/clean-uploads
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/bin/sh
-#
-# Cronjob to clean up uploads area on master
-#
-
-find /mnt/publish/uploads/ -mindepth 1 -mmin +30 | xargs -n100 rm -rf
diff --git a/new-publish/integr-test/staging-a-b-login.sh b/new-publish/integr-test/staging-a-b-login.sh
deleted file mode 100755
index 40f0212..0000000
--- a/new-publish/integr-test/staging-a-b-login.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-# Keys should be in ../../../ansible-secrets-jenkins/jenkins-publish/
-echo "================="
-# Expected result: successful connection, message about "sftp only"
-ssh -i linaro-android-build-publish \
- linaro-android-build-publish@staging.snapshots.linaro.org
-echo "================="
-# Expected result: successful connection, usage from "publish_to_snapshots.py"
-ssh -i linaro-android-build-publish-trigger \
- linaro-android-build-publish-trigger@staging.snapshots.linaro.org
-echo "================="
diff --git a/new-publish/integr-test/staging-a-b-sftp.sh b/new-publish/integr-test/staging-a-b-sftp.sh
deleted file mode 100755
index 3570e36..0000000
--- a/new-publish/integr-test/staging-a-b-sftp.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-# Keys should be in ../../../ansible-secrets-jenkins/jenkins-publish/
-echo "ls
-cd uploads/android/
-put testfile
-" | sftp -b - -i linaro-android-build-publish \
- linaro-android-build-publish@staging.snapshots.linaro.org
-
-# I/O error during upload may mean disk full on the server
diff --git a/new-publish/integr-test/testfile b/new-publish/integr-test/testfile
deleted file mode 100644
index 2691857..0000000
--- a/new-publish/integr-test/testfile
+++ /dev/null
@@ -1 +0,0 @@
-testfile
diff --git a/new-publish/propagate.py b/new-publish/propagate.py
deleted file mode 100755
index cd1ee61..0000000
--- a/new-publish/propagate.py
+++ /dev/null
@@ -1,95 +0,0 @@
-#!/usr/bin/env python
-"""This script propagates build artifacts from build master host
-to actual publishing location (snapshots)."""
-import sys
-import os
-import optparse
-
-import paramiko
-
-import publib
-
-
-# If this file exists, publishing will be to staging
-STAGING_FLAG_FILE = "/etc/linaro/staging-publishing"
-REMOTE_HOST_PRODUCTION = "snapshots.linaro.org"
-REMOTE_HOST_STAGING = "staging.snapshots.linaro.org"
-PUBLISH_USER_NAME = "linaro-android-build-publish"
-TRIGGER_USER_NAME = "linaro-android-build-publish-trigger"
-#PUBLISH_KEY_FILE = "/home/ubuntu/snapshots-sync2.new/linaro-android-build-publish"
-#TRIGGER_KEY_FILE = "/home/ubuntu/snapshots-sync2.new/linaro-android-build-publish-trigger"
-PUBLISH_KEY_FILE = "/var/lib/jenkins/.ssh/linaro-android-build-publish"
-TRIGGER_KEY_FILE = "/var/lib/jenkins/.ssh/linaro-android-build-publish-trigger"
-LOCAL_UPLOAD_DIR = "/mnt/publish/uploads"
-REMOTE_UPLOAD_DIR = "/uploads/android"
-
-def log(msg):
- print msg
- sys.stdout.flush()
-
-
-if __name__ == "__main__":
- optparser = optparse.OptionParser(usage="%prog <job/build>")
- optparser.add_option("-s", "--staging", action="store_true", help="Publish to staging server")
- optparser.add_option("--identity-publish", metavar="KEY", default=PUBLISH_KEY_FILE, help="Publish SSH key file")
- optparser.add_option("--identity-trigger", metavar="KEY", default=TRIGGER_KEY_FILE, help="Trigger SSH key file")
- optparser.add_option("-n", "--dry-run", action="store_true", help="Don't actually publish files, log commands")
- optparser.add_option("--host", help="Override destination publishing host, for debugging")
- optparser.add_option("--step", default="all", help="Run only specific step")
- options, args = optparser.parse_args(sys.argv[1:])
- if len(args) != 1:
- optparser.error("Wrong number of arguments")
-
- publib.validate_build_id(args[0])
-
- print "Starting propagation phase"
-
- staging_flag = os.path.exists(STAGING_FLAG_FILE)
-
- if options.staging or staging_flag:
- remote_host = REMOTE_HOST_STAGING
- opt_staging = "-s"
- else:
- remote_host = REMOTE_HOST_PRODUCTION
- opt_staging = ""
- if options.host:
- remote_host = options.host
-
- print "Publishing to:", remote_host
-
- if options.step in ("all", "1"):
- file_list = []
- for root, dirs, files in os.walk(os.path.join(LOCAL_UPLOAD_DIR, args[0])):
- file_list.extend([os.path.join(root, f) for f in files])
- print "Files:", file_list
- strip = len(LOCAL_UPLOAD_DIR.strip("/").split("/"))
- dir_list = publib.make_dir_struct(file_list, REMOTE_UPLOAD_DIR, strip=strip)
- print "Dirs:", dir_list
- if not options.dry_run:
- log("Creating dir structure on upload server")
- publib.create_dir_struct(dir_list, remote_host, PUBLISH_USER_NAME,
- options.identity_publish)
- log("Done creating dir structure on upload server")
- upload_script = publib.make_upload_script(file_list, REMOTE_UPLOAD_DIR, strip=strip)
- log("Uploading files to upload server")
- publib.upload_files(upload_script, remote_host, PUBLISH_USER_NAME,
- options.identity_publish, options)
- log("Done uploading files to upload server")
-
- if options.step in ("all", "2"):
- job, build = args[0].split("/")
- client = paramiko.SSHClient()
- client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
- client.connect(remote_host, username=TRIGGER_USER_NAME, key_filename=TRIGGER_KEY_FILE)
- log("Triggering moving of files from upload to download area")
- stdin, stdout, stderr = client.exec_command("reshuffle-files -t android -j %s -n %s -m %s" % (job, build, opt_staging))
- stdin.close()
- rc = stdout.channel.recv_exit_status()
- print "Moving phase completed with result: %d" % rc
- print "=== stdout ==="
- print stdout.read()
- print "=== stderr ==="
- print stderr.read()
- print "=============="
- client.close()
- sys.exit(rc)
diff --git a/new-publish/publib.py b/new-publish/publib.py
deleted file mode 100755
index d593a7d..0000000
--- a/new-publish/publib.py
+++ /dev/null
@@ -1,121 +0,0 @@
-#!/usr/bin/env python
-import sys
-import os
-import glob
-import tempfile
-import optparse
-
-import paramiko
-
-
-def validate_build_id(id):
- if "\\" in id:
- pass
- elif ".." in id:
- pass
- elif id[0] == "/":
- pass
- elif len(id.split("/")) != 2:
- pass
- else:
- return True
-
- print "Invalid build ID"
- sys.exit(1)
-
-
-def get_file_list(patterns):
- expanded = []
- for p in patterns:
- for direl in glob.glob(p):
- if os.path.isfile(direl):
- expanded.append(direl)
- expanded.sort(lambda a, b: cmp(a.rsplit("/", 1), b.rsplit("/", 1)))
- return expanded
-
-def common_path_prefix(s1, s2):
- "Find common prefix string of s1 & s2. Return (prefix, rest_of_s1)"
- s1 = s1.strip("/").split("/")
- s2 = s2.strip("/").split("/")
- if len(s1) > len(s2):
- l = len(s2)
- else:
- l = len(s1)
- for i in xrange(l):
- if s1[i] != s2[i]:
- return s1[:i], s1[i:]
- return s1[:l], s1[l:]
-
-def strip_path_comps(path, num):
- path = path.strip("/")
- return "/".join(path.split("/")[num:])
-
-def make_dir_struct(file_list, upload_dir, build_dir="", strip=0):
- script = []
- file_list = map(lambda f: strip_path_comps(f, strip), file_list)
-
- file_list = map(lambda f: os.path.join(build_dir, f), file_list)
- # Skip top-level files - dir for them is pre-existing
- file_list = filter(lambda f: "/" in f, file_list)
- dir_list = map(lambda f: os.path.dirname(f), file_list)
- dir_list = list(set(dir_list))
- dir_list.sort()
- prev_d = ''
- for d in dir_list:
- created, to_create = common_path_prefix(d, prev_d)
- p = os.path.join(upload_dir, "/".join(created))
- for comp in to_create:
- p = os.path.join(p, comp)
- script.append(p)
- prev_d = d
- return script
-
-def create_dir_struct(dir_list, host, user, key):
- client = paramiko.SSHClient()
- client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
- client.connect(host, username=user, key_filename=key)
- sftp = paramiko.SFTPClient.from_transport(client.get_transport())
- for d in dir_list:
- try:
- sftp.listdir(d)
- except:
- sftp.mkdir(d, 0755)
-
-
-def make_upload_script(file_list, upload_dir, build_dir="", strip=0):
- script = []
-# if dir and dir[0] != "/":
-# dir = "/" + dir
- last_dir = None
- for f in file_list:
- target_f = strip_path_comps(f, strip)
- # Prepend UPLOAD_DIR now, to avoid special case of no dir name
- # in file name
- target_f = os.path.join(upload_dir, build_dir, target_f)
- dirname, fname = target_f.rsplit("/", 1)
- if dirname != last_dir:
- script.append("cd %s" % dirname)
- last_dir = dirname
- script.append("put %s" % f)
- return script
-
-
-def upload_files(upload_script, host, user, key, options):
- fd, fname = tempfile.mkstemp(prefix="sftp_script")
- os.close(fd)
- f = open(fname, "w")
- f.write("\n".join(upload_script) + "\n")
- f.close()
-
- cmd = "sftp -i %s -b %s %s@%s" % (key, fname, user, host)
- print cmd
- sys.stdout.flush()
-    # rc defaults to 0 so --dry-run doesn't hit an undefined variable below
-    rc = os.system(cmd) if not options.dry_run else 0
-    try:
-        os.remove(fname)
-    except OSError:
-        pass
-    if rc != 0:
-        print "ERROR: sftp transfer finished with error"
-        sys.exit(1)
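
Taken together, make_dir_struct and make_upload_script turn a captured file
list into the directory pre-creation list and the SFTP batch consumed by
upload_files above. A hedged sketch of what they produce for the README
example, assuming a Python 2 environment with publib.py on the import path
(the file names are illustrative; the results in the comments are what the
functions above compute for this input, using the prototype's /uploads
intermediate directory):

    import publib

    files = ["out/target/system.tar.bz2", "out/target/boot.tar.bz2"]

    # Directories to pre-create under the upload area for build panda/10:
    # ['/uploads/panda', '/uploads/panda/10']
    print(publib.make_dir_struct(files, "/uploads", "panda/10", strip=2))

    # SFTP batch generated for the same input:
    # ['cd /uploads/panda/10',
    #  'put out/target/system.tar.bz2',
    #  'put out/target/boot.tar.bz2']
    print(publib.make_upload_script(files, "/uploads", "panda/10", strip=2))
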
diff --git a/new-publish/publish b/new-publish/publish
deleted file mode 100755
index eefa39e..0000000
--- a/new-publish/publish
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/usr/bin/env python
-import sys
-import os
-import glob
-import optparse
-import urlparse
-
-import publib
-
-
-REMOTE_HOST = "android-build.linaro.org"
-COPY_USER_NAME = "publish-copy"
-TRIGGER_USER_NAME = "publish-trigger"
-UPLOAD_DIR = "/uploads"
-COPY_KEY_FILE = "/var/run/lava/publish-copy"
-TRIGGER_KEY_FILE = "/var/run/lava/publish-trigger"
-
-def log(msg):
- print msg
- sys.stdout.flush()
-
-if __name__ == "__main__":
- optparser = optparse.OptionParser(usage="%prog <job/build> <pattern>...")
- optparser.add_option("-t", "--type", help="Build type")
- optparser.add_option("-p", "--strip", metavar="NUM", type=int, default=0,
- help="Strip leading components from dest file names.")
- optparser.add_option("-s", "--staging", action="store_true", help="Publish to staging")
- optparser.add_option("--identity-copy", metavar="KEY", default=COPY_KEY_FILE, help="SSH key file")
- optparser.add_option("--identity-trigger", metavar="KEY", default=TRIGGER_KEY_FILE, help="SSH key file")
- optparser.add_option("-n", "--dry-run", action="store_true", help="Don't actually publish files, log commands")
- optparser.add_option("--host", help="Override destination publishing host, for debugging")
- optparser.add_option("-i", "--intermediate-host-url", help="Use given intermediate host (can be domain name or URL)")
- options, args = optparser.parse_args(sys.argv[1:])
- if len(args) < 2:
- optparser.error("Wrong number of arguments")
-
- if options.intermediate_host_url:
- if "//" in options.intermediate_host_url:
- REMOTE_HOST = urlparse.urlparse(options.intermediate_host_url).hostname
- else:
- REMOTE_HOST = options.intermediate_host_url
- print "Using intermediate host:", REMOTE_HOST
-
- build_id = args[0]
- patterns = args[1:]
-
- publib.validate_build_id(build_id)
-
- # Support Jenkins syntax, with comma as separator
- if len(patterns) == 1 and "," in patterns[0]:
- patterns = patterns[0].split(",")
-
- for p in patterns:
- if "**" in p:
- print "** in glob patterns is not supported"
- sys.exit(1)
-
- file_list = publib.get_file_list(patterns)
- print "File list:", file_list
- dir_list = publib.make_dir_struct(file_list, UPLOAD_DIR, build_id, options.strip)
- print "Dir list:", dir_list
- log("Creating dir structure on intermediate server")
- publib.create_dir_struct(dir_list, REMOTE_HOST, COPY_USER_NAME, options.identity_copy)
- log("Done creating dir structure on intermediate server")
- upload_script = publib.make_upload_script(file_list, UPLOAD_DIR, build_id, options.strip)
- log("Uploading files to intermediate server")
- publib.upload_files(upload_script, REMOTE_HOST, COPY_USER_NAME, options.identity_copy, options)
- log("Done uploading files to intermediate server")
-
- log("Triggering propagation of files to downloads server")
- rc = os.system("ssh -i %s %s@%s /home/ubuntu/linaro-jenkins-tools/new-publish/propagate.py %s %s %s" % (options.identity_trigger,
- TRIGGER_USER_NAME, REMOTE_HOST,
- "-s" if options.staging else "",
- "--host=%s" % options.host if options.host else "",
- build_id))
- if rc != 0:
- log("Publishing failed")
- sys.exit(1)
-
- log("Publishing successful")
diff --git a/new-publish/setup.sh b/new-publish/setup.sh
deleted file mode 100755
index 2d27d27..0000000
--- a/new-publish/setup.sh
+++ /dev/null
@@ -1,124 +0,0 @@
-#!/bin/bash
-set -e
-
-publish_home=/mnt/publish
-
-# Append content to a file if grep test fails
-function add_if_absent() {
- file=$1
- exists_regexp=$2
- to_add=$3
- if grep -q -E "$exists_regexp" "$file"; then
-        echo "Warning: $file already matches $exists_regexp, skipping addition"
- return
- fi
- echo "$to_add" >>$file
-}
-
-# Comment out (#) a line if it matches regexp
-function comment_if_present() {
- file=$1
- regexp=$2
- awk "\
-/$regexp/ {print \"#\" \$0; next; }
- { print }
-" $file > $file.tmp
- mv $file.tmp $file
-}
-
-function setup_accounts() {
-
- mkdir -p $publish_home
-
- groupadd publish || true
-
- useradd \
- --comment "Publishing - transfer user" \
- --home $publish_home \
- --gid publish \
- -M \
- --no-user-group \
- publish-copy || true
-
- useradd \
- --comment "Publishing - finalization user" \
- --home $publish_home \
- --gid publish \
- -M \
- --no-user-group \
- publish-trigger || true
-
- # Home dir must be owned by root for ssh ChrootDirectory to work
- chown root.root $publish_home
- chmod 755 $publish_home
-
- # Actual uploads will happen to this dir
- mkdir -p $publish_home/uploads
- # publish-copy should have write access there, publish-trigger
- # generally only read (cleanup can be handled by cronjob)
- chown publish-copy.publish $publish_home/uploads
- chmod 755 $publish_home/uploads
-}
-
-
-function setup_sshd_config() {
- sshd_config=/etc/ssh/sshd_config
-# sshd_config=sshd_config
- if [ ! -f $sshd_config.before-new-pub ]; then
- # Capture pristine config for rollback
- cp $sshd_config $sshd_config.before-new-pub
- fi
-
- add_if_absent $sshd_config "^AuthorizedKeysFile /etc/ssh/user-authorized-keys/%u" "\
-AuthorizedKeysFile /etc/ssh/user-authorized-keys/%u
-AuthorizedKeysFile2 /dev/null"
-
- comment_if_present $sshd_config "^Subsystem sftp"
- add_if_absent $sshd_config "^Subsystem sftp internal-sftp" "Subsystem sftp internal-sftp"
-
- add_if_absent $sshd_config "^Match User publish-copy" "\
-Match User publish-copy
- ChrootDirectory $publish_home
- ForceCommand internal-sftp
- AllowTcpForwarding no
- X11Forwarding no
-"
-}
-
-function setup_ssh_keys() {
- mkdir -p /etc/ssh/user-authorized-keys/
- # Move only real file, don't do this for symlink
- if [ ! -L $HOME/.ssh/authorized_keys ]; then
- echo "Moving current account's authorized_keys to /etc/ssh/user-authorized-keys/"
- cp $HOME/.ssh/authorized_keys /etc/ssh/user-authorized-keys/$SUDO_USER
- rm $HOME/.ssh/authorized_keys
- ln -s /etc/ssh/user-authorized-keys/$SUDO_USER $HOME/.ssh/authorized_keys
- fi
-    # Unlike when they reside in ~/.ssh/, auth keys in /etc/ssh/... must be
-    # readable enough, or they won't be picked up by sshd => login lockout.
- chmod 644 /etc/ssh/user-authorized-keys/$SUDO_USER
-
- mkdir -p ~/snapshots-sync3
- if [ ! -f ~/snapshots-sync3/publish-copy ]; then
- ssh-keygen -t rsa -N "" -f ~/snapshots-sync3/publish-copy
- fi
- if [ ! -f ~/snapshots-sync3/publish-trigger ]; then
- ssh-keygen -t rsa -N "" -f ~/snapshots-sync3/publish-trigger
- fi
-
-
- echo -n 'command="/usr/lib/sftp-server",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ' \
- >/etc/ssh/user-authorized-keys/publish-copy
- pubkey=$(cat ~/snapshots-sync3/publish-copy.pub)
- add_if_absent /etc/ssh/user-authorized-keys/publish-copy "^$pubkey" "$pubkey"
-
- echo -n 'command="/home/ubuntu/new-publish/utils/new-publish/trigger ${SSH_ORIGINAL_COMMAND#* }",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ' \
- >/etc/ssh/user-authorized-keys/publish-trigger
- pubkey=$(cat ~/snapshots-sync3/publish-trigger.pub)
- add_if_absent /etc/ssh/user-authorized-keys/publish-trigger "^$pubkey" "$pubkey"
-}
-
-setup_accounts
-# Set up the new ssh key structure first, or there's a chance of SSH lock-out
-setup_ssh_keys
-setup_sshd_config
diff --git a/new-publish/test/data/dir1/dir1_file b/new-publish/test/data/dir1/dir1_file
deleted file mode 100644
index e69de29..0000000
--- a/new-publish/test/data/dir1/dir1_file
+++ /dev/null
diff --git a/new-publish/test/data/dir2/dir2_file b/new-publish/test/data/dir2/dir2_file
deleted file mode 100644
index e69de29..0000000
--- a/new-publish/test/data/dir2/dir2_file
+++ /dev/null
diff --git a/new-publish/test/data/dir2/subdir1/dir2_subdir1_file b/new-publish/test/data/dir2/subdir1/dir2_subdir1_file
deleted file mode 100644
index e69de29..0000000
--- a/new-publish/test/data/dir2/subdir1/dir2_subdir1_file
+++ /dev/null
diff --git a/new-publish/test/data/top_file b/new-publish/test/data/top_file
deleted file mode 100644
index e772b13..0000000
--- a/new-publish/test/data/top_file
+++ /dev/null
@@ -1 +0,0 @@
-top_file
\ No newline at end of file
diff --git a/new-publish/test/test_publish.py b/new-publish/test/test_publish.py
deleted file mode 100644
index 57d4c6c..0000000
--- a/new-publish/test/test_publish.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import os
-import publib
-
-
-def data_path():
-# return os.path.join(os.path.dirname(__file__), "data")
- return "test/data/"
-
-def test_file_list1():
- l = publib.get_file_list([data_path() + "/*"])
- assert l == ['test/data/top_file'], l
-
-def test_file_list2():
- l = publib.get_file_list([data_path() + "/*/*"])
- assert l == ['test/data/dir1/dir1_file', 'test/data/dir2/dir2_file'], l
-
-def test_file_list3():
- d = data_path()
- l = publib.get_file_list([d + "/*", d + "/*/*"])
- assert l == ['test/data/top_file', 'test/data/dir1/dir1_file', 'test/data/dir2/dir2_file'], l
-
-def test_file_list_order():
- d = data_path()
- l = publib.get_file_list([d + "*", d + "dir2/*", d + "dir1/*"])
- assert l == ['test/data/top_file', 'test/data/dir1/dir1_file', 'test/data/dir2/dir2_file'], l
-
-def test_common_prefix():
- v = publib.common_path_prefix("a", "b")
- assert v == ([], ["a"]), v
- v = publib.common_path_prefix("a", "a")
- assert v == (["a"], []), v
- v = publib.common_path_prefix("a/c", "a/b")
- assert v == (["a"], ["c"]), v
-
-def test_dir_creation():
- l = publib.make_dir_struct(["file1"], "/uploads")
- assert l == []
- l = publib.make_dir_struct(["file1", "dir2/f", "dir1/f"], "/uploads")
- assert l == ['/uploads/dir1', '/uploads/dir2'], l
- l = publib.make_dir_struct(["dir/subdir/f", "dir/f"], "/uploads")
- assert l == ['/uploads/dir', '/uploads/dir/subdir'], l
- l = publib.make_dir_struct(["dir/s1/s2/f", "dir/f"], "/uploads")
- assert l == ['/uploads/dir', '/uploads/dir/s1', '/uploads/dir/s1/s2'], l
-
-
-def test_ftp_script_trivial():
- l = publib.make_upload_script(["file1"], "/uploads")
- assert l == ['cd /uploads', 'put file1'], l
- l = publib.make_upload_script(["dir/file1"], "/uploads")
- assert l == ['cd /uploads/dir', 'put dir/file1'], l
-
-
-def test_validate_build_id():
- assert publib.validate_build_id("foo/bar")
- assert publib.validate_build_id("foo/bar-2.5")
-
- try:
- publib.validate_build_id("foo/bar/baz")
- assert False
- except SystemExit:
- pass
-
- try:
- publib.validate_build_id("foo/bar\\baz")
- assert False
- except SystemExit:
- pass
-
- try:
- publib.validate_build_id("../passwd")
- assert False
- except SystemExit:
- pass