author	Kelley Spoon <kelley.spoon@linaro.org>	2019-12-10 07:48:31 -0600
committer	Benjamin Copeland <ben.copeland@linaro.org>	2020-01-22 13:30:07 +0000
commit	526f40cb9bc4062f9272077a53642ed27fdac18c (patch)
tree	941ed6067d2bf5de99cecafa1539e8464627c5a3
parent	38b6c3eec91f2221320a38b825a44d6f7d471d85 (diff)
llp: make delete for link_latest aware of bucket versioning
Currently LLP calls Bucket.list() on a bucket to generate
the list of keys to delete from the target directory of the
link_latest() API function.

This causes a problem when the underlying bucket uses S3
versioning: the S3.Key entries returned by Bucket.list() carry
no version information, so S3 interprets the request as an
instruction to insert delete markers on top of all the listed
keys rather than actually removing them, which can dramatically
increase the object count of the bucket.

This change checks whether the bucket uses versioning and, if
so, uses the Bucket.list_versions() call instead, to make sure
all S3 objects in the target directory are removed.
Change-Id: Ifb14aaa7b4166c83741abcd98339d36e72b5ee5c
Reviewed-on: https://review.linaro.org/c/infrastructure/linaro-license-protection/+/33579
Reviewed-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Benjamin Copeland <ben.copeland@linaro.org>
-rw-r--r--	license_protected_downloads/api/v3.py	8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/license_protected_downloads/api/v3.py b/license_protected_downloads/api/v3.py
index 37b8a1a..fbb1621 100644
--- a/license_protected_downloads/api/v3.py
+++ b/license_protected_downloads/api/v3.py
@@ -75,8 +75,14 @@ class LatestLinkResource(PublishResource):
             raise HttpResponseError('Invalid link name', 401)
 
         dst = os.path.join(os.path.dirname(path), link_name)
-        keys = b.list(dst)
+
+        v = b.get_versioning_status()
+        if v['Versioning']:
+            keys = b.list_versions(dst)
+        else:
+            keys = b.list(dst)
         b.delete_keys(keys)
+
         for k in items:
             newkeyname = k.name.replace(path, dst)
             b.copy_key(newkeyname, k.bucket.name, k.name)
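The patched branch logic can be exercised in isolation against a stand-in bucket. The sketch below mirrors the diff's control flow; FakeBucket and delete_prefix are hypothetical test doubles, not the real boto classes, though the method names (get_versioning_status, list, list_versions, delete_keys) are the boto 2 Bucket calls the patch uses. Note the sketch hedges with v.get('Versioning') since boto 2 returns an empty dict for a never-versioned bucket.

```python
# Hypothetical stand-ins for boto 2's Key and Bucket, just enough to
# exercise the version-aware deletion branch from the patch.
class FakeKey:
    def __init__(self, name, version_id=None):
        self.name = name
        self.version_id = version_id

class FakeBucket:
    def __init__(self, versioned):
        self.versioned = versioned
        self.deleted = []

    def get_versioning_status(self):
        # boto 2 returns {} for a bucket that has never been versioned.
        return {'Versioning': 'Enabled'} if self.versioned else {}

    def list(self, prefix):
        # Plain listing: no version information on the keys.
        return [FakeKey(prefix + '/a')]

    def list_versions(self, prefix):
        # Version-aware listing: every stored version of every key.
        return [FakeKey(prefix + '/a', 'v1'), FakeKey(prefix + '/a', 'v2')]

    def delete_keys(self, keys):
        self.deleted.extend(keys)

def delete_prefix(b, dst):
    # Mirror of the patch: choose the version-aware listing when the
    # bucket reports versioning, so deletes carry version IDs and
    # actually remove objects instead of stacking delete markers.
    v = b.get_versioning_status()
    if v.get('Versioning'):
        keys = b.list_versions(dst)
    else:
        keys = b.list(dst)
    b.delete_keys(keys)
```

On a versioned bucket this deletes every version under the prefix (two keys, both with version IDs); on an unversioned bucket it falls back to the original single listing.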