author    Joe Thornber <ejt@redhat.com>  2016-07-01 14:00:02 +0100
committer Mike Snitzer <snitzer@redhat.com>  2016-07-20 12:43:35 -0400
commit    2a0fbffb1e50939a969d5efe495667a3aa0f72f7 (patch)
tree      9d7dead9be062d72613d20bb3a98499054f81d1f /drivers/md/dm-thin-metadata.h
parent    e7e0f730477dea190fbc18c2d93338dacee82cea (diff)
dm thin: fix a race condition between discarding and provisioning a block
The discard passdown was being issued after the block was unmapped, which
meant the block could be reprovisioned whilst the passdown discard was still
in flight.

We can only identify unshared blocks (safe to send a passdown discard to)
once they're unmapped and their ref count hits zero.  Block ref counts are
now used to guard against concurrent allocation of these blocks that are
being discarded.

So now we unmap the block, issue passdown discards, and then immediately
increment ref counts for the regions that have been discarded via passdown
(this is safe because allocation occurs within the same thread).  We then
decrement ref counts once the passdown discard IO is complete -- signaling
these blocks may now be allocated.

This fixes the potential for corruption that was reported here:
https://www.redhat.com/archives/dm-devel/2016-June/msg00311.html

Reported-by: Dennis Yang <dennisyang@qnap.com>
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Diffstat (limited to 'drivers/md/dm-thin-metadata.h')
-rw-r--r--  drivers/md/dm-thin-metadata.h  |  3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/drivers/md/dm-thin-metadata.h b/drivers/md/dm-thin-metadata.h
index a938babe4258..35e954ea20a9 100644
--- a/drivers/md/dm-thin-metadata.h
+++ b/drivers/md/dm-thin-metadata.h
@@ -197,6 +197,9 @@ int dm_pool_get_data_dev_size(struct dm_pool_metadata *pmd, dm_block_t *result);
 
 int dm_pool_block_is_used(struct dm_pool_metadata *pmd, dm_block_t b, bool *result);
 
+int dm_pool_inc_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e);
+int dm_pool_dec_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e);
+
 /*
  * Returns -ENOSPC if the new size is too small and already allocated
  * blocks would be lost.
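
The two declarations added above are the ref-count guard the commit message
describes.  The following is a minimal sketch of the intended ordering, not
code from the kernel tree: the wrapper names begin_discard_passdown() and
end_discard_passdown() are hypothetical, and the unmap/bio-submission steps
are left as comments because they belong to drivers/md/dm-thin.c rather than
this header.

#include "dm-thin-metadata.h"

/* Sketch only: shows where the new helpers sit relative to the discard. */
static int begin_discard_passdown(struct dm_pool_metadata *pmd,
                                  dm_block_t begin, dm_block_t end)
{
        int r;

        /* Step 1: the data blocks [begin, end) have just been unmapped. */

        /*
         * Step 2: take a reference on the range *before* the passdown
         * discard is in flight, so the allocator cannot reprovision these
         * blocks while the discard is outstanding.  This is safe because
         * allocation occurs on the same thread.
         */
        r = dm_pool_inc_data_range(pmd, begin, end);
        if (r)
                return r;

        /* Step 3: submit the passdown discard bio (asynchronous). */

        return 0;
}

static void end_discard_passdown(struct dm_pool_metadata *pmd,
                                 dm_block_t begin, dm_block_t end)
{
        /*
         * Step 4: the passdown discard has completed; drop the guard
         * references so the blocks may be allocated again.
         */
        dm_pool_dec_data_range(pmd, begin, end);
}

The key point is that the increment happens before the discard bio is
submitted and the decrement only runs in its completion path, which closes
the window in which a discarded block could be reprovisioned.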