[RFC,3/5] dm-table: Atomic writes support

Message ID: 20250106124119.1318428-4-john.g.garry@oracle.com (mailing list archive)
State: New
Series: device mapper atomic write support

Commit Message

John Garry Jan. 6, 2025, 12:41 p.m. UTC
Support stacking atomic write limits for DM devices.

All the pre-existing code in blk_stack_atomic_writes_limits() already takes
care of finding the aggregate limits from the bottom devices.
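
For context, the aggregation works roughly like this (a simplified sketch
only, not the exact blk_stack_atomic_writes_limits() source, which also
handles boundary limits and first-device setup): support is dropped if any
bottom device lacks it, and the stacked unit sizes must be satisfiable by
every bottom device.

    /* simplified sketch; field names are from struct queue_limits */
    static void stack_atomic_writes_sketch(struct queue_limits *t,
                                           struct queue_limits *b)
    {
            if (!b->atomic_write_unit_min) {
                    /* one unsupporting bottom device disables the stack */
                    t->features &= ~BLK_FEAT_ATOMIC_WRITES_STACKED;
                    return;
            }
            /* aggregate so every bottom device can honour the stacked units */
            t->atomic_write_hw_unit_min = max(t->atomic_write_hw_unit_min,
                                              b->atomic_write_hw_unit_min);
            t->atomic_write_hw_unit_max = min(t->atomic_write_hw_unit_max,
                                              b->atomic_write_hw_unit_max);
    }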

Feature flag DM_TARGET_ATOMIC_WRITES is introduced so that atomic writes
can be enabled on personalities selectively. This is to ensure that atomic
writes are only enabled when verified to be working properly (for a
specific personality). In addition, it just may not make sense to enable
atomic writes on some personalities (so this flag also helps there).
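
For illustration (not part of this patch; the target name and callbacks
below are hypothetical), a personality opts in through its target_type
features mask, presumably as later patches in this series do:

    static struct target_type example_target = {
            .name     = "example",
            .version  = {1, 0, 0},
            .features = DM_TARGET_ATOMIC_WRITES,  /* opt in to atomic writes */
            .module   = THIS_MODULE,
            .ctr      = example_ctr,
            .map      = example_map,
    };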

When testing for bottom device atomic writes support, only the bdev
queue limits are tested. There is no need to test the bottom bdev
start sector (as bdev_can_atomic_write() does), as this would
already be checked in the dm_calculate_queue_limits() -> ..
iterate_devices() -> dm_set_device_limits() -> blk_stack_limits()
callchain.
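
For reference, the start sector check in bdev_can_atomic_write() amounts
to a partition alignment test, roughly (paraphrased, not the exact
source):

    if (bdev_is_partition(bdev)) {
            /* a partition must start at an offset aligned to the atomic
             * write granularity for atomic writes to be usable on it */
            unsigned int granularity = max(limits->atomic_write_unit_min,
                                           limits->atomic_write_hw_boundary);

            if (!IS_ALIGNED(get_start_sect(bdev), granularity >> SECTOR_SHIFT))
                    return false;
    }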

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 drivers/md/dm-table.c         | 12 ++++++++++++
 include/linux/device-mapper.h |  3 +++
 2 files changed, 15 insertions(+)

Comments

Mike Snitzer Jan. 6, 2025, 5:49 p.m. UTC | #1
On Mon, Jan 06, 2025 at 12:41:17PM +0000, John Garry wrote:
> Support stacking atomic write limits for DM devices.
> 
> All the pre-existing code in blk_stack_atomic_writes_limits() already takes
> care of finding the aggregate limits from the bottom devices.
> 
> Feature flag DM_TARGET_ATOMIC_WRITES is introduced so that atomic writes
> can be enabled on personalities selectively. This is to ensure that atomic
> writes are only enabled when verified to be working properly (for a
> specific personality). In addition, it just may not make sense to enable
> atomic writes on some personalities (so this flag also helps there).
> 
> When testing for bottom device atomic writes support, only the bdev
> queue limits are tested. There is no need to test the bottom bdev
> start sector (as bdev_can_atomic_write() does), as this would
> already be checked in the dm_calculate_queue_limits() -> ..
> iterate_devices() -> dm_set_device_limits() -> blk_stack_limits()
> callchain.
> 
> Signed-off-by: John Garry <john.g.garry@oracle.com>
> ---
>  drivers/md/dm-table.c         | 12 ++++++++++++
>  include/linux/device-mapper.h |  3 +++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index bd8b796ae683..1e0b7e364546 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1593,6 +1593,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
>  	struct queue_limits ti_limits;
>  	unsigned int zone_sectors = 0;
>  	bool zoned = false;
> +	bool atomic_writes = true;
>  
>  	dm_set_stacking_limits(limits);
>  
> @@ -1602,8 +1603,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
>  
>  		if (!dm_target_passes_integrity(ti->type))
>  			t->integrity_supported = false;
> +		if (!dm_target_supports_atomic_writes(ti->type))
> +			atomic_writes = false;
>  	}
>  
> +	if (atomic_writes)
> +		limits->features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
>  	for (unsigned int i = 0; i < t->num_targets; i++) {
>  		struct dm_target *ti = dm_table_get_target(t, i);
>  
> @@ -1616,6 +1621,13 @@ int dm_calculate_queue_limits(struct dm_table *t,
>  			goto combine_limits;
>  		}
>  
> +		/*
> +		 * dm_set_device_limits() -> blk_stack_limits() considers
> +		 * ti_limits as 'top', so set BLK_FEAT_ATOMIC_WRITES_STACKED
> +		 * here also.
> +		 */
> +		if (atomic_writes)
> +			ti_limits.features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
>  		/*
>  		 * Combine queue limits of all the devices this target uses.
>  		 */

You're referring to this code, which immediately follows this ^ comment
and stacks up the limits of the N component data devices a target may
have:

                ti->type->iterate_devices(ti, dm_set_device_limits,
                                          &ti_limits);

Your comment and redundant feature flag setting feel wrong.  I'd
expect code comparable to what is done for zoned, e.g.:

                if (!zoned && (ti_limits.features & BLK_FEAT_ZONED)) {
                        /*
                         * After stacking all limits, validate all devices
                         * in table support this zoned model and zone sectors.
                         */
                        zoned = (ti_limits.features & BLK_FEAT_ZONED);
                        zone_sectors = ti_limits.chunk_sectors;
                }

Meaning, for zoned devices, a side-effect of the
ti->type->iterate_devices() call (and N blk_stack_limits calls) is
ti_limits.features having BLK_FEAT_ZONED enabled.  Why wouldn't the same
side-effect happen for BLK_FEAT_ATOMIC_WRITES_STACKED (speaks to
blk_stack_limits being different/wrong for atomic writes support)?
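
Something along these lines is the shape I'd expect (a sketch only;
whether BLK_FEAT_ATOMIC_WRITES_STACKED actually comes out of
blk_stack_limits() this way is exactly the open question):

    /*
     * Hypothetical zoned-style check: trust the flag state left in
     * ti_limits.features by iterate_devices()/blk_stack_limits(),
     * rather than seeding it beforehand.
     */
    if (atomic_writes &&
        !(ti_limits.features & BLK_FEAT_ATOMIC_WRITES_STACKED))
            atomic_writes = false;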

Just feels not quite right... but I could be wrong, please see if
there is any "there" there ;)

Thanks,
Mike


> diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
> index 8321f65897f3..bcc6d7b69470 100644
> --- a/include/linux/device-mapper.h
> +++ b/include/linux/device-mapper.h
> @@ -299,6 +299,9 @@ struct target_type {
>  #define dm_target_supports_mixed_zoned_model(type) (false)
>  #endif
>  
> +#define DM_TARGET_ATOMIC_WRITES		0x00000400
> +#define dm_target_supports_atomic_writes(type) ((type)->features & DM_TARGET_ATOMIC_WRITES)
> +
>  struct dm_target {
>  	struct dm_table *table;
>  	struct target_type *type;
> -- 
> 2.31.1
>
John Garry Jan. 6, 2025, 6:18 p.m. UTC | #2
On 06/01/2025 17:49, Mike Snitzer wrote:
>> +++ b/drivers/md/dm-table.c
>> @@ -1593,6 +1593,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
>>   	struct queue_limits ti_limits;
>>   	unsigned int zone_sectors = 0;
>>   	bool zoned = false;
>> +	bool atomic_writes = true;
>>   
>>   	dm_set_stacking_limits(limits);
>>   
>> @@ -1602,8 +1603,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
>>   
>>   		if (!dm_target_passes_integrity(ti->type))
>>   			t->integrity_supported = false;
>> +		if (!dm_target_supports_atomic_writes(ti->type))
>> +			atomic_writes = false;
>>   	}
>>   
>> +	if (atomic_writes)
>> +		limits->features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
>>   	for (unsigned int i = 0; i < t->num_targets; i++) {
>>   		struct dm_target *ti = dm_table_get_target(t, i);
>>   
>> @@ -1616,6 +1621,13 @@ int dm_calculate_queue_limits(struct dm_table *t,
>>   			goto combine_limits;
>>   		}
>>   
>> +		/*
>> +		 * dm_set_device_limits() -> blk_stack_limits() considers
>> +		 * ti_limits as 'top', so set BLK_FEAT_ATOMIC_WRITES_STACKED
>> +		 * here also.
>> +		 */
>> +		if (atomic_writes)
>> +			ti_limits.features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
>>   		/*
>>   		 * Combine queue limits of all the devices this target uses.
>>   		 */
> You're referring to this code, which immediately follows this ^ comment
> and stacks up the limits of the N component data devices a target may
> have:
> 
>                  ti->type->iterate_devices(ti, dm_set_device_limits,
>                                            &ti_limits);
> 
> Your comment and redundant feature flag setting feel wrong.  I'd
> expect code comparable to what is done for zoned, e.g.:
> 
>                  if (!zoned && (ti_limits.features & BLK_FEAT_ZONED)) {
>                          /*
>                           * After stacking all limits, validate all devices
>                           * in table support this zoned model and zone sectors.
>                           */
>                          zoned = (ti_limits.features & BLK_FEAT_ZONED);
>                          zone_sectors = ti_limits.chunk_sectors;
>                  }
> 
> Meaning, for zoned devices, a side-effect of the
> ti->type->iterate_devices() call (and N blk_stack_limits calls) is
> ti_limits.features having BLK_FEAT_ZONED enabled.  Why wouldn't the same
> side-effect happen for BLK_FEAT_ATOMIC_WRITES_STACKED (speaks to
> blk_stack_limits being different/wrong for atomic writes support)?

OK, I do admit that my code did not feel quite right, so I will check
the zoned code as a reference.

> 
> Just feels not quite right... but I could be wrong, please see if
> there is any "there" there

Will do.

Cheers,
John

Patch

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index bd8b796ae683..1e0b7e364546 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1593,6 +1593,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	struct queue_limits ti_limits;
 	unsigned int zone_sectors = 0;
 	bool zoned = false;
+	bool atomic_writes = true;
 
 	dm_set_stacking_limits(limits);
 
@@ -1602,8 +1603,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
 
 		if (!dm_target_passes_integrity(ti->type))
 			t->integrity_supported = false;
+		if (!dm_target_supports_atomic_writes(ti->type))
+			atomic_writes = false;
 	}
 
+	if (atomic_writes)
+		limits->features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
 	for (unsigned int i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
@@ -1616,6 +1621,13 @@ int dm_calculate_queue_limits(struct dm_table *t,
 			goto combine_limits;
 		}
 
+		/*
+		 * dm_set_device_limits() -> blk_stack_limits() considers
+		 * ti_limits as 'top', so set BLK_FEAT_ATOMIC_WRITES_STACKED
+		 * here also.
+		 */
+		if (atomic_writes)
+			ti_limits.features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
 		/*
 		 * Combine queue limits of all the devices this target uses.
 		 */
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 8321f65897f3..bcc6d7b69470 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -299,6 +299,9 @@ struct target_type {
 #define dm_target_supports_mixed_zoned_model(type) (false)
 #endif
 
+#define DM_TARGET_ATOMIC_WRITES		0x00000400
+#define dm_target_supports_atomic_writes(type) ((type)->features & DM_TARGET_ATOMIC_WRITES)
+
 struct dm_target {
 	struct dm_table *table;
 	struct target_type *type;