[PATCHv2] blk-lib: check for kill signal

Message ID 20240221222013.582613-1-kbusch@meta.com

Commit Message

Keith Busch Feb. 21, 2024, 10:20 p.m. UTC
From: Keith Busch <kbusch@kernel.org>

Some of these block operations can access the entire device capacity,
and can take a lot longer than the user expected. The user may change
their mind about wanting to run that command and attempt to kill the
process, but we're running uninterruptibly, so they have to wait for it
to finish, which could be hours.

Check for a fatal signal at each iteration so the user doesn't have to
wait for their regretted operation to complete.

Cc: Ming Lei <ming.lei@redhat.com>
Cc: Nilay Shroff <nilay@linux.ibm.com>
Cc: Chaitanya Kulkarni <chaitanyak@nvidia.com>
Reported-by: Conrad Meyer <conradmeyer@meta.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
Changes from v1:

  Check the kill signal on all the long operations, not just the
  zero-out fallback.

  Be sure to return -EINTR on the condition.

  After the kill signal is observed, instead of submitting and waiting
  for the current parent bio in the chain, abort it by ending it
  immediately and do the final bio_put() after every previously submitted
  chained bio completes.

 block/blk-lib.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)
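
For context, the abort scheme described above leans on bio_chain()
semantics: every chained child elevates its parent's remaining count, so
ending the parent early only defers ->bi_end_io until the last submitted
child completes. A rough sketch of the chaining pattern as blk-lib uses
it; more_work and opf are illustrative placeholders, not taken from the
patch:

	struct bio *bio = NULL;	/* running parent of the chain */

	while (more_work) {
		struct bio *new = bio_alloc(bdev, 0, opf, gfp_mask);

		if (bio) {
			/* "new" becomes the parent of the bio built so far */
			bio_chain(bio, new);
			submit_bio(bio);
		}
		bio = new;
		/* ... set up new's sector range, advance the cursor ... */
	}
	/*
	 * "bio" now transitively parents every submitted bio, so ending it
	 * runs ->bi_end_io (abort_bio_endio in the patch below) only after
	 * all of them complete.
	 */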

Comments

Ming Lei Feb. 22, 2024, 4:02 a.m. UTC | #1
On Wed, Feb 21, 2024 at 02:20:13PM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> Some of these block operations can access the entire device capacity,
> and can take a lot longer than the user expected. The user may change
> their mind about wanting to run that command and attempt to kill the
> process, but we're running uninterruptibly, so they have to wait for it
> to finish, which could be hours.
> 
> Check for a fatal signal at each iteration so the user doesn't have to
> wait for their regretted operation to complete.
> 
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Nilay Shroff <nilay@linux.ibm.com>
> Cc: Chaitanya Kulkarni <chaitanyak@nvidia.com>
> Reported-by: Conrad Meyer <conradmeyer@meta.com>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
> Changes from v1:
> 
>   Check the kill signal on all the long operations, not just the
>   zero-out fallback.
> 
>   Be sure to return -EINTR on the condition.
> 
>   After the kill signal is observed, instead of submitting and waiting
>   for the current parent bio in the chain, abort it by ending it
>   immediately and do the final bio_put() after every previously submitted
>   chained bio completes.

I feel this approach is fragile:

1) user sends KILL signal

2) discard API returns

3) submitted discard requests are still running in the background, and
there can be thousands of such bios

4) what if the application or FS code (such as metadata updates) starts
to write data to the discarded range?

> 
>  block/blk-lib.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index e59c3069e8351..88f6a4aebe75e 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -35,6 +35,17 @@ static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
>  	return round_down(UINT_MAX, discard_granularity) >> SECTOR_SHIFT;
>  }
>  
> +static void abort_bio_endio(struct bio *bio)
> +{
> +	bio_put(bio);
> +}
> +
> +static void abort_bio(struct bio *bio)
> +{
> +	bio->bi_end_io = abort_bio_endio;
> +	bio_endio(bio);
> +}
> +
>  int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>  		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop)
>  {
> @@ -77,6 +88,10 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>  		 * is disabled.
>  		 */
>  		cond_resched();
> +		if (fatal_signal_pending(current)) {
> +			abort_bio(bio);
> +			return -EINTR;
> +		}
>  	}
>  
>  	*biop = bio;
> @@ -146,6 +161,10 @@ static int __blkdev_issue_write_zeroes(struct block_device *bdev,
>  			nr_sects = 0;
>  		}
>  		cond_resched();
> +		if (fatal_signal_pending(current)) {
> +			abort_bio(bio);
> +			return -EINTR;
> +		}
>  	}
>  
>  	*biop = bio;
> @@ -190,6 +209,10 @@ static int __blkdev_issue_zero_pages(struct block_device *bdev,
>  				break;
>  		}
>  		cond_resched();
> +		if (fatal_signal_pending(current)) {
> +			abort_bio(bio);
> +			return -EINTR;
> +		}
>  	}
>  
>  	*biop = bio;
> @@ -337,6 +360,11 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
>  			break;
>  		}
>  		cond_resched();
> +		if (fatal_signal_pending(current)) {
> +			abort_bio(bio);
> +			ret = -EINTR;
> +			bio = NULL;
> +		}

The handling for blkdev_issue_secure_erase is different from the others,
and it actually doesn't return immediately; care to add a comment?


Thanks,
Ming
Keith Busch Feb. 22, 2024, 4:18 a.m. UTC | #2
On Thu, Feb 22, 2024 at 12:02:36PM +0800, Ming Lei wrote:
> On Wed, Feb 21, 2024 at 02:20:13PM -0800, Keith Busch wrote:
> >   After the kill signal is observed, instead of submitting and waiting
> >   for the current parent bio in the chain, abort it by ending it
> >   immediately and do the final bio_put() after every previously submitted
> >   chained bio completes.
> 
> I feel this approach is fragile:
> 
> 1) user sends KILL signal
> 
> 2) discard API returns
> 
> 3) submitted discard requests are still running in the background, and
> there can be thousands of such bios
> 
> 4) what if the application or FS code (such as metadata updates) starts
> to write data to the discarded range?

Right, there's no IO order guarantee there, and it sounds reasonable to
expect no potential conflicts after the function returns. We could add a
completion similar to what submit_bio_wait() uses to ensure the previous
bios are all done before returning. At least that looks safe to do for
any case where a fatal signal would apply.
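
A minimal sketch of that idea, assuming the chained parent's ->bi_end_io
only fires once every child has ended (hypothetical code, not the posted
patch):

	/* Wait for the whole chain before returning -EINTR. */
	static void abort_bio_endio(struct bio *bio)
	{
		complete(bio->bi_private);
	}

	static void abort_bio(struct bio *bio)
	{
		DECLARE_COMPLETION_ONSTACK(done);

		bio->bi_private = &done;
		bio->bi_end_io = abort_bio_endio;
		bio_endio(bio);			/* drop the caller's reference */
		wait_for_completion(&done);	/* every chained bio finished */
		bio_put(bio);
	}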
 
> > +		if (fatal_signal_pending(current)) {
> > +			abort_bio(bio);
> > +			ret = -EINTR;
> > +			bio = NULL;
> > +		}
> 
> The handling for blkdev_issue_secure_erase is different from the others,
> and it actually doesn't return immediately; care to add a comment?

Ha, I actually prepared a patch to make secure_erase look like everyone
else. I chose the smaller diff, but it does look weird. I'll reconsider
that for the next version.
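
For reference, a variant that stops the loop right away could just add a
break to the posted secure erase hunk (hypothetical, not part of this
series); execution would then fall through to blk_finish_plug() and
return ret:

		cond_resched();
		if (fatal_signal_pending(current)) {
			abort_bio(bio);
			ret = -EINTR;
			bio = NULL;
			break;
		}
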
Chaitanya Kulkarni Feb. 22, 2024, 10:22 a.m. UTC | #3
On 2/21/24 20:18, Keith Busch wrote:
> On Thu, Feb 22, 2024 at 12:02:36PM +0800, Ming Lei wrote:
>> On Wed, Feb 21, 2024 at 02:20:13PM -0800, Keith Busch wrote:
>>>    After the kill signal is observed, instead of submitting and waiting
>>>    for the current parent bio in the chain, abort it by ending it
>>>    immediately and do the final bio_put() after every previously submitted
>>>    chained bio completes.
>> I feel this approach is fragile:
>>
>> 1) user sends KILL signal
>>
>> 2) discard API returns
>>
>> 3) submitted discard requests are still running in the background, and
>> there can be thousands of such bios
>>
>> 4) what if the application or FS code (such as metadata updates) starts
>> to write data to the discarded range?
> Right, there's no IO order guarantee there, and it sounds reasonable to
> expect no potential conflicts after the function returns. We could add a
> completion similar to what submit_bio_wait() uses to ensure the previous
> bios are all done before returning. At least that looks safe to do for
> any case where a fatal signal would apply.
>   

We need to wait and make sure previously submitted bios have completed
before returning. It will add more time to terminate the process, given
that there could be many bios submitted and outstanding, but I guess that
is the right thing to do to make sure nothing is pending when we return
with -EINTR ...

-ck

Patch

diff --git a/block/blk-lib.c b/block/blk-lib.c
index e59c3069e8351..88f6a4aebe75e 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -35,6 +35,17 @@ static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
 	return round_down(UINT_MAX, discard_granularity) >> SECTOR_SHIFT;
 }
 
+static void abort_bio_endio(struct bio *bio)
+{
+	bio_put(bio);
+}
+
+static void abort_bio(struct bio *bio)
+{
+	bio->bi_end_io = abort_bio_endio;
+	bio_endio(bio);
+}
+
 int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop)
 {
@@ -77,6 +88,10 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		 * is disabled.
 		 */
 		cond_resched();
+		if (fatal_signal_pending(current)) {
+			abort_bio(bio);
+			return -EINTR;
+		}
 	}
 
 	*biop = bio;
@@ -146,6 +161,10 @@ static int __blkdev_issue_write_zeroes(struct block_device *bdev,
 			nr_sects = 0;
 		}
 		cond_resched();
+		if (fatal_signal_pending(current)) {
+			abort_bio(bio);
+			return -EINTR;
+		}
 	}
 
 	*biop = bio;
@@ -190,6 +209,10 @@ static int __blkdev_issue_zero_pages(struct block_device *bdev,
 				break;
 		}
 		cond_resched();
+		if (fatal_signal_pending(current)) {
+			abort_bio(bio);
+			return -EINTR;
+		}
 	}
 
 	*biop = bio;
@@ -337,6 +360,11 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 			break;
 		}
 		cond_resched();
+		if (fatal_signal_pending(current)) {
+			abort_bio(bio);
+			ret = -EINTR;
+			bio = NULL;
+		}
 	}
 	blk_finish_plug(&plug);