
[2/4] block: check virt boundary in bio_will_gap()

Message ID 1455519687-23873-3-git-send-email-ming.lei@canonical.com (mailing list archive)
State New, archived

Commit Message

Ming Lei Feb. 15, 2016, 7:01 a.m. UTC
The following patch will change the way the last bvec is
figured out, which introduces a small cost, so return
immediately if the queue doesn't have a virt boundary
limit. Most devices do not have this limit anyway.

Cc: Sagi Grimberg <sagig@dev.mellanox.co.il>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 include/linux/blkdev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Sagi Grimberg Feb. 15, 2016, 8:22 a.m. UTC | #1
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 4571ef1..b8ff6a3 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -1388,7 +1388,7 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
>   static inline bool bio_will_gap(struct request_queue *q, struct bio *prev,
>   			 struct bio *next)
>   {
> -	if (!bio_has_data(prev))
> +	if (!bio_has_data(prev) || !queue_virt_boundary(q))
>   		return false;

Can we not do that?

bvec_gap_to_prev is already checking the virt_boundary and I'd sorta
like to keep the motivation to optimize bio_get_last_bvec() to be O(1).
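
For reference, a sketch of bvec_gap_to_prev() at this series' baseline
(reconstructed from the kernel around v4.5, so treat it as illustrative):
it already bails out when no virt boundary mask is set, which is the
redundancy being pointed out here.

static inline bool bvec_gap_to_prev(struct request_queue *q,
				    struct bio_vec *bprv, unsigned int offset)
{
	/* No virt boundary, no gap to worry about. */
	if (!queue_virt_boundary(q))
		return false;
	return offset ||
		((bprv->bv_offset + bprv->bv_len) & queue_virt_boundary(q));
}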
Ming Lei Feb. 15, 2016, 10:27 a.m. UTC | #2
On Mon, Feb 15, 2016 at 4:22 PM, Sagi Grimberg <sagig@dev.mellanox.co.il> wrote:
>
>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>> index 4571ef1..b8ff6a3 100644
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -1388,7 +1388,7 @@ static inline bool bvec_gap_to_prev(struct
>> request_queue *q,
>>   static inline bool bio_will_gap(struct request_queue *q, struct bio
>> *prev,
>>                          struct bio *next)
>>   {
>> -       if (!bio_has_data(prev))
>> +       if (!bio_has_data(prev) || !queue_virt_boundary(q))
>>                 return false;
>
>
> Can we not do that?

Given there are only 3 drivers which set a virt boundary, I think
it is reasonable to do that.

>
> bvec_gap_to_prev is already checking the virt_boundary and I'd sorta
> like to keep the motivation to optimize bio_get_last_bvec() to be O(1).

Currently the approaches I have thought of still need to iterate bvec by
bvec; I am not sure O(1) can be reached easily, but I am happy to discuss
an optimized implementation.
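
A hypothetical sketch of the bvec-by-bvec walk being described (the loop
body is illustrative, not the code from patch 3/4): for a cloned bio,
bi_io_vec[bi_vcnt - 1] may not be the bvec the iterator actually ends on,
so the last bvec has to be found by walking the remaining segments.

static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
{
	struct bvec_iter iter;
	struct bio_vec cur;

	/* O(n): every segment is visited; the final assignment wins. */
	bio_for_each_segment(cur, bio, iter)
		*bv = cur;
}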

Thanks,
Ming
Sagi Grimberg Feb. 15, 2016, 8:27 p.m. UTC | #3
>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>> index 4571ef1..b8ff6a3 100644
>>> --- a/include/linux/blkdev.h
>>> +++ b/include/linux/blkdev.h
>>> @@ -1388,7 +1388,7 @@ static inline bool bvec_gap_to_prev(struct
>>> request_queue *q,
>>>    static inline bool bio_will_gap(struct request_queue *q, struct bio
>>> *prev,
>>>                           struct bio *next)
>>>    {
>>> -       if (!bio_has_data(prev))
>>> +       if (!bio_has_data(prev) || !queue_virt_boundary(q))
>>>                 return false;
>>
>>
>> Can we not do that?
>
>> Given there are only 3 drivers which set a virt boundary, I think
>> it is reasonable to do that.

3 drivers that are really performance critical. I don't think we
should add branching that optimizes for some of the drivers, especially
when the drivers that do set virt_boundary *really* care about latency.

>> bvec_gap_to_prev is already checking the virt_boundary and I'd sorta
>> like to keep the motivation to optimize bio_get_last_bvec() to be O(1).
>
> Currently the approaches I have thought of still need to iterate bvec by
> bvec; I am not sure O(1) can be reached easily, but I am happy to discuss
> an optimized implementation.

Me too. Note that I don't mind if the bio split code isn't optimized,
but I do want req_gap_back_merge/req_gap_front_merge to be...

Also, are the bvec_gap_to_prev usages in bio_add_pc_page and
bio_integrity_add_page safe? I didn't test this stuff with integrity
payloads...
Ming Lei Feb. 16, 2016, 1:05 p.m. UTC | #4
On Tue, Feb 16, 2016 at 4:27 AM, Sagi Grimberg <sagig@dev.mellanox.co.il> wrote:
>
>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>> index 4571ef1..b8ff6a3 100644
>>>> --- a/include/linux/blkdev.h
>>>> +++ b/include/linux/blkdev.h
>>>> @@ -1388,7 +1388,7 @@ static inline bool bvec_gap_to_prev(struct
>>>> request_queue *q,
>>>>    static inline bool bio_will_gap(struct request_queue *q, struct bio
>>>> *prev,
>>>>                           struct bio *next)
>>>>    {
>>>> -       if (!bio_has_data(prev))
>>>> +       if (!bio_has_data(prev) || !queue_virt_boundary(q))
>>>>                 return false;
>>>
>>>
>>>
>>> Can we not do that?
>>
>>
>> Given there are only 3 drivers which set a virt boundary, I think
>> it is reasonable to do that.
>
>
> 3 drivers that are really performance critical. I don't think we
> should add branching that optimizes for some of the drivers, especially
> when the drivers that do set virt_boundary *really* care about latency.
>
>>> bvec_gap_to_prev is already checking the virt_boundary and I'd sorta
>>> like to keep the motivation to optimize bio_get_last_bvec() to be O(1).
>>
>>
>> Currently the approaches I have thought of still need to iterate bvec by
>> bvec; I am not sure O(1) can be reached easily, but I am happy to discuss
>> an optimized implementation.
>
>
> Me too. Note that I don't mind if the bio split code isn't optimized,
> but I do want req_gap_back_merge/req_gap_front_merge to be...
>
> Also, are the bvec_gap_to_prev usages in bio_add_pc_page and
> bio_integrity_add_page safe? I didn't test this stuff with integrity

Yes, because both operate on non-cloned bvec tables.

> payloads...
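
Illustrating why that makes these call sites safe (a sketch, assuming the
bio owns its table): a non-cloned bio is never advanced by a cloned
iterator, so its last bvec is directly addressable and no walk is needed.

/* Safe only for non-cloned bios, where the table is the bio's own. */
struct bio_vec *last = &bio->bi_io_vec[bio->bi_vcnt - 1];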

Thanks,
Ming Lei Feb. 16, 2016, 1:08 p.m. UTC | #5
On Tue, Feb 16, 2016 at 4:27 AM, Sagi Grimberg <sagig@dev.mellanox.co.il> wrote:
>
>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>> index 4571ef1..b8ff6a3 100644
>>>> --- a/include/linux/blkdev.h
>>>> +++ b/include/linux/blkdev.h
>>>> @@ -1388,7 +1388,7 @@ static inline bool bvec_gap_to_prev(struct
>>>> request_queue *q,
>>>>    static inline bool bio_will_gap(struct request_queue *q, struct bio
>>>> *prev,
>>>>                           struct bio *next)
>>>>    {
>>>> -       if (!bio_has_data(prev))
>>>> +       if (!bio_has_data(prev) || !queue_virt_boundary(q))
>>>>                 return false;
>>>
>>>
>>>
>>> Can we not do that?
>>
>>
>> Given there are only 3 drivers which set a virt boundary, I think
>> it is reasonable to do that.
>
>
> 3 drivers that are really performance critical. I don't think we
> should add branching that optimizes for some of the drivers, especially
> when the drivers that do set virt_boundary *really* care about latency.

I don't think the extra check in bvec_gap_to_prev() makes any measurable
difference, but if you do care, we can introduce __bvec_gap_to_prev(),
with the check moved into bio_will_gap().
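
A sketch of the split being suggested (hypothetical here;
__bvec_gap_to_prev() is only the name proposed above, shaped after how
such a helper could look):

/* Does only the gap arithmetic; callers must have already checked
 * that queue_virt_boundary(q) is non-zero. */
static inline bool __bvec_gap_to_prev(struct request_queue *q,
				      struct bio_vec *bprv, unsigned int offset)
{
	return offset ||
		((bprv->bv_offset + bprv->bv_len) & queue_virt_boundary(q));
}

static inline bool bvec_gap_to_prev(struct request_queue *q,
				    struct bio_vec *bprv, unsigned int offset)
{
	if (!queue_virt_boundary(q))
		return false;
	return __bvec_gap_to_prev(q, bprv, offset);
}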

Thanks,

Patch

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4571ef1..b8ff6a3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1388,7 +1388,7 @@  static inline bool bvec_gap_to_prev(struct request_queue *q,
 static inline bool bio_will_gap(struct request_queue *q, struct bio *prev,
 			 struct bio *next)
 {
-	if (!bio_has_data(prev))
+	if (!bio_has_data(prev) || !queue_virt_boundary(q))
 		return false;
 
 	return bvec_gap_to_prev(q, &prev->bi_io_vec[prev->bi_vcnt - 1],