
[v2,09/20] nvme: Add helper to execute Reservation Report

Message ID: 20220809000419.10674-10-michael.christie@oracle.com
State: Changes Requested
Series: Use block pr_ops in LIO

Commit Message

Mike Christie Aug. 9, 2022, 12:04 a.m. UTC
This adds a helper to execute the Reservation Report command. The next
patches will then call it and convert the returned info to read_keys and
read_reservation.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/nvme/host/core.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
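
For context, a rough sketch of how a later patch in the series might consume
this helper for read_keys. The nvme_pr_read_keys() name, the struct pr_keys
layout, and the use of the short (non-EDS) reservation status format are
assumptions for illustration only, not code taken from this patch:

/*
 * Hypothetical caller (illustration only): run the Reservation Report and
 * translate the registered-controller entries into the block layer's
 * read_keys data. Uses the existing non-extended
 * struct nvme_reservation_status layout for simplicity; a real conversion
 * would also have to handle the extended format reported through *eds.
 */
static int nvme_pr_read_keys(struct block_device *bdev,
		struct pr_keys *keys_info)
{
	u32 len, max_keys = keys_info->num_keys;
	struct nvme_reservation_status *rs;
	bool eds;
	int ret, i;

	len = struct_size(rs, regctl_ds, max_keys);
	rs = kzalloc(len, GFP_KERNEL);
	if (!rs)
		return -ENOMEM;

	ret = nvme_pr_resv_report(bdev, (u8 *)rs, len, &eds);
	if (ret)
		goto free_rs;

	keys_info->generation = le32_to_cpu(rs->gen);
	/* regctl is a 2-byte little-endian count of registered controllers. */
	keys_info->num_keys = get_unaligned_le16(rs->regctl);

	for (i = 0; i < min(max_keys, keys_info->num_keys); i++)
		keys_info->keys[i] = le64_to_cpu(rs->regctl_ds[i].rkey);

free_rs:
	kfree(rs);
	return ret;
}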

Comments

Chaitanya Kulkarni Aug. 9, 2022, 10:55 a.m. UTC | #1
On 8/8/22 17:04, Mike Christie wrote:
> This adds a helper to execute the Reservation Report. The next patches
> will then convert call it and convert that info to read_keys and
> read_reservation.
> 
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>   drivers/nvme/host/core.c | 27 +++++++++++++++++++++++++++
>   1 file changed, 27 insertions(+)
> 

Based on the comments I've received in the past, please add a function
in the same patch that actually uses it.

Also, please consider whether we can move the PR code into its own file,
if others are okay with that, as the host/core.c file is getting bigger.

-ck
Chaitanya Kulkarni Aug. 9, 2022, 10:56 a.m. UTC | #2
On 8/8/22 17:04, Mike Christie wrote:
> This adds a helper to execute the Reservation Report. The next patches
> will then convert call it and convert that info to read_keys and
> read_reservation.
> 
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>   drivers/nvme/host/core.c | 27 +++++++++++++++++++++++++++
>   1 file changed, 27 insertions(+)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0dc768ae0c16..6b22a5dec122 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2196,6 +2196,33 @@ static int nvme_pr_release(struct block_device *bdev, u64 key, enum pr_type type
>   	return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
>   }
>   
> +static int nvme_pr_resv_report(struct block_device *bdev, u8 *data,
> +		u32 data_len, bool *eds)
> +{
> +	struct nvme_command c = { };
> +	int ret;
> +
> +	c.common.opcode = nvme_cmd_resv_report;
> +	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
> +	c.common.cdw11 = 1;
> +	*eds = true;
> +
> +retry:
> +	if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
> +	    bdev->bd_disk->fops == &nvme_ns_head_ops)
> +		ret = nvme_send_ns_head_pr_command(bdev, &c, data, data_len);
> +	else
> +		ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c,
> +					      data, data_len);
> +	if (ret == NVME_SC_HOST_ID_INCONSIST && c.common.cdw11) {
> +		c.common.cdw11 = 0;
> +		*eds = false;
> +		goto retry;

Unconditional retries without any limit can create problems;
perhaps consider adding a soft limit.

-ck
Keith Busch Aug. 9, 2022, 2:51 p.m. UTC | #3
On Tue, Aug 09, 2022 at 10:56:55AM +0000, Chaitanya Kulkarni wrote:
> On 8/8/22 17:04, Mike Christie wrote:
> > +
> > +	c.common.opcode = nvme_cmd_resv_report;
> > +	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
> > +	c.common.cdw11 = 1;
> > +	*eds = true;
> > +
> > +retry:
> > +	if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
> > +	    bdev->bd_disk->fops == &nvme_ns_head_ops)
> > +		ret = nvme_send_ns_head_pr_command(bdev, &c, data, data_len);
> > +	else
> > +		ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c,
> > +					      data, data_len);
> > +	if (ret == NVME_SC_HOST_ID_INCONSIST && c.common.cdw11) {
> > +		c.common.cdw11 = 0;
> > +		*eds = false;
> > +		goto retry;
> 
> Unconditional retries without any limit can create problems,
> perhaps consider adding some soft limits.

It's already conditioned on cdw11, which is cleared to 0 on the 2nd try. Not
that that's particularly clear. I'd suggest naming an enum value for it so the
code tells us what the significance of cdw11 is in this context (it's the
Extended Data Structure control flag).
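
For illustration, a sketch of what that naming could look like; the enum name
and the cpu_to_le32() wrapping below are assumptions for this sketch, not code
from the patch or the thread:

/*
 * Illustrative naming sketch: give the Reservation Report Extended Data
 * Structure control flag (cdw11 bit 0) a name so the retry condition
 * documents itself instead of testing a bare cdw11 value.
 */
enum {
	NVME_EXTENDED_DATA_STRUCT	= 0x1,	/* cdw11 bit 0: EDS format */
};

static int nvme_pr_resv_report(struct block_device *bdev, u8 *data,
		u32 data_len, bool *eds)
{
	struct nvme_command c = { };
	int ret;

	c.common.opcode = nvme_cmd_resv_report;
	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
	c.common.cdw11 = cpu_to_le32(NVME_EXTENDED_DATA_STRUCT);
	*eds = true;

retry:
	if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
	    bdev->bd_disk->fops == &nvme_ns_head_ops)
		ret = nvme_send_ns_head_pr_command(bdev, &c, data, data_len);
	else
		ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c,
					      data, data_len);
	/*
	 * A host ID format mismatch means the controller cannot service the
	 * requested data structure; fall back to the plain format once.
	 */
	if (ret == NVME_SC_HOST_ID_INCONSIST &&
	    c.common.cdw11 == cpu_to_le32(NVME_EXTENDED_DATA_STRUCT)) {
		c.common.cdw11 = 0;
		*eds = false;
		goto retry;
	}

	return ret;
}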
Mike Christie Aug. 9, 2022, 4:18 p.m. UTC | #4
On 8/9/22 5:55 AM, Chaitanya Kulkarni wrote:
> On 8/8/22 17:04, Mike Christie wrote:
>> This adds a helper to execute the Reservation Report. The next patches
>> will then convert call it and convert that info to read_keys and
>> read_reservation.
>>
>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>> ---
>>   drivers/nvme/host/core.c | 27 +++++++++++++++++++++++++++
>>   1 file changed, 27 insertions(+)
>>
> 
> from the comments I've received in the past, please add a function
> in the patch where it is actually using it.
> 

To address your comment and Christoph's, I'll merge patches 8-11 so the
helpers and their users are together.


> Also, please consider if we can move pr code into its separate file
> if others are okay with it as host/core.c file is getting bigger.
> 

Ok.
Mike Christie Aug. 9, 2022, 4:21 p.m. UTC | #5
On 8/9/22 9:51 AM, Keith Busch wrote:
> On Tue, Aug 09, 2022 at 10:56:55AM +0000, Chaitanya Kulkarni wrote:
>> On 8/8/22 17:04, Mike Christie wrote:
>>> +
>>> +	c.common.opcode = nvme_cmd_resv_report;
>>> +	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
>>> +	c.common.cdw11 = 1;
>>> +	*eds = true;
>>> +
>>> +retry:
>>> +	if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
>>> +	    bdev->bd_disk->fops == &nvme_ns_head_ops)
>>> +		ret = nvme_send_ns_head_pr_command(bdev, &c, data, data_len);
>>> +	else
>>> +		ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c,
>>> +					      data, data_len);
>>> +	if (ret == NVME_SC_HOST_ID_INCONSIST && c.common.cdw11) {
>>> +		c.common.cdw11 = 0;
>>> +		*eds = false;
>>> +		goto retry;
>>
>> Unconditional retries without any limit can create problems,
>> perhaps consider adding some soft limits.
> 
> It's already conditioned on cdw11, which is cleared to 0 on the 2nd try. Not
> that that's particularly clear. I'd suggest naming an enum value for it so the
> code tells us what the signficance of cdw11 is in this context (it's the
> Extended Data Structure control flag).

Will do that.

Chaitanya, regarding your comment: with a bad device we could hit an issue
where we cleared the Extended Data Structure control flag and it still
returned NVME_SC_HOST_ID_INCONSIST, and we'd be in an infinite loop, so I'll
handle that.
Chaitanya Kulkarni Aug. 10, 2022, 1:45 a.m. UTC | #6
On 8/9/22 09:21, Mike Christie wrote:
> On 8/9/22 9:51 AM, Keith Busch wrote:
>> On Tue, Aug 09, 2022 at 10:56:55AM +0000, Chaitanya Kulkarni wrote:
>>> On 8/8/22 17:04, Mike Christie wrote:
>>>> +
>>>> +	c.common.opcode = nvme_cmd_resv_report;
>>>> +	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
>>>> +	c.common.cdw11 = 1;
>>>> +	*eds = true;
>>>> +
>>>> +retry:
>>>> +	if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
>>>> +	    bdev->bd_disk->fops == &nvme_ns_head_ops)
>>>> +		ret = nvme_send_ns_head_pr_command(bdev, &c, data, data_len);
>>>> +	else
>>>> +		ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c,
>>>> +					      data, data_len);
>>>> +	if (ret == NVME_SC_HOST_ID_INCONSIST && c.common.cdw11) {
>>>> +		c.common.cdw11 = 0;
>>>> +		*eds = false;
>>>> +		goto retry;
>>>
>>> Unconditional retries without any limit can create problems,
>>> perhaps consider adding some soft limits.
>>
>> It's already conditioned on cdw11, which is cleared to 0 on the 2nd try. Not
>> that that's particularly clear. I'd suggest naming an enum value for it so the
>> code tells us what the signficance of cdw11 is in this context (it's the
>> Extended Data Structure control flag).
> 

True, my concern is that if the controller went bad (not a common case, but
it is hardware after all) we should have some soft limit to avoid infinite
retries.

> Will do that.
> 
> Chaitanya for your comment, with a bad device we could hit an issue where we
> we cleared the Extended Data Structure control flag and it also returned
> NVME_SC_HOST_ID_INCONSIST and we'd be in an infinite loop, so I'll handle that.
> 

that will be great.

-ck
Keith Busch Aug. 10, 2022, 3:17 a.m. UTC | #7
On Wed, Aug 10, 2022 at 01:45:48AM +0000, Chaitanya Kulkarni wrote:
> On 8/9/22 09:21, Mike Christie wrote:
> > On 8/9/22 9:51 AM, Keith Busch wrote:
> >> On Tue, Aug 09, 2022 at 10:56:55AM +0000, Chaitanya Kulkarni wrote:
> >>> On 8/8/22 17:04, Mike Christie wrote:
> >>>> +
> >>>> +	c.common.opcode = nvme_cmd_resv_report;
> >>>> +	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
> >>>> +	c.common.cdw11 = 1;
> >>>> +	*eds = true;
> >>>> +
> >>>> +retry:
> >>>> +	if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
> >>>> +	    bdev->bd_disk->fops == &nvme_ns_head_ops)
> >>>> +		ret = nvme_send_ns_head_pr_command(bdev, &c, data, data_len);
> >>>> +	else
> >>>> +		ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c,
> >>>> +					      data, data_len);
> >>>> +	if (ret == NVME_SC_HOST_ID_INCONSIST && c.common.cdw11) {
> >>>> +		c.common.cdw11 = 0;
> >>>> +		*eds = false;
> >>>> +		goto retry;
> >>>
> >>> Unconditional retries without any limit can create problems,
> >>> perhaps consider adding some soft limits.
> >>
> >> It's already conditioned on cdw11, which is cleared to 0 on the 2nd try. Not
> >> that that's particularly clear. I'd suggest naming an enum value for it so the
> >> code tells us what the signficance of cdw11 is in this context (it's the
> >> Extended Data Structure control flag).
> > 
> 
> true, my concern is if controller went bad (not a common case but it is
> H/W afterall) then we should have some soft limit to avoid infinite
> retries.

cdw11 is '0' on the 2nd try, and the 'goto' is conditioned on cdw11 being
non-zero. There's no infinite retry here.
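
For readers following the retry discussion: one way to make the single-retry
bound explicit is a two-pass loop instead of the goto. This is purely an
illustrative alternative under the same assumptions as the patch, not
something proposed in the thread:

/*
 * Illustrative alternative: iterate over the two report formats so the
 * "at most one retry" bound is explicit rather than implied by cdw11
 * being cleared on the second pass.
 */
static int nvme_pr_resv_report(struct block_device *bdev, u8 *data,
		u32 data_len, bool *eds)
{
	struct nvme_command c = { };
	int ret, i;

	c.common.opcode = nvme_cmd_resv_report;
	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));

	/* First pass requests the extended data structure, second does not. */
	for (i = 0; i < 2; i++) {
		*eds = (i == 0);
		c.common.cdw11 = cpu_to_le32(*eds ? 1 : 0);

		if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
		    bdev->bd_disk->fops == &nvme_ns_head_ops)
			ret = nvme_send_ns_head_pr_command(bdev, &c, data,
							   data_len);
		else
			ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data,
						      &c, data, data_len);

		/* Only a host ID format mismatch warrants the second pass. */
		if (ret != NVME_SC_HOST_ID_INCONSIST)
			break;
	}

	return ret;
}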
Chaitanya Kulkarni Aug. 10, 2022, 4:54 a.m. UTC | #8
> cdw11 is '0' on the 2nd try, and the 'goto' is conditioned on cdw11 being
> non-zero. There's no infinite retry here.

Right, I misread the code; thanks for pointing that out.

-ck

Patch

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0dc768ae0c16..6b22a5dec122 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2196,6 +2196,33 @@  static int nvme_pr_release(struct block_device *bdev, u64 key, enum pr_type type
 	return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
 }
 
+static int nvme_pr_resv_report(struct block_device *bdev, u8 *data,
+		u32 data_len, bool *eds)
+{
+	struct nvme_command c = { };
+	int ret;
+
+	c.common.opcode = nvme_cmd_resv_report;
+	c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
+	c.common.cdw11 = 1;
+	*eds = true;
+
+retry:
+	if (IS_ENABLED(CONFIG_NVME_MULTIPATH) &&
+	    bdev->bd_disk->fops == &nvme_ns_head_ops)
+		ret = nvme_send_ns_head_pr_command(bdev, &c, data, data_len);
+	else
+		ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c,
+					      data, data_len);
+	if (ret == NVME_SC_HOST_ID_INCONSIST && c.common.cdw11) {
+		c.common.cdw11 = 0;
+		*eds = false;
+		goto retry;
+	}
+
+	return ret;
+}
+
 const struct pr_ops nvme_pr_ops = {
 	.pr_register	= nvme_pr_register,
 	.pr_reserve	= nvme_pr_reserve,