Message ID | 571A94AF.7000609@sandisk.com (mailing list archive) |
---|---|
State | Superseded |
On Fri, Apr 22, 2016 at 02:16:31PM -0700, Bart Van Assche wrote:
> If both max_sectors and the queue_depth are high enough it can
> happen that the MR pool is depleted temporarily. This causes
> the SRP initiator to report mapping failures. Although the SRP
> initiator recovers from such mapping failures, prevent that
> this can happen by limiting max_sectors.
>
> Reported-by: Laurence Oberman <loberman@redhat.com>
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/infiniband/ulp/srp/ib_srp.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
> index a173ec4..ebd4d90 100644
> --- a/drivers/infiniband/ulp/srp/ib_srp.c
> +++ b/drivers/infiniband/ulp/srp/ib_srp.c
> @@ -3172,6 +3172,7 @@ static ssize_t srp_create_target(struct device *dev,
>  	struct srp_device *srp_dev = host->srp_dev;
>  	struct ib_device *ibdev = srp_dev->dev;
>  	int ret, node_idx, node, cpu, i;
> +	unsigned int max_max_sectors;
>  	bool multich = false;
>
>  	target_host = scsi_host_alloc(&srp_template,
> @@ -3228,6 +3229,25 @@ static ssize_t srp_create_target(struct device *dev,
>  		target->sg_tablesize = target->cmd_sg_cnt;
>  	}
>
> +	if (srp_dev->use_fast_reg || srp_dev->use_fmr) {
> +		/*
> +		 * FR and FMR can only map one HCA page per entry. If the
> +		 * start address is not aligned on a HCA page boundary two
> +		 * entries will be used for the head and the tail although
> +		 * these two entries combined contain at most one HCA page of
> +		 * data. Hence the "- 1" in the calculation below.
> +		 */
> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
> +			(ilog2(srp_dev->mr_page_size) - 9);

From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
case device will advertise it.

Is this understanding correct?
Will the code work in such case?

> +		if (target->scsi_host->max_sectors > max_max_sectors) {
> +			shost_printk(KERN_WARNING, target->scsi_host,
> +				     PFX "Reducing max_sectors from %d to %d\n",
> +				     target->scsi_host->max_sectors,
> +				     max_max_sectors);
> +			target->scsi_host->max_sectors = max_max_sectors;
> +		}
> +	}
> +
>  	target_host->sg_tablesize = target->sg_tablesize;
>  	target->mr_pool_size = target->scsi_host->can_queue;
>  	target->indirect_size = target->sg_tablesize *
> --
> 2.8.1
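One observation worth making explicit here (implied by the question above but not stated): if max_pages_per_mr were 1, the proposed expression would evaluate to zero. A minimal standalone sketch, assuming a 4 KB HCA page:

#include <stdio.h>

int main(void)
{
        /* Hypothetical corner case raised above: max_pages_per_mr == 1 and a
         * 4 KB HCA page, so ilog2(mr_page_size) == 12.
         */
        unsigned int max_max_sectors = (1u - 1u) << (12 - 9);

        /* Prints 0: the clamp that follows in the patch would then reduce
         * scsi_host->max_sectors to 0.
         */
        printf("%u\n", max_max_sectors);
        return 0;
}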
Hi Leon,

I never considered that because I have never seen it set to 1. The values
for my setup are shown below as 512. In what situation would
srp_dev->max_pages_per_mr=1?

I am using

[root@srptest ~]# cat /etc/modprobe.d/ib_srp.conf
options ib_srp cmd_sg_entries=64 indirect_sg_entries=512
[root@srptest ~]# cat /etc/modprobe.d/ib_srpt.conf
options ib_srpt srp_max_req_size=8296

[  221.378728] scsi host4: ib_srp: Reducing max_sectors from 8192 to 4088
[  229.563814] scsi host5: ib_srp: Reducing max_sectors from 8192 to 4088
[  245.620980] scsi host5: ib_srp: Reducing max_sectors from 8192 to 4088
[  253.677979] scsi host6: ib_srp: Reducing max_sectors from 8192 to 4088

[  221.303479] RHDEBUG: srp_dev->max_pages_per_mr - 1 = 511
[  221.332902] RHDEBUG: srp_dev->mr_page_size = 4096 and ilog(srp_dev->mr_page_size) -9 = 3
[  229.563810] RHDEBUG: srp_dev->max_pages_per_mr - 1 = 511
[  229.563811] RHDEBUG: srp_dev->mr_page_size = 4096 and ilog(srp_dev->mr_page_size) -9 = 3
[  245.544452] RHDEBUG: srp_dev->max_pages_per_mr - 1 = 511
[  245.574899] RHDEBUG: srp_dev->mr_page_size = 4096 and ilog(srp_dev->mr_page_size) -9 = 3
[  253.677977] RHDEBUG: srp_dev->max_pages_per_mr - 1 = 511
[  253.677978] RHDEBUG: srp_dev->mr_page_size = 4096 and ilog(srp_dev->mr_page_size) -9 = 3

I can test the defaults if you like.

Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services

----- Original Message -----
> From: "Leon Romanovsky" <leon@kernel.org>
> To: "Bart Van Assche" <bart.vanassche@sandisk.com>
> Cc: "Doug Ledford" <dledford@redhat.com>, "Christoph Hellwig" <hch@lst.de>, "Sagi Grimberg" <sagi@grimberg.me>, "Laurence Oberman" <loberman@redhat.com>, linux-rdma@vger.kernel.org
> Sent: Sunday, April 24, 2016 4:35:38 AM
> Subject: Re: [PATCH 11/11] IB/srp: Prevent mapping failures

On Fri, Apr 22, 2016 at 02:16:31PM -0700, Bart Van Assche wrote:
> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
> +			(ilog2(srp_dev->mr_page_size) - 9);

From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
case device will advertise it.

Is this understanding correct?
Will the code work in such case?
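To make the arithmetic in the RHDEBUG output concrete, here is a small standalone program (not driver code; it simply plugs in the values reported above) that reproduces the 8192 to 4088 reduction:

#include <stdio.h>

/* Standalone illustration: max_pages_per_mr = 512 and mr_page_size = 4096,
 * the values shown in the RHDEBUG lines above.
 */
static unsigned int ilog2_u(unsigned int v)
{
        unsigned int r = 0;

        while (v >>= 1)
                r++;
        return r;
}

int main(void)
{
        unsigned int max_pages_per_mr = 512;    /* reported value */
        unsigned int mr_page_size = 4096;       /* reported value */
        unsigned int max_max_sectors;

        /* (512 - 1) << (ilog2(4096) - 9) = 511 << 3 = 4088 512-byte sectors,
         * i.e. a 2044 KiB maximum transfer, hence the "8192 to 4088" messages.
         */
        max_max_sectors = (max_pages_per_mr - 1) <<
                (ilog2_u(mr_page_size) - 9);
        printf("max_max_sectors = %u\n", max_max_sectors);     /* 4088 */
        return 0;
}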
On 04/24/16 01:35, Leon Romanovsky wrote:
> On Fri, Apr 22, 2016 at 02:16:31PM -0700, Bart Van Assche wrote:
>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>> +			(ilog2(srp_dev->mr_page_size) - 9);
>
> From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
> case device will advertise it.
>
> Is this understanding correct?
> Will the code work in such case?

Hello Leon,

I'm not sure a HCA driver exists for which max_pages_per_mr == 1.
Additionally, I'm not sure such a driver would be useful because such a
driver would limit the maximum data transfer size for the iSER protocol
to 4 KB. That is too low to reach good performance.

Bart.
On Sun, Apr 24, 2016 at 05:48:53PM -0700, Bart Van Assche wrote:
> On 04/24/16 01:35, Leon Romanovsky wrote:
>> On Fri, Apr 22, 2016 at 02:16:31PM -0700, Bart Van Assche wrote:
>>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>>> +			(ilog2(srp_dev->mr_page_size) - 9);
>>
>> From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
>> case device will advertise it.
>>
>> Is this understanding correct?
>> Will the code work in such case?
>
> Hello Leon,
>
> I'm not sure a HCA driver exists for which max_pages_per_mr == 1.
> Additionally, I'm not sure such a driver would be useful because such a
> driver would limit the maximum data transfer size for the iSER protocol
> to 4 KB. That is too low to reach good performance.

Bart and Laurence,
You are raising valuable arguments about sanity of setting srp_dev->max_pages_per_mr
to be equal 1. I totally agree with you that it is insane, however my
original question was "does the code support such value?"

> Bart.
>>>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>>>> +			(ilog2(srp_dev->mr_page_size) - 9);
>>>
>>> From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
>>> case device will advertise it.
>>>
>>> Is this understanding correct?
>>> Will the code work in such case?
>>
>> Hello Leon,
>>
>> I'm not sure a HCA driver exists for which max_pages_per_mr == 1.
>> Additionally, I'm not sure such a driver would be useful because such a
>> driver would limit the maximum data transfer size for the iSER protocol
>> to 4 KB. That is too low to reach good performance.
>
> Bart and Laurence,
> You are raising valuable arguments about sanity of setting srp_dev->max_pages_per_mr
> to be equal 1. I totally agree with you that it is insane, however my
> original question was "does the code support such value?"

I don't see why we should worry about imaginary devices.
On Mon, Apr 25, 2016 at 12:46:59PM +0300, Sagi Grimberg wrote:
>
>>>>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>>>>> +			(ilog2(srp_dev->mr_page_size) - 9);
>>>>
>>>> From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
>>>> case device will advertise it.
>>>>
>>>> Is this understanding correct?
>>>> Will the code work in such case?
>>>
>>> Hello Leon,
>>>
>>> I'm not sure a HCA driver exists for which max_pages_per_mr == 1.
>>> Additionally, I'm not sure such a driver would be useful because such a
>>> driver would limit the maximum data transfer size for the iSER protocol
>>> to 4 KB. That is too low to reach good performance.
>>
>> Bart and Laurence,
>> You are raising valuable arguments about sanity of setting srp_dev->max_pages_per_mr
>> to be equal 1. I totally agree with you that it is insane, however my
>> original question was "does the code support such value?"
>
> I don't see why we should worry about imaginary devices.

It is called corner case and the code supports it.
On 04/25/2016 08:53 AM, Leon Romanovsky wrote:
> On Mon, Apr 25, 2016 at 12:46:59PM +0300, Sagi Grimberg wrote:
>>
>>>>>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>>>>>> +			(ilog2(srp_dev->mr_page_size) - 9);
>>>>>
>>>>> From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
>>>>> case device will advertise it.
>>>>>
>>>>> Is this understanding correct?
>>>>> Will the code work in such case?
>>>>
>>>> Hello Leon,
>>>>
>>>> I'm not sure a HCA driver exists for which max_pages_per_mr == 1.
>>>> Additionally, I'm not sure such a driver would be useful because such a
>>>> driver would limit the maximum data transfer size for the iSER protocol
>>>> to 4 KB. That is too low to reach good performance.
>>>
>>> Bart and Laurence,
>>> You are raising valuable arguments about sanity of setting srp_dev->max_pages_per_mr
>>> to be equal 1. I totally agree with you that it is insane, however my
>>> original question was "does the code support such value?"
>>
>> I don't see why we should worry about imaginary devices.
>
> It is called corner case and the code supports it.

I can add a WARN_ON_ONCE() statement for max_pages_per_mr == 1. That will
make it easy to figure out where to start looking if in the future a HCA
driver would be added that causes max_pages_per_mr == 1.

Bart.
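A minimal sketch of the kind of check Bart describes (hypothetical; this is not an actual follow-up patch, and the exact placement inside srp_create_target() is assumed):

	if (srp_dev->use_fast_reg || srp_dev->use_fmr) {
		/*
		 * Hypothetical sketch of the suggestion above: a device that
		 * can map only a single page per MR would clamp max_sectors
		 * to 0, so warn once to make such a HCA driver easy to spot.
		 */
		WARN_ON_ONCE(srp_dev->max_pages_per_mr == 1);

		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
			(ilog2(srp_dev->mr_page_size) - 9);
		/* ... clamp of target->scsi_host->max_sectors as in the patch ... */
	}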
On Mon, Apr 25, 2016 at 09:16:18AM -0700, Bart Van Assche wrote:
> On 04/25/2016 08:53 AM, Leon Romanovsky wrote:
>> On Mon, Apr 25, 2016 at 12:46:59PM +0300, Sagi Grimberg wrote:
>>>
>>>>>>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>>>>>>> +			(ilog2(srp_dev->mr_page_size) - 9);
>>>>>>
>>>>>> From my understanding, srp_dev->max_pages_per_mr can be equal to 1, in
>>>>>> case device will advertise it.
>>>>>>
>>>>>> Is this understanding correct?
>>>>>> Will the code work in such case?
>>>>>
>>>>> Hello Leon,
>>>>>
>>>>> I'm not sure a HCA driver exists for which max_pages_per_mr == 1.
>>>>> Additionally, I'm not sure such a driver would be useful because such a
>>>>> driver would limit the maximum data transfer size for the iSER protocol
>>>>> to 4 KB. That is too low to reach good performance.
>>>>
>>>> Bart and Laurence,
>>>> You are raising valuable arguments about sanity of setting srp_dev->max_pages_per_mr
>>>> to be equal 1. I totally agree with you that it is insane, however my
>>>> original question was "does the code support such value?"
>>>
>>> I don't see why we should worry about imaginary devices.
>>
>> It is called corner case and the code supports it.
>
> I can add a WARN_ON_ONCE() statement for max_pages_per_mr == 1. That will
> make it easy to figure out where to start looking if in the future a HCA
> driver would be added that causes max_pages_per_mr == 1.

Thanks Bart,
It is fair enough.

> Bart.
Looks good,
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
>> I can add a WARN_ON_ONCE() statement for max_pages_per_mr == 1. That will
>> make it easy to figure out where to start looking if in the future a HCA
>> driver would be added that causes max_pages_per_mr == 1.

If we're on the imaginary path, you can flat out fail the host creation.
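Sketched out, the alternative Sagi suggests might look roughly like this (hypothetical; the error value and the cleanup label are assumptions, not taken from the driver):

	if ((srp_dev->use_fast_reg || srp_dev->use_fmr) &&
	    srp_dev->max_pages_per_mr <= 1) {
		/* Hypothetical alternative to a WARN_ON_ONCE(): refuse to
		 * create the target host when the HCA cannot map more than
		 * one page per MR, since max_sectors would otherwise be
		 * clamped to 0.
		 */
		ret = -EINVAL;
		goto out;	/* assumed cleanup label */
	}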
On Fri, Apr 22, 2016 at 02:16:31PM -0700, Bart Van Assche wrote:
> If both max_sectors and the queue_depth are high enough it can
> happen that the MR pool is depleted temporarily. This causes
> the SRP initiator to report mapping failures. Although the SRP
> initiator recovers from such mapping failures, prevent that
> this can happen by limiting max_sectors.

FYI, even with this patch I see tons of errors like:

[ 2237.161106] scsi host7: ib_srp: Failed to map data (-12)

> +		/*
> +		 * FR and FMR can only map one HCA page per entry. If the
> +		 * start address is not aligned on a HCA page boundary two
> +		 * entries will be used for the head and the tail although
> +		 * these two entries combined contain at most one HCA page of
> +		 * data. Hence the "- 1" in the calculation below.
> +		 */
> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
> +			(ilog2(srp_dev->mr_page_size) - 9);
> +		if (target->scsi_host->max_sectors > max_max_sectors) {
> +			shost_printk(KERN_WARNING, target->scsi_host,
> +				     PFX "Reducing max_sectors from %d to %d\n",
> +				     target->scsi_host->max_sectors,
> +				     max_max_sectors);
> +			target->scsi_host->max_sectors = max_max_sectors;
> +		}

I don't think there is any good reason to printk a warning here -
limited hardware is a totally normal thing. E.g. if we merge
your RDMA/CM support and someone runs SRP on chelsio hardware they'd
probably hit this all the time..
On 05/03/2016 02:33 AM, Christoph Hellwig wrote:
> On Fri, Apr 22, 2016 at 02:16:31PM -0700, Bart Van Assche wrote:
>> If both max_sectors and the queue_depth are high enough it can
>> happen that the MR pool is depleted temporarily. This causes
>> the SRP initiator to report mapping failures. Although the SRP
>> initiator recovers from such mapping failures, prevent that
>> this can happen by limiting max_sectors.
>
> FYI, even with this patch I see tons of errors like:
>
> [ 2237.161106] scsi host7: ib_srp: Failed to map data (-12)

That's unintended. I can reproduce this and will analyze this further.

>> +		/*
>> +		 * FR and FMR can only map one HCA page per entry. If the
>> +		 * start address is not aligned on a HCA page boundary two
>> +		 * entries will be used for the head and the tail although
>> +		 * these two entries combined contain at most one HCA page of
>> +		 * data. Hence the "- 1" in the calculation below.
>> +		 */
>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>> +			(ilog2(srp_dev->mr_page_size) - 9);
>> +		if (target->scsi_host->max_sectors > max_max_sectors) {
>> +			shost_printk(KERN_WARNING, target->scsi_host,
>> +				     PFX "Reducing max_sectors from %d to %d\n",
>> +				     target->scsi_host->max_sectors,
>> +				     max_max_sectors);
>> +			target->scsi_host->max_sectors = max_max_sectors;
>> +		}
>
> I don't think there is any good reason to printk a warning here -
> limited hardware is a totally normal thing. E.g. if we merge
> your RDMA/CM support and someone runs SRP on chelsio hardware they'd
> probably hit this all the time..

Are you sure? What I see in the v4.6-rc6 tree seems to indicate that Chelsio
hardware supports large page lists:

$ git grep -nHw T[34]_MAX_MR_SIZE
drivers/infiniband/hw/cxgb3/cxio_hal.h:58:#define T3_MAX_MR_SIZE 0x100000000ULL
drivers/infiniband/hw/cxgb3/iwch.c:125:	rnicp->attr.max_mr_size = T3_MAX_MR_SIZE;
drivers/infiniband/hw/cxgb4/provider.c:328:	props->max_mr_size = T4_MAX_MR_SIZE;
drivers/infiniband/hw/cxgb4/t4.h:41:#define T4_MAX_MR_SIZE (~0ULL)

Bart.
----- Original Message -----
> From: "Bart Van Assche" <bart.vanassche@sandisk.com>
> To: "Christoph Hellwig" <hch@lst.de>
> Cc: "Doug Ledford" <dledford@redhat.com>, "Sagi Grimberg" <sagi@grimberg.me>, "Laurence Oberman"
> <loberman@redhat.com>, linux-rdma@vger.kernel.org
> Sent: Tuesday, May 3, 2016 5:13:32 PM
> Subject: Re: [PATCH 11/11] IB/srp: Prevent mapping failures
>
> On 05/03/2016 02:33 AM, Christoph Hellwig wrote:
>> On Fri, Apr 22, 2016 at 02:16:31PM -0700, Bart Van Assche wrote:
>>> If both max_sectors and the queue_depth are high enough it can
>>> happen that the MR pool is depleted temporarily. This causes
>>> the SRP initiator to report mapping failures. Although the SRP
>>> initiator recovers from such mapping failures, prevent that
>>> this can happen by limiting max_sectors.
>>
>> FYI, even with this patch I see tons of errors like:
>>
>> [ 2237.161106] scsi host7: ib_srp: Failed to map data (-12)
>
> That's unintended. I can reproduce this and will analyze this further.
>
>>> +		/*
>>> +		 * FR and FMR can only map one HCA page per entry. If the
>>> +		 * start address is not aligned on a HCA page boundary two
>>> +		 * entries will be used for the head and the tail although
>>> +		 * these two entries combined contain at most one HCA page of
>>> +		 * data. Hence the "- 1" in the calculation below.
>>> +		 */
>>> +		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
>>> +			(ilog2(srp_dev->mr_page_size) - 9);
>>> +		if (target->scsi_host->max_sectors > max_max_sectors) {
>>> +			shost_printk(KERN_WARNING, target->scsi_host,
>>> +				     PFX "Reducing max_sectors from %d to %d\n",
>>> +				     target->scsi_host->max_sectors,
>>> +				     max_max_sectors);
>>> +			target->scsi_host->max_sectors = max_max_sectors;
>>> +		}
>>
>> I don't think there is any good reason to printk a warning here -
>> limited hardware is a totally normal thing. E.g. if we merge
>> your RDMA/CM support and someone runs SRP on chelsio hardware they'd
>> probably hit this all the time..
>
> Are you sure? What I see in the v4.6-rc6 tree seems to indicate that Chelsio
> hardware supports large page lists:
>
> $ git grep -nHw T[34]_MAX_MR_SIZE
> drivers/infiniband/hw/cxgb3/cxio_hal.h:58:#define T3_MAX_MR_SIZE 0x100000000ULL
> drivers/infiniband/hw/cxgb3/iwch.c:125:	rnicp->attr.max_mr_size = T3_MAX_MR_SIZE;
> drivers/infiniband/hw/cxgb4/provider.c:328:	props->max_mr_size = T4_MAX_MR_SIZE;
> drivers/infiniband/hw/cxgb4/t4.h:41:#define T4_MAX_MR_SIZE (~0ULL)
>
> Bart.

With the latest patches from Bart, if I keep the max_sectors_kb throttled
per the patch, I seem to avoid the "ib_srp: Failed to map data (-12)"
messages. Without that patch I can set it to 4M and then immediately will
see the failures. Granted I did not leave it running all of last weekend.
I will go back to it as I already gave a Tested-by:.

Christoph: I think I fixed my mailer per your liking now, I hope so. :)

Thanks
Laurence
On Tue, May 03, 2016 at 02:13:32PM -0700, Bart Van Assche wrote:
>> I don't think there is any good reason to printk a warning here -
>> limited hardware is a totally normal thing. E.g. if we merge
>> your RDMA/CM support and someone runs SRP on chelsio hardware they'd
>> probably hit this all the time..
>
> Are you sure? What I see in the v4.6-rc6 tree seems to indicate that Chelsio
> hardware supports large page lists:

You're right - I was confused about the small data sizes it allows
without MRs.

Either way, limited hardware on the initiator side is something totally
normal that happens all the time, especially with SCSI HBAs, so I'd
prefer to not spew a warning about it.
On 05/04/2016 02:09 AM, Christoph Hellwig wrote:
> Either way, limited hardware on the initiator side is
> something totally normal that happens all the time especially with
> SCSI HBAs, so I'd prefer to not spew a warning about it.

OK, I will change the shost_printk() into a pr_debug(). That will ensure
that this message doesn't show up in the system log unless it is enabled
explicitly through the dynamic debug facility in debugfs.

Bart.
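The change described above would presumably look something like this (a sketch only; the final message text is assumed, not taken from a posted patch):

		if (target->scsi_host->max_sectors > max_max_sectors) {
			/* Sketch of the change described above: demoted from
			 * shost_printk(KERN_WARNING, ...), so the message is
			 * only emitted when enabled via dynamic debug.
			 */
			pr_debug("Reducing max_sectors from %d to %d\n",
				 target->scsi_host->max_sectors,
				 max_max_sectors);
			target->scsi_host->max_sectors = max_max_sectors;
		}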
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index a173ec4..ebd4d90 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -3172,6 +3172,7 @@ static ssize_t srp_create_target(struct device *dev,
 	struct srp_device *srp_dev = host->srp_dev;
 	struct ib_device *ibdev = srp_dev->dev;
 	int ret, node_idx, node, cpu, i;
+	unsigned int max_max_sectors;
 	bool multich = false;
 
 	target_host = scsi_host_alloc(&srp_template,
@@ -3228,6 +3229,25 @@ static ssize_t srp_create_target(struct device *dev,
 		target->sg_tablesize = target->cmd_sg_cnt;
 	}
 
+	if (srp_dev->use_fast_reg || srp_dev->use_fmr) {
+		/*
+		 * FR and FMR can only map one HCA page per entry. If the
+		 * start address is not aligned on a HCA page boundary two
+		 * entries will be used for the head and the tail although
+		 * these two entries combined contain at most one HCA page of
+		 * data. Hence the "- 1" in the calculation below.
+		 */
+		max_max_sectors = (srp_dev->max_pages_per_mr - 1) <<
+			(ilog2(srp_dev->mr_page_size) - 9);
+		if (target->scsi_host->max_sectors > max_max_sectors) {
+			shost_printk(KERN_WARNING, target->scsi_host,
+				     PFX "Reducing max_sectors from %d to %d\n",
+				     target->scsi_host->max_sectors,
+				     max_max_sectors);
+			target->scsi_host->max_sectors = max_max_sectors;
+		}
+	}
+
 	target_host->sg_tablesize = target->sg_tablesize;
 	target->mr_pool_size = target->scsi_host->can_queue;
 	target->indirect_size = target->sg_tablesize *
If both max_sectors and the queue_depth are high enough it can
happen that the MR pool is depleted temporarily. This causes
the SRP initiator to report mapping failures. Although the SRP
initiator recovers from such mapping failures, prevent that
this can happen by limiting max_sectors.

Reported-by: Laurence Oberman <loberman@redhat.com>
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/infiniband/ulp/srp/ib_srp.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)