Message ID | 20190605190836.32354-9-hch@lst.de (mailing list archive)
---|---
State | Not Applicable
Series | [01/13] nvme-pci: don't limit DMA segement size
On Wed, Jun 05, 2019 at 09:08:31PM +0200, Christoph Hellwig wrote:
> This ensures all proper DMA layer handling is taken care of by the
> SCSI midlayer.

Maybe not entirely related to this series, but it looks like the SCSI
layer is changing the device global dma_set_max_seg_size() - at least
in RDMA the dma device is being shared between many users, so we
really don't want SCSI to make this value smaller.

Can we do something about this?

Wondering about other values too, and the interaction with the new
combining stuff in umem.c

Thanks,
Jason
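[Editor's note: a minimal sketch, not from this series, of why the limit is
device-global.  dma_set_max_seg_size() and dma_get_max_seg_size() are the
real DMA API helpers; the surrounding function and the 2G/64k values are
purely illustrative, and the code assumes dev->dma_parms has already been
set up by the driver or bus.]

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/sizes.h>

/* Illustrative only: the segment-size limit lives in the struct device. */
static void show_shared_seg_limit(struct device *dma_dev)
{
	/* The RDMA side wants large segments ... */
	dma_set_max_seg_size(dma_dev, SZ_2G);

	/* ... but if a SCSI host sharing dma_dev lowers the limit ... */
	dma_set_max_seg_size(dma_dev, SZ_64K);

	/* ... every other user of dma_dev now sees 64k as well. */
	pr_info("max_seg_size is now %u\n", dma_get_max_seg_size(dma_dev));
}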
>> This ensures all proper DMA layer handling is taken care of by the
>> SCSI midlayer.
>
> Maybe not entirely related to this series, but it looks like the SCSI
> layer is changing the device global dma_set_max_seg_size() - at least
> in RDMA the dma device is being shared between many users, so we
> really don't want SCSI to make this value smaller.
>
> Can we do something about this?

srp seems to do the right thing:

	target_host->max_segment_size = ib_dma_max_seg_size(ibdev);

But iser does not, which means that scsi limits it to:
BLK_MAX_SEGMENT_SIZE (64k)

I can send a fix to iser.

> Wondering about other values too, and the interaction with the new
> combining stuff in umem.c

The only other value AFAICT is the dma_boundary that rdma llds don't
set...
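[Editor's note: a minimal sketch of the kind of iser fix mentioned above.
This is not the patch from this thread; the helper name is made up, and the
assumption is that it would run in iscsi_iser_session_create() before
iscsi_host_add().  ib_dma_max_seg_size() and Scsi_Host's max_segment_size
field are real; everything else is illustrative.]

#include <scsi/scsi_host.h>
#include <rdma/ib_verbs.h>

/*
 * Sketch only: advertise the RDMA device's real DMA segment limit to SCSI,
 * mirroring the srp line quoted above, so the host does not fall back to
 * BLK_MAX_SEGMENT_SIZE (64k).
 */
static void iser_example_set_max_segment_size(struct Scsi_Host *shost,
					      struct ib_device *ib_dev)
{
	shost->max_segment_size = ib_dma_max_seg_size(ib_dev);
}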
On Wed, Jun 05, 2019 at 05:22:35PM -0300, Jason Gunthorpe wrote:
> On Wed, Jun 05, 2019 at 09:08:31PM +0200, Christoph Hellwig wrote:
> > This ensures all proper DMA layer handling is taken care of by the
> > SCSI midlayer.
>
> Maybe not entirely related to this series, but it looks like the SCSI
> layer is changing the device global dma_set_max_seg_size() - at least
> in RDMA the dma device is being shared between many users, so we
> really don't want SCSI to make this value smaller.
>
> Can we do something about this?

We could do something about it as outlined in my mail - pass the
dma_params explicitly to the dma_map_sg call.  But that isn't really
suitable for a short term fix and will take a little more time.

Until we've sorted that out the device parameter needs to be set to
the smallest value supported.
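[Editor's note: to make the longer-term idea concrete, a purely hypothetical
sketch of per-mapping parameters.  Neither dma_map_params nor
dma_map_sg_params exists in the kernel; the names and fields are invented
here only to illustrate what "pass the dma_params explicitly" could mean.]

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical: limits supplied per mapping instead of per device. */
struct dma_map_params {
	unsigned int	max_segment_size;
	unsigned long	segment_boundary_mask;
};

/*
 * Hypothetical: a dma_map_sg() variant that takes the caller's limits, so
 * two users of the same device can map with different restrictions.
 */
int dma_map_sg_params(struct device *dev, struct scatterlist *sg,
		      int nents, enum dma_data_direction dir,
		      const struct dma_map_params *params);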
On Thu, Jun 06, 2019 at 08:24:41AM +0200, Christoph Hellwig wrote:
> On Wed, Jun 05, 2019 at 05:22:35PM -0300, Jason Gunthorpe wrote:
> > On Wed, Jun 05, 2019 at 09:08:31PM +0200, Christoph Hellwig wrote:
> > > This ensures all proper DMA layer handling is taken care of by the
> > > SCSI midlayer.
> >
> > Maybe not entirely related to this series, but it looks like the SCSI
> > layer is changing the device global dma_set_max_seg_size() - at least
> > in RDMA the dma device is being shared between many users, so we
> > really don't want SCSI to make this value smaller.
> >
> > Can we do something about this?
>
> We could do something about it as outlined in my mail - pass the
> dma_params explicitly to the dma_map_sg call.  But that isn't really
> suitable for a short term fix and will take a little more time.

Sounds good to me, having every dma mapping specify its restrictions
makes a lot more sense than a device global setting, IMHO.

In RDMA the restrictions to build a SGL, create a device queue or
build a MR are all a little different.

ie for MRs alignment of the post-IOMMU DMA address is very important
for performance as the MR logic can only build device huge pages out
of properly aligned DMA addresses.

While for SGLs we don't care about this, instead SGLs usually have the
32 bit per-element length limit in the HW that MRs do not.

> Until we've sorted that out the device parameter needs to be set to
> the smallest value supported.

smallest? largest?  We've been setting it to the largest value the
device can handle (ie 2G)

Jason
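[Editor's note: a sketch, not from this series, of where the MR alignment
matters.  ib_map_mr_sg() is the real verb; the wrapper function and the 2M
page size are illustrative assumptions.]

#include <rdma/ib_verbs.h>
#include <linux/sizes.h>

/*
 * Illustrative only: ask the driver to build the MR out of 2M device
 * pages.  That only works when the post-IOMMU DMA addresses in the SGL
 * are 2M aligned, which is why the alignment described above matters
 * for performance.
 */
static int example_map_mr_huge(struct ib_mr *mr, struct scatterlist *sg,
			       int sg_nents)
{
	return ib_map_mr_sg(mr, sg, sg_nents, NULL, SZ_2M);
}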
On Thu, Jun 06, 2019 at 09:59:35AM -0300, Jason Gunthorpe wrote:
> > Until we've sorted that out the device parameter needs to be set to
> > the smallest value supported.
>
> smallest? largest?  We've been setting it to the largest value the
> device can handle (ie 2G)

Well, in general we need the smallest value supported by any ULP,
because if any ULP can't support a larger segment size, we must not
allow the IOMMU to merge segments to that size.

That being said I can't really see why any RDMA ULP should limit the
size given how the MRs work.
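[Editor's note: a small illustrative sketch, not kernel code, of the
"smallest value wins" rule.  With a single device-global limit the only safe
setting is the minimum over all users, since the IOMMU may merge segments
right up to that limit.  The helper name is made up; the DMA API calls are
real.]

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/kernel.h>

/* Illustrative: never raise the shared limit above what any user allows. */
static void example_clamp_seg_size(struct device *dma_dev,
				   unsigned int ulp_max_seg)
{
	unsigned int cur = dma_get_max_seg_size(dma_dev);

	dma_set_max_seg_size(dma_dev, min(cur, ulp_max_seg));
}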
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index 9c185a8dabd3..841b66397a57 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -613,6 +613,7 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
 	struct Scsi_Host *shost;
 	struct iser_conn *iser_conn = NULL;
 	struct ib_conn *ib_conn;
+	struct ib_device *ib_dev;
 	u32 max_fr_sectors;
 
 	shost = iscsi_host_alloc(&iscsi_iser_sht, 0, 0);
@@ -643,16 +644,19 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
 	}
 
 	ib_conn = &iser_conn->ib_conn;
+	ib_dev = ib_conn->device->ib_device;
 	if (ib_conn->pi_support) {
-		u32 sig_caps = ib_conn->device->ib_device->attrs.sig_prot_cap;
+		u32 sig_caps = ib_dev->attrs.sig_prot_cap;
 
 		scsi_host_set_prot(shost, iser_dif_prot_caps(sig_caps));
 		scsi_host_set_guard(shost, SHOST_DIX_GUARD_IP |
 						SHOST_DIX_GUARD_CRC);
 	}
 
-	if (iscsi_host_add(shost,
-			   ib_conn->device->ib_device->dev.parent)) {
+	if (!(ib_dev->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG))
+		shost->virt_boundary_mask = ~MASK_4K;
+
+	if (iscsi_host_add(shost, ib_dev->dev.parent)) {
 		mutex_unlock(&iser_conn->state_mutex);
 		goto free_host;
 	}
@@ -958,30 +962,6 @@ static umode_t iser_attr_is_visible(int param_type, int param)
 	return 0;
 }
 
-static int iscsi_iser_slave_alloc(struct scsi_device *sdev)
-{
-	struct iscsi_session *session;
-	struct iser_conn *iser_conn;
-	struct ib_device *ib_dev;
-
-	mutex_lock(&unbind_iser_conn_mutex);
-
-	session = starget_to_session(scsi_target(sdev))->dd_data;
-	iser_conn = session->leadconn->dd_data;
-	if (!iser_conn) {
-		mutex_unlock(&unbind_iser_conn_mutex);
-		return -ENOTCONN;
-	}
-	ib_dev = iser_conn->ib_conn.device->ib_device;
-
-	if (!(ib_dev->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG))
-		blk_queue_virt_boundary(sdev->request_queue, ~MASK_4K);
-
-	mutex_unlock(&unbind_iser_conn_mutex);
-
-	return 0;
-}
-
 static struct scsi_host_template iscsi_iser_sht = {
 	.module                 = THIS_MODULE,
 	.name                   = "iSCSI Initiator over iSER",
@@ -994,7 +974,6 @@ static struct scsi_host_template iscsi_iser_sht = {
 	.eh_device_reset_handler= iscsi_eh_device_reset,
 	.eh_target_reset_handler = iscsi_eh_recover_target,
 	.target_alloc		= iscsi_target_alloc,
-	.slave_alloc            = iscsi_iser_slave_alloc,
 	.proc_name              = "iscsi_iser",
 	.this_id                = -1,
 	.track_queue_depth	= 1,
This ensures all proper DMA layer handling is taken care of by the
SCSI midlayer.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/infiniband/ulp/iser/iscsi_iser.c | 35 +++++-------------------
 1 file changed, 7 insertions(+), 28 deletions(-)