Message ID | 1452206045-18332-1-git-send-email-mchristi@redhat.com (mailing list archive)
---|---
State | Accepted, archived
>>>>> "Mike" == mchristi <mchristi@redhat.com> writes:
Mike> Another iscsi target that cannot handle large IOs, but does not
Mike> tell us a limit.
Applied.
On 07/01/2016 23:34, mchristi@redhat.com wrote:
> From: Mike Christie <mchristi@redhat.com>
>
> Another iscsi target that cannot handle large IOs,
> but does not tell us a limit.
>
> The Synology iSCSI targets report:
>
> Block limits VPD page (SBC):
> Write same no zero (WSNZ): 0
> Maximum compare and write length: 0 blocks
> Optimal transfer length granularity: 0 blocks
> Maximum transfer length: 0 blocks
> Optimal transfer length: 0 blocks
> Maximum prefetch length: 0 blocks
> Maximum unmap LBA count: 0
> Maximum unmap block descriptor count: 0
> Optimal unmap granularity: 0
> Unmap granularity alignment valid: 0
> Unmap granularity alignment: 0
> Maximum write same length: 0x0 blocks
>
> and the size of the command it can handle seems to depend on how much
> memory it can allocate at the time. This results in IO errors when
> handling large IOs. This patch just has us use the old 1024 default
> sectors for this target by adding it to the scsi blacklist. We do
> not have good contacts with this vendor, so I have not been able to
> try and fix it on their side.

Synology is just (an old fork of?) LIO. IIRC I saw similar problems a
couple of years ago with LIO because iscsit_map_iovec maps everything a
page at a time and produced too large an iovec for the underlying
storage. I'm afraid you're going to get this for pretty much every user
of LIO.

Paolo
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi Paolo & MNC,

On Tue, 2016-01-12 at 18:54 +0100, Paolo Bonzini wrote:
>
> On 07/01/2016 23:34, mchristi@redhat.com wrote:
> > From: Mike Christie <mchristi@redhat.com>
> >
> > Another iscsi target that cannot handle large IOs,
> > but does not tell us a limit.
> >
> > The Synology iSCSI targets report:
> >
> > Block limits VPD page (SBC):
> > Write same no zero (WSNZ): 0
> > Maximum compare and write length: 0 blocks
> > Optimal transfer length granularity: 0 blocks
> > Maximum transfer length: 0 blocks
> > Optimal transfer length: 0 blocks
> > Maximum prefetch length: 0 blocks
> > Maximum unmap LBA count: 0
> > Maximum unmap block descriptor count: 0
> > Optimal unmap granularity: 0
> > Unmap granularity alignment valid: 0
> > Unmap granularity alignment: 0
> > Maximum write same length: 0x0 blocks
> >
> > and the size of the command it can handle seems to depend on how much
> > memory it can allocate at the time. This results in IO errors when
> > handling large IOs. This patch just has us use the old 1024 default
> > sectors for this target by adding it to the scsi blacklist. We do
> > not have good contacts with this vendor, so I have not been able to
> > try and fix it on their side.
>
> Synology is just (an old fork of?) LIO.

Last time I disassembled their build a few years back, Synology was
still using a 2010 vintage pre-mainline version of LIO 3.x along with a
substantial mess of extra junk.

> IIRC I saw similar problems a
> couple years ago with LIO because iscsit_map_iovec maps everything a
> page at a time and produced too large an iovec for the underlying
> storage. I'm afraid you're going to get this for pretty much every user
> of LIO.
>

Two points here.

We've been exposing backend dev->dev_attrib.hw_max_sectors settings for
block limits EVPD for FILEIO based on iov limits, and IBLOCK based on
queue_max_hw_sectors(), for some time now.

So initiators that honor block limits EVPD will work as expected.

However, it was only last year that commit 8f9b5654 was added to
generate OVERFLOW based on the backend driver limit, in order to play
nice with older initiators that do not honor block limits EVPD.
On 13/01/2016 10:33, Nicholas A. Bellinger wrote:
>>> This results in IO errors when
>>> handling large IOs. This patch just has us use the old 1024 default
>>> sectors for this target by adding it to the scsi blacklist. We do
>>> not have good contacts with this vendor, so I have not been able to
>>> try and fix it on their side.
>>
>> IIRC I saw similar problems a
>> couple years ago with LIO because iscsit_map_iovec maps everything a
>> page at a time and produced too large an iovec for the underlying
>> storage. I'm afraid you're going to get this for pretty much every user
>> of LIO.
>
> Two points here.
>
> We've been exposing backend dev->dev_attrib.hw_max_sectors settings for
> block limits EVPD for FILEIO based on iov limits, and IBLOCK based on
> queue_max_hw_sectors() for some time now.
>
> So initiators that honor block limits EVPD will work as expected.

What I was describing is more like the backend request_queue's
queue_max_segments influencing the backend's hw_max_sectors. Is that
covered as well?

Paolo
On Wed, 2016-01-13 at 10:41 +0100, Paolo Bonzini wrote:
>
> On 13/01/2016 10:33, Nicholas A. Bellinger wrote:
> >>> This results in IO errors when
> >>> handling large IOs. This patch just has us use the old 1024 default
> >>> sectors for this target by adding it to the scsi blacklist. We do
> >>> not have good contacts with this vendor, so I have not been able to
> >>> try and fix it on their side.
> >>
> >> IIRC I saw similar problems a
> >> couple years ago with LIO because iscsit_map_iovec maps everything a
> >> page at a time and produced too large an iovec for the underlying
> >> storage. I'm afraid you're going to get this for pretty much every user
> >> of LIO.
> >
> > Two points here.
> >
> > We've been exposing backend dev->dev_attrib.hw_max_sectors settings for
> > block limits EVPD for FILEIO based on iov limits, and IBLOCK based on
> > queue_max_hw_sectors() for some time now.
> >
> > So initiators that honor block limits EVPD will work as expected.
>
> What I was describing is more like the backend request_queue's
> queue_max_segments influencing the backend's hw_max_sectors. Is that
> covered as well?

Nope, or at least not in the iblock_configure_device() code.

The MAXIMUM TRANSFER LENGTH in block limits EVPD for IBLOCK is
queue_max_hw_sectors() * bdev_logical_block_size(), and
queue_max_segments() is not considered atm.

Is there a case where MTL needs to be the smaller of the two..?
On 13/01/2016 10:56, Nicholas A. Bellinger wrote:
> Nope, or at least not in the iblock_configure_device() code.
>
> The MAXIMUM TRANSFER LENGTH in block limits EVPD for IBLOCK is
> queue_max_hw_sectors() * bdev_logical_block_size(), and
> queue_max_segments() is not considered atm.
>
> Is there a case where MTL needs to be the smaller of the two..?

Given enough fragmentation of the t_data_sg, the actual MTL will be
queue_max_segments() * PAGE_SIZE bytes. I think it's a bug in LIO that
target_alloc_sgl always does order-0 allocation. But until that is
fixed, LIO will be susceptible to this problem, at least for PSCSI
backends (for IBLOCK and FILEIO, the block and VFS layers can always
split one I/O into multiple requests).

Paolo
diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
index 2c1160c7..47b9d13 100644
--- a/drivers/scsi/scsi_devinfo.c
+++ b/drivers/scsi/scsi_devinfo.c
@@ -227,6 +227,7 @@ static struct {
 	{"Promise", "VTrak E610f", NULL, BLIST_SPARSELUN | BLIST_NO_RSOC},
 	{"Promise", "", NULL, BLIST_SPARSELUN},
 	{"QNAP", "iSCSI Storage", NULL, BLIST_MAX_1024},
+	{"SYNOLOGY", "iSCSI Storage", NULL, BLIST_MAX_1024},
 	{"QUANTUM", "XP34301", "1071", BLIST_NOTQ},
 	{"REGAL", "CDC-4X", NULL, BLIST_MAX5LUN | BLIST_SINGLELUN},
 	{"SanDisk", "ImageMate CF-SD1", NULL, BLIST_FORCELUN},