
[1/1] scsi: add Synology to 1024 sector blacklist

Message ID 1452206045-18332-1-git-send-email-mchristi@redhat.com (mailing list archive)
State Accepted, archived

Commit Message

Mike Christie Jan. 7, 2016, 10:34 p.m. UTC
From: Mike Christie <mchristi@redhat.com>

Another iscsi target that cannot handle large IOs,
but does not tell us a limit.

The Synology iSCSI targets report:

Block limits VPD page (SBC):
  Write same no zero (WSNZ): 0
  Maximum compare and write length: 0 blocks
  Optimal transfer length granularity: 0 blocks
  Maximum transfer length: 0 blocks
  Optimal transfer length: 0 blocks
  Maximum prefetch length: 0 blocks
  Maximum unmap LBA count: 0
  Maximum unmap block descriptor count: 0
  Optimal unmap granularity: 0
  Unmap granularity alignment valid: 0
  Unmap granularity alignment: 0
  Maximum write same length: 0x0 blocks

and the size of the command it can handle seems to depend on how much
memory it can allocate at the time. This results in IO errors when
handling large IOs. This patch just has us use the old 1024 default
sectors for this target by adding it to the scsi blacklist. We do
not have good contacts with this vendor, so I have not been able to
try and fix it on their side.

I posted this a long while back, but it was not merged. This
version just fixes up the merge/patch failures in the original
version.

Reported-by: Ancoron Luciferis <ancoron.luciferis@googlemail.com>
Reported-by: Michael Meyers <steltek@tcnnet.com>
Signed-off-by: Mike Christie <mchristi@redhat.com>

---
 drivers/scsi/scsi_devinfo.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Martin K. Petersen Jan. 8, 2016, 2:44 a.m. UTC | #1
>>>>> "Mike" == mchristi  <mchristi@redhat.com> writes:

Mike> Another iscsi target that cannot handle large IOs, but does not
Mike> tell us a limit.

Applied.
Paolo Bonzini Jan. 12, 2016, 5:54 p.m. UTC | #2
On 07/01/2016 23:34, mchristi@redhat.com wrote:
> From: Mike Christie <mchristi@redhat.com>
> 
> Another iscsi target that cannot handle large IOs,
> but does not tell us a limit.
> 
> The Synology iSCSI targets report:
> 
> Block limits VPD page (SBC):
>   Write same no zero (WSNZ): 0
>   Maximum compare and write length: 0 blocks
>   Optimal transfer length granularity: 0 blocks
>   Maximum transfer length: 0 blocks
>   Optimal transfer length: 0 blocks
>   Maximum prefetch length: 0 blocks
>   Maximum unmap LBA count: 0
>   Maximum unmap block descriptor count: 0
>   Optimal unmap granularity: 0
>   Unmap granularity alignment valid: 0
>   Unmap granularity alignment: 0
>   Maximum write same length: 0x0 blocks
> 
> and the size of the command it can handle seems to depend on how much
> memory it can allocate at the time. This results in IO errors when
> handling large IOs. This patch just has us use the old 1024 default
> sectors for this target by adding it to the scsi blacklist. We do
> not have good contacts with this vendor, so I have not been able to
> try and fix it on their side.

Synology is just (an old fork of?) LIO.  IIRC I saw similar problems a
couple years ago with LIO because iscsit_map_iovec maps everything a
page at a time and produced too large an iovec for the underlying
storage.  I'm afraid you're going to get this for pretty much every user
of LIO.

Paolo
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Nicholas A. Bellinger Jan. 13, 2016, 9:33 a.m. UTC | #3
Hi Paolo & MNC,

On Tue, 2016-01-12 at 18:54 +0100, Paolo Bonzini wrote:
> 
> On 07/01/2016 23:34, mchristi@redhat.com wrote:
> > From: Mike Christie <mchristi@redhat.com>
> > 
> > Another iscsi target that cannot handle large IOs,
> > but does not tell us a limit.
> > 
> > The Synology iSCSI targets report:
> > 
> > Block limits VPD page (SBC):
> >   Write same no zero (WSNZ): 0
> >   Maximum compare and write length: 0 blocks
> >   Optimal transfer length granularity: 0 blocks
> >   Maximum transfer length: 0 blocks
> >   Optimal transfer length: 0 blocks
> >   Maximum prefetch length: 0 blocks
> >   Maximum unmap LBA count: 0
> >   Maximum unmap block descriptor count: 0
> >   Optimal unmap granularity: 0
> >   Unmap granularity alignment valid: 0
> >   Unmap granularity alignment: 0
> >   Maximum write same length: 0x0 blocks
> > 
> > and the size of the command it can handle seems to depend on how much
> > memory it can allocate at the time. This results in IO errors when
> > handling large IOs. This patch just has us use the old 1024 default
> > sectors for this target by adding it to the scsi blacklist. We do
> > not have good contacts with this vendor, so I have not been able to
> > try and fix it on their side.
> 
> Synology is just (an old fork of?) LIO.

Last time I disassembled their build a few years back, Synology was
still using a 2010-vintage pre-mainline version of LIO 3.x along with
a substantial mess of extra junk.

> IIRC I saw similar problems a
> couple years ago with LIO because iscsit_map_iovec maps everything a
> page at a time and produced too large an iovec for the underlying
> storage.  I'm afraid you're going to get this for pretty much every user
> of LIO.
> 

Two points here.

We've been exposing backend dev->dev_attrib.hw_max_sectors settings for
block limits EVPD for FILEIO based on iov limits, and IBLOCK based on
queue_max_hw_sectors() for some time now.

So initiators that honor block limits EVPD will work as expected.

However, it was only last year that commit 8f9b5654 was added to
generate OVERFLOW based on the backend driver limit, in order to play
nice with older initiators that do not honor block limits EVPD.

Paolo Bonzini Jan. 13, 2016, 9:41 a.m. UTC | #4
On 13/01/2016 10:33, Nicholas A. Bellinger wrote:
>>> This results in IO errors when
>>> handling large IOs. This patch just has us use the old 1024 default
>>> sectors for this target by adding it to the scsi blacklist. We do
>>> not have good contacts with this vendor, so I have not been able to
>>> try and fix it on their side.
>>
>> IIRC I saw similar problems a
>> couple years ago with LIO because iscsit_map_iovec maps everything a
>> page at a time and produced too large an iovec for the underlying
>> storage.  I'm afraid you're going to get this for pretty much every user
>> of LIO.
> 
> Two points here.
> 
> We've been exposing backend dev->dev_attrib.hw_max_sectors settings for
> block limits EVPD for FILEIO based on iov limits, and IBLOCK based on
> queue_max_hw_sectors() for some time now.
> 
> So initiators that honor block limits EVPD will work as expected.

What I was describing is more like the backend request_queue's
queue_max_segments influencing the backend's hw_max_sectors.  Is that
covered as well?

Paolo
Nicholas A. Bellinger Jan. 13, 2016, 9:56 a.m. UTC | #5
On Wed, 2016-01-13 at 10:41 +0100, Paolo Bonzini wrote:
> 
> On 13/01/2016 10:33, Nicholas A. Bellinger wrote:
> >>> This results in IO errors when
> >>> handling large IOs. This patch just has us use the old 1024 default
> >>> sectors for this target by adding it to the scsi blacklist. We do
> >>> not have good contacts with this vendor, so I have not been able to
> >>> try and fix it on their side.
> >>
> >> IIRC I saw similar problems a
> >> couple years ago with LIO because iscsit_map_iovec maps everything a
> >> page at a time and produced too large an iovec for the underlying
> >> storage.  I'm afraid you're going to get this for pretty much every user
> >> of LIO.
> > 
> > Two points here.
> > 
> > We've been exposing backend dev->dev_attrib.hw_max_sectors settings for
> > block limits EVPD for FILEIO based on iov limits, and IBLOCK based on
> > queue_max_hw_sectors() for some time now.
> > 
> > So initiators that honor block limits EVPD will work as expected.
> 
> What I was describing is more like the backend request_queue's
> queue_max_segments influencing the backend's hw_max_sectors.  Is that
> covered as well?

Nope, or at least not in iblock_configure_device() code.

The MAXIMUM TRANSFER LENGTH in block limits EVPD for IBLOCK is
queue_max_hw_sectors() * bdev_logical_block_size, and
queue_max_segments() is not considered atm.

Is there a case where MTL needs to be the smaller of the two..?

Paolo Bonzini Jan. 13, 2016, 10:27 a.m. UTC | #6
On 13/01/2016 10:56, Nicholas A. Bellinger wrote:
> Nope, or at least not in iblock_configure_device() code.
> 
> The MAXIMUM TRANSFER LENGTH in block limits EVPD for IBLOCK is
> queue_max_hw_sectors() * bdev_logical_block_size, and
> queue_max_segments() is not considered atm.
> 
> Is there a case where MTL needs to be the smaller of the two..?

Given enough fragmentation of the t_data_sg, the actual MTL will be
queue_max_segments() * PAGE_SIZE bytes.

I think it's a bug in LIO that target_alloc_sgl always does order-0
allocation.  But until that is fixed, LIO will be susceptible to this
problem, at least for PSCSI backends (for IBLOCK and FILEIO, the block
and VFS layers can always split one I/O into multiple requests).

Paolo

Patch

diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
index 2c1160c7..47b9d13 100644
--- a/drivers/scsi/scsi_devinfo.c
+++ b/drivers/scsi/scsi_devinfo.c
@@ -227,6 +227,7 @@  static struct {
 	{"Promise", "VTrak E610f", NULL, BLIST_SPARSELUN | BLIST_NO_RSOC},
 	{"Promise", "", NULL, BLIST_SPARSELUN},
 	{"QNAP", "iSCSI Storage", NULL, BLIST_MAX_1024},
+	{"SYNOLOGY", "iSCSI Storage", NULL, BLIST_MAX_1024},
 	{"QUANTUM", "XP34301", "1071", BLIST_NOTQ},
 	{"REGAL", "CDC-4X", NULL, BLIST_MAX5LUN | BLIST_SINGLELUN},
 	{"SanDisk", "ImageMate CF-SD1", NULL, BLIST_FORCELUN},