Message ID | 20170309052829.GA854@bbox (mailing list archive)
---|---
State | New, archived
Hi Jens,

It seems you missed this. Could you handle it? Thanks.

On Thu, Mar 9, 2017 at 2:28 PM, Minchan Kim <minchan@kernel.org> wrote:
< snip >
> Jens,
>
> Could you replace the one merged with this? And I don't want to add a
> stable mark to this patch because I feel it needs enough testing on a
> 64K page system, which I don't have. ;(
>
> From bb73e75ab0e21016f60858fd61e7dc6a6813e359 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Thu, 9 Mar 2017 14:00:40 +0900
> Subject: [PATCH] zram: handle multiple pages attached bio's bvec
>
> Johannes Thumshirn reported that the system panics when using an NVMe
> over Fabrics loopback target with zram.
>
> The reason is that zram expects each bvec in a bio to contain a single
> page, but nvme can attach a huge bulk of pages to the bio's bvec, so
> zram's index arithmetic can go wrong and the resulting out-of-bounds
> access causes the panic.
>
> This patch solves the problem by removing the limit that a bvec must
> contain only a single page.
>
> Cc: Hannes Reinecke <hare@suse.com>
> Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
> Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
> I don't add a stable mark intentionally because I think it's rather
> risky without enough testing on a 64K page system (i.e., the partial
> IO part).
>
> Thanks for the help, Johannes and Hannes!!
>
>  drivers/block/zram/zram_drv.c | 37 ++++++++++---------------------------
>  1 file changed, 10 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 01944419b1f3..fefdf260503a 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -137,8 +137,7 @@ static inline bool valid_io_request(struct zram *zram,
>
>  static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
>  {
> -	if (*offset + bvec->bv_len >= PAGE_SIZE)
> -		(*index)++;
> +	*index += (*offset + bvec->bv_len) / PAGE_SIZE;
>  	*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
>  }
>
> @@ -838,34 +837,20 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
>  	}
>
>  	bio_for_each_segment(bvec, bio, iter) {
> -		int max_transfer_size = PAGE_SIZE - offset;
> -
> -		if (bvec.bv_len > max_transfer_size) {
> -			/*
> -			 * zram_bvec_rw() can only make operation on a single
> -			 * zram page. Split the bio vector.
> -			 */
> -			struct bio_vec bv;
> -
> -			bv.bv_page = bvec.bv_page;
> -			bv.bv_len = max_transfer_size;
> -			bv.bv_offset = bvec.bv_offset;
> +		struct bio_vec bv = bvec;
> +		unsigned int remained = bvec.bv_len;
>
> +		do {
> +			bv.bv_len = min_t(unsigned int, PAGE_SIZE, remained);
>  			if (zram_bvec_rw(zram, &bv, index, offset,
> -					op_is_write(bio_op(bio))) < 0)
> +					 op_is_write(bio_op(bio))) < 0)
>  				goto out;
>
> -			bv.bv_len = bvec.bv_len - max_transfer_size;
> -			bv.bv_offset += max_transfer_size;
> -			if (zram_bvec_rw(zram, &bv, index + 1, 0,
> -					op_is_write(bio_op(bio))) < 0)
> -				goto out;
> -		} else
> -			if (zram_bvec_rw(zram, &bvec, index, offset,
> -					op_is_write(bio_op(bio))) < 0)
> -				goto out;
> +			bv.bv_offset += bv.bv_len;
> +			remained -= bv.bv_len;
>
> -		update_position(&index, &offset, &bvec);
> +			update_position(&index, &offset, &bv);
> +		} while (remained);
>  	}
>
>  	bio_endio(bio);
> @@ -882,8 +867,6 @@ static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
>  {
>  	struct zram *zram = queue->queuedata;
>
> -	blk_queue_split(queue, &bio, queue->bio_split);
> -
>  	if (!valid_io_request(zram, bio->bi_iter.bi_sector,
>  			bio->bi_iter.bi_size)) {
>  		atomic64_inc(&zram->stats.invalid_io);
> --
> 2.7.4
On 03/30/2017 09:08 AM, Minchan Kim wrote:
> Hi Jens,
>
> It seems you missed this.
> Could you handle it?

I can, but I'm a little confused. The comment talks about replacing the
one I merged with this one, which I can't do. I'm assuming you are
talking about this commit:

commit 0bc315381fe9ed9fb91db8b0e82171b645ac008f
Author: Johannes Thumshirn <jthumshirn@suse.de>
Date:   Mon Mar 6 11:23:35 2017 +0100

    zram: set physical queue limits to avoid array out of bounds accesses

which is in mainline. The patch still applies, though.

Do we really REALLY need this for 4.11, or can we queue it for 4.12 and
mark it stable?
On Thu, Mar 30, 2017 at 09:35:56AM -0600, Jens Axboe wrote:
> On 03/30/2017 09:08 AM, Minchan Kim wrote:
> > Hi Jens,
> >
> > It seems you missed this.
> > Could you handle it?
>
> I can, but I'm a little confused. The comment talks about replacing
> the one I merged with this one, which I can't do. I'm assuming you
> are talking about this commit:

Right.

> commit 0bc315381fe9ed9fb91db8b0e82171b645ac008f
> Author: Johannes Thumshirn <jthumshirn@suse.de>
> Date:   Mon Mar 6 11:23:35 2017 +0100
>
>     zram: set physical queue limits to avoid array out of bounds accesses
>
> which is in mainline. The patch still applies, though.

You mean it's already in mainline, so you cannot replace it but can
revert it. Right? If so, please revert it and merge this one.

> Do we really REALLY need this for 4.11, or can we queue it for 4.12
> and mark it stable?

It's not urgent because the one in mainline fixes the problem, so I'm
okay with 4.12, but I don't want to mark it as -stable.

Thanks!
On 03/30/2017 05:45 PM, Minchan Kim wrote:
> On Thu, Mar 30, 2017 at 09:35:56AM -0600, Jens Axboe wrote:
>> On 03/30/2017 09:08 AM, Minchan Kim wrote:
>>> Hi Jens,
>>>
>>> It seems you missed this.
>>> Could you handle it?
>>
>> I can, but I'm a little confused. The comment talks about replacing
>> the one I merged with this one, which I can't do. I'm assuming you
>> are talking about this commit:
>
> Right.
>
>> commit 0bc315381fe9ed9fb91db8b0e82171b645ac008f
>> Author: Johannes Thumshirn <jthumshirn@suse.de>
>> Date:   Mon Mar 6 11:23:35 2017 +0100
>>
>>     zram: set physical queue limits to avoid array out of bounds accesses
>>
>> which is in mainline. The patch still applies, though.
>
> You mean it's already in mainline, so you cannot replace it but can
> revert it. Right? If so, please revert it and merge this one.

Let's please fold it into the other patch. That's cleaner, and it makes
logical sense.

>> Do we really REALLY need this for 4.11, or can we queue it for 4.12
>> and mark it stable?
>
> It's not urgent because the one in mainline fixes the problem, so I'm
> okay with 4.12, but I don't want to mark it as -stable.

OK good, please resend with the two-line revert in your current patch,
and I'll get it queued up for 4.12.
Hi Jens,

On Thu, Mar 30, 2017 at 07:38:26PM -0600, Jens Axboe wrote:
> On 03/30/2017 05:45 PM, Minchan Kim wrote:
> > On Thu, Mar 30, 2017 at 09:35:56AM -0600, Jens Axboe wrote:
> > > On 03/30/2017 09:08 AM, Minchan Kim wrote:
> > > > Hi Jens,
> > > >
> > > > It seems you missed this.
> > > > Could you handle it?
> > >
> > > I can, but I'm a little confused. The comment talks about
> > > replacing the one I merged with this one, which I can't do. I'm
> > > assuming you are talking about this commit:
> >
> > Right.
> >
> > > commit 0bc315381fe9ed9fb91db8b0e82171b645ac008f
> > > Author: Johannes Thumshirn <jthumshirn@suse.de>
> > > Date:   Mon Mar 6 11:23:35 2017 +0100
> > >
> > >     zram: set physical queue limits to avoid array out of bounds accesses
> > >
> > > which is in mainline. The patch still applies, though.
> >
> > You mean it's already in mainline, so you cannot replace it but can
> > revert it. Right? If so, please revert it and merge this one.
>
> Let's please fold it into the other patch. That's cleaner, and it
> makes logical sense.

Understood.

> > > Do we really REALLY need this for 4.11, or can we queue it for
> > > 4.12 and mark it stable?
> >
> > It's not urgent because the one in mainline fixes the problem, so
> > I'm okay with 4.12, but I don't want to mark it as -stable.
>
> OK good, please resend with the two-line revert in your current
> patch, and I'll get it queued up for 4.12.

Yeb. Now that I think about it, it would be better to handle this via
Andrew's tree, because Andrew has been handling zram's patches and I
have several pending ones based on it. So I will send a new patchset
with it to Andrew.

Thanks!