From patchwork Thu Dec 6 15:45:59 2018
X-Patchwork-Submitter: Igor Konopko
X-Patchwork-Id: 10716219
From: Igor Konopko
To: mb@lightnvm.io
Cc: linux-block@vger.kernel.org, javier@cnexlabs.com, hans.holmberg@cnexlabs.com,
    igor.j.konopko@intel.com
Subject: [PATCH 2/2] lightnvm: pblk: Ensure that bio is not freed on recovery
Date: Thu, 6 Dec 2018 16:45:59 +0100
Message-Id: <20181206154559.5642-2-igor.j.konopko@intel.com>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20181206154559.5642-1-igor.j.konopko@intel.com>
References: <20181206154559.5642-1-igor.j.konopko@intel.com>
X-Mailing-List: linux-block@vger.kernel.org

When pblk is used with zero-sized metadata, the recovery process needs to
reference the last page of the bio. KASAN currently reports a use-after-free
in that case, since the bio is freed on IO completion. This patch takes an
additional reference on the bio to ensure that its memory can still be used
after IO completion. It also ensures that the same bio is not reused on the
retry_rq path.
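For illustration, the lifetime pattern the patch relies on looks roughly as
follows. This sketch is not part of the patch; submit_read_sync() and
use_last_page() are hypothetical stand-ins for pblk's synchronous submission
path and for the packed-metadata parsing that touches the bio pages after the
IO has completed:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/err.h>

/*
 * Sketch only (not part of the patch). submit_read_sync() and
 * use_last_page() are hypothetical helpers standing in for pblk's
 * synchronous read submission and the post-completion metadata parsing.
 */
static int recov_read_sketch(struct request_queue *q, void *data,
			     unsigned int len)
{
	struct bio *bio;
	int ret;

	bio = bio_map_kern(q, data, len, GFP_KERNEL);
	if (IS_ERR(bio))
		return PTR_ERR(bio);

	bio_set_op_attrs(bio, REQ_OP_READ, 0);
	bio_get(bio);			/* extra reference held across completion */

	ret = submit_read_sync(bio);	/* completion normally drops its reference */
	if (ret) {
		bio_put(bio);		/* reference from bio_map_kern() */
		bio_put(bio);		/* our extra reference */
		return ret;
	}

	use_last_page(bio);		/* safe: extra reference keeps the bio alive */
	bio_put(bio);			/* release the extra reference */
	return 0;
}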
Reported-by: Hans Holmberg
Signed-off-by: Igor Konopko
Tested-by: Hans Holmberg
---
 drivers/lightnvm/pblk-recovery.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
index 009faf5db40f..3fcf062d752c 100644
--- a/drivers/lightnvm/pblk-recovery.c
+++ b/drivers/lightnvm/pblk-recovery.c
@@ -376,12 +376,14 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 	rq_ppas = pblk->min_write_pgs;
 	rq_len = rq_ppas * geo->csecs;
 
+retry_rq:
 	bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);
 
 	bio->bi_iter.bi_sector = 0; /* internal bio */
 	bio_set_op_attrs(bio, REQ_OP_READ, 0);
+	bio_get(bio);
 
 	rqd->bio = bio;
 	rqd->opcode = NVM_OP_PREAD;
@@ -394,7 +396,6 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 	if (pblk_io_aligned(pblk, rq_ppas))
 		rqd->is_seq = 1;
 
-retry_rq:
 	for (i = 0; i < rqd->nr_ppas; ) {
 		struct ppa_addr ppa;
 		int pos;
@@ -417,6 +418,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 	if (ret) {
 		pblk_err(pblk, "I/O submission failed: %d\n", ret);
 		bio_put(bio);
+		bio_put(bio);
 		return ret;
 	}
 
@@ -428,19 +430,25 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 
 		if (padded) {
 			pblk_log_read_err(pblk, rqd);
+			bio_put(bio);
 			return -EINTR;
 		}
 
 		pad_distance = pblk_pad_distance(pblk, line);
 		ret = pblk_recov_pad_line(pblk, line, pad_distance);
-		if (ret)
+		if (ret) {
+			bio_put(bio);
 			return ret;
+		}
 
 		padded = true;
+		bio_put(bio);
 		goto retry_rq;
 	}
 
 	pblk_get_packed_meta(pblk, rqd);
+	bio_put(bio);
+
 	for (i = 0; i < rqd->nr_ppas; i++) {
 		struct pblk_sec_meta *meta = pblk_get_meta(pblk, meta_list, i);
 		u64 lba = le64_to_cpu(meta->lba);