From patchwork Tue Oct 3 10:05:09 2017
X-Patchwork-Submitter: Hans Holmberg
X-Patchwork-Id: 9982115
From: Hans Holmberg
To: Matias Bjorling
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	Javier Gonzales, Hans Holmberg
Subject: [PATCH 5/9] lightnvm: pblk: consider bad sectors in emeta during recovery
Date: Tue, 3 Oct 2017 12:05:09 +0200
Message-Id: <1507025113-13351-6-git-send-email-hans.ml.holmberg@owltronix.com>
In-Reply-To: <1507025113-13351-1-git-send-email-hans.ml.holmberg@owltronix.com>
References: <1507025113-13351-1-git-send-email-hans.ml.holmberg@owltronix.com>

From: Hans Holmberg

When recovering lines, we need to consider that bad blocks in a line
affect the size of the emeta area.

Previously it was assumed that the emeta area would grow by the number
of sectors per page * the number of bad blocks in the line. This
assumption is not correct: the number of "extra" pages consumed can be
both smaller (depending on the emeta size) and bigger (depending on the
placement of the bad blocks).

Fix this by calculating the emeta start by iterating backwards through
the line, skipping ppas that map to bad blocks.

Also fix the data types used for ppa indices/counts in
pblk_recov_l2p_from_emeta - we should use u64.

Signed-off-by: Hans Holmberg
Reviewed-by: Javier González
---
 drivers/lightnvm/pblk-recovery.c | 44 +++++++++++++++++++++++++++-------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
index 74b3b86..b5a2275 100644
--- a/drivers/lightnvm/pblk-recovery.c
+++ b/drivers/lightnvm/pblk-recovery.c
@@ -133,16 +133,16 @@ static int pblk_recov_l2p_from_emeta(struct pblk *pblk, struct pblk_line *line)
 	struct pblk_emeta *emeta = line->emeta;
 	struct line_emeta *emeta_buf = emeta->buf;
 	__le64 *lba_list;
-	int data_start, data_end;
-	int nr_valid_lbas, nr_lbas = 0;
-	int i;
+	u64 data_start, data_end;
+	u64 nr_valid_lbas, nr_lbas = 0;
+	u64 i;
 
 	lba_list = pblk_recov_get_lba_list(pblk, emeta_buf);
 	if (!lba_list)
 		return 1;
 
 	data_start = pblk_line_smeta_start(pblk, line) + lm->smeta_sec;
-	data_end = lm->sec_per_line - lm->emeta_sec[0];
+	data_end = line->emeta_ssec;
 	nr_valid_lbas = le64_to_cpu(emeta_buf->nr_valid_lbas);
 
 	for (i = data_start; i < data_end; i++) {
@@ -172,8 +172,8 @@ static int pblk_recov_l2p_from_emeta(struct pblk *pblk, struct pblk_line *line)
 	}
 
 	if (nr_valid_lbas != nr_lbas)
-		pr_err("pblk: line %d - inconsistent lba list(%llu/%d)\n",
-				line->id, emeta_buf->nr_valid_lbas, nr_lbas);
+		pr_err("pblk: line %d - inconsistent lba list(%llu/%llu)\n",
+				line->id, nr_valid_lbas, nr_lbas);
 
 	line->left_msecs = 0;
 
@@ -827,11 +827,33 @@ static void pblk_recov_line_add_ordered(struct list_head *head,
 	__list_add(&line->list, t->list.prev, &t->list);
 }
 
-struct pblk_line *pblk_recov_l2p(struct pblk *pblk)
+static u64 pblk_line_emeta_start(struct pblk *pblk, struct pblk_line *line)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
 	struct pblk_line_meta *lm = &pblk->lm;
+	unsigned int emeta_secs;
+	u64 emeta_start;
+	struct ppa_addr ppa;
+	int pos;
+
+	emeta_secs = lm->emeta_sec[0];
+	emeta_start = lm->sec_per_line;
+
+	while (emeta_secs) {
+		emeta_start--;
+		ppa = addr_to_pblk_ppa(pblk, emeta_start, line->id);
+		pos = pblk_ppa_to_pos(geo, ppa);
+		if (!test_bit(pos, line->blk_bitmap))
+			emeta_secs--;
+	}
+
+	return emeta_start;
+}
+
+struct pblk_line *pblk_recov_l2p(struct pblk *pblk)
+{
+	struct pblk_line_meta *lm = &pblk->lm;
 	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	struct pblk_line *line, *tline, *data_line = NULL;
 	struct pblk_smeta *smeta;
@@ -930,15 +952,9 @@ struct pblk_line *pblk_recov_l2p(struct pblk *pblk)
 
 	/* Verify closed blocks and recover this portion of L2P table*/
 	list_for_each_entry_safe(line, tline, &recov_list, list) {
-		int off, nr_bb;
-
 		recovered_lines++;
 
-		/* Calculate where emeta starts based on the line bb */
-		off = lm->sec_per_line - lm->emeta_sec[0];
-		nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line);
-		off -= nr_bb * geo->sec_per_pl;
-		line->emeta_ssec = off;
+		line->emeta_ssec = pblk_line_emeta_start(pblk, line);
 		line->emeta = emeta;
 		memset(line->emeta->buf, 0, lm->emeta_len[0]);