From patchwork Tue Apr 26 10:31:09 2016
X-Patchwork-Submitter: Matias Bjørling
X-Patchwork-Id: 8937441
From: Matias Bjørling
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Matias Bjørling
Subject: [PATCH 4/5] lightnvm: make nvm_set_rqd_ppalist() aware of vblks
Date: Tue, 26 Apr 2016 12:31:09 +0200
Message-Id: <1461666670-30996-5-git-send-email-m@bjorling.me>
In-Reply-To: <1461666670-30996-1-git-send-email-m@bjorling.me>
References: <1461666670-30996-1-git-send-email-m@bjorling.me>

A virtual block (vblk) allows a single block entry to identify multiple
physical blocks. This is useful for metadata on devices whose media
exposes multiple planes: the physical blocks across all planes can be
managed as one vblk, reducing the required metadata to a fourth on
quad-plane media.

nvm_set_rqd_ppalist() automatically expands a ppa_list across all planes
when vblks are used. However, some use-cases address only single
physical blocks and must not have their ppa_list expanded. Therefore,
add a vblk parameter to nvm_set_rqd_ppalist() and only expand the
ppa_list when vblk is set.

Signed-off-by: Matias Bjørling
---
 drivers/lightnvm/core.c   | 31 +++++++++++++++++--------------
 drivers/lightnvm/sysblk.c |  2 +-
 include/linux/lightnvm.h  |  2 +-
 3 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index e6d7a98..de5db7b 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -251,33 +251,36 @@ void nvm_generic_to_addr_mode(struct nvm_dev *dev, struct nvm_rq *rqd)
 EXPORT_SYMBOL(nvm_generic_to_addr_mode);
 
 int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
-			struct ppa_addr *ppas, int nr_ppas)
+			struct ppa_addr *ppas, int nr_ppas, int vblk)
 {
 	int i, plane_cnt, pl_idx;
 
-	if (dev->plane_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
-		rqd->nr_pages = 1;
+	if ((!vblk || dev->plane_mode == NVM_PLANE_SINGLE) && nr_ppas == 1) {
+		rqd->nr_pages = nr_ppas;
 		rqd->ppa_addr = ppas[0];
 
 		return 0;
 	}
 
-	plane_cnt = dev->plane_mode;
-	rqd->nr_pages = plane_cnt * nr_ppas;
-
-	if (dev->ops->max_phys_sect < rqd->nr_pages)
-		return -EINVAL;
-
+	rqd->nr_pages = nr_ppas;
 	rqd->ppa_list = nvm_dev_dma_alloc(dev, GFP_KERNEL, &rqd->dma_ppa_list);
 	if (!rqd->ppa_list) {
 		pr_err("nvm: failed to allocate dma memory\n");
 		return -ENOMEM;
 	}
 
-	for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) {
+	if (!vblk) {
+		for (i = 0; i < nr_ppas; i++)
+			rqd->ppa_list[i] = ppas[i];
+	} else {
+		plane_cnt = dev->plane_mode;
+		rqd->nr_pages *= plane_cnt;
+
 		for (i = 0; i < nr_ppas; i++) {
-			ppas[i].g.pl = pl_idx;
-			rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppas[i];
+			for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) {
+				ppas[i].g.pl = pl_idx;
+				rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppas[i];
+			}
 		}
 	}
 
@@ -304,7 +307,7 @@ int nvm_erase_ppa(struct nvm_dev *dev, struct ppa_addr *ppas, int nr_ppas)
 
 	memset(&rqd, 0, sizeof(struct nvm_rq));
 
-	ret = nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas);
+	ret = nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas, 1);
 	if (ret)
 		return ret;
 
@@ -420,7 +423,7 @@ int nvm_submit_ppa(struct nvm_dev *dev, struct ppa_addr *ppa, int nr_ppas,
 	int ret;
 
 	memset(&rqd, 0, sizeof(struct nvm_rq));
 
-	ret = nvm_set_rqd_ppalist(dev, &rqd, ppa, nr_ppas);
+	ret = nvm_set_rqd_ppalist(dev, &rqd, ppa, nr_ppas, 1);
 	if (ret)
 		return ret;
 
diff --git a/drivers/lightnvm/sysblk.c b/drivers/lightnvm/sysblk.c
index bca6902..737fbc3 100644
--- a/drivers/lightnvm/sysblk.c
+++ b/drivers/lightnvm/sysblk.c
@@ -277,7 +277,7 @@ static int nvm_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s, int type)
 
 	memset(&rqd, 0, sizeof(struct nvm_rq));
 
-	nvm_set_rqd_ppalist(dev, &rqd, s->ppas, s->nr_ppas);
+	nvm_set_rqd_ppalist(dev, &rqd, s->ppas, s->nr_ppas, 1);
 	nvm_generic_to_addr_mode(dev, &rqd);
 
 	ret = dev->ops->set_bb_tbl(dev, &rqd, type);
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 16d4f2e..9ae0b7c 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -526,7 +526,7 @@ extern int nvm_submit_io(struct nvm_dev *, struct nvm_rq *);
 extern void nvm_generic_to_addr_mode(struct nvm_dev *, struct nvm_rq *);
 extern void nvm_addr_to_generic_mode(struct nvm_dev *, struct nvm_rq *);
 extern int nvm_set_rqd_ppalist(struct nvm_dev *, struct nvm_rq *,
-			struct ppa_addr *, int);
+			struct ppa_addr *, int, int);
 extern void nvm_free_rqd_ppalist(struct nvm_dev *, struct nvm_rq *);
 extern int nvm_erase_ppa(struct nvm_dev *, struct ppa_addr *, int);
 extern int nvm_erase_blk(struct nvm_dev *, struct nvm_block *);
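
Note (not part of the patch): a minimal sketch of how an in-kernel caller
could use the new vblk parameter. The function name example_build_rqd is
made up for illustration; only nvm_set_rqd_ppalist(), nvm_free_rqd_ppalist()
and the rqd fields shown in this series are assumed.

    #include <linux/lightnvm.h>
    #include <linux/string.h>

    static int example_build_rqd(struct nvm_dev *dev, struct nvm_rq *rqd,
                                 struct ppa_addr *ppas, int nr_ppas, int vblk)
    {
            int ret;

            memset(rqd, 0, sizeof(struct nvm_rq));

            /*
             * vblk = 1: each ppa is expanded across all planes, so
             * rqd->nr_pages becomes nr_ppas * dev->plane_mode.
             * vblk = 0: the ppa_list holds exactly the nr_ppas blocks given.
             */
            ret = nvm_set_rqd_ppalist(dev, rqd, ppas, nr_ppas, vblk);
            if (ret)
                    return ret;

            /* ... submit rqd to the device here ... */

            nvm_free_rqd_ppalist(dev, rqd);
            return 0;
    }

Callers that manage per-plane metadata themselves (e.g. bad block table
updates for a single plane) would pass vblk = 0 to keep the list as-is,
while erase/submit paths that operate on whole vblks pass 1, as the
patch does.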