From patchwork Wed Aug 29 10:09:28 2018
X-Patchwork-Id: 10579773
From: Javier González
To: mb@lightnvm.io
Cc: igor.j.konopko@intel.com, marcin.dziegielewski@intel.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Javier González
Subject: [PATCH 1/3] lightnvm: use internal allocation for chunk log page
Date: Wed, 29 Aug 2018 12:09:28 +0200
Message-Id: <1535537370-10729-2-git-send-email-javier@cnexlabs.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1535537370-10729-1-git-send-email-javier@cnexlabs.com>
References: <1535537370-10729-1-git-send-email-javier@cnexlabs.com>

The lightnvm subsystem provides helpers to retrieve chunk metadata, where
the target needs to provide a buffer to store the metadata. An implicit
assumption is that this buffer is contiguous and can be used to retrieve
the data from the device. If the device exposes too many chunks, the
kmalloc of this buffer might fail, thus failing instance creation.

This patch removes this assumption by implementing an internal buffer in
the lightnvm subsystem to retrieve chunk metadata. Targets can then use
virtual memory allocations. Since this is a target API change, adapt pblk
accordingly.

Signed-off-by: Javier González
---
 drivers/lightnvm/pblk-core.c |  4 ++--
 drivers/lightnvm/pblk-init.c |  2 +-
 drivers/nvme/host/lightnvm.c | 23 +++++++++++++++--------
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index fdcbeb920c9e..a311cc29afd8 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -120,7 +120,7 @@ static void pblk_end_io_erase(struct nvm_rq *rqd)
 /*
  * Get information for all chunks from the device.
  *
- * The caller is responsible for freeing the returned structure
+ * The caller is responsible for freeing (vmalloc) the returned structure
  */
 struct nvm_chk_meta *pblk_get_chunk_meta(struct pblk *pblk)
 {
@@ -134,7 +134,7 @@ struct nvm_chk_meta *pblk_get_chunk_meta(struct pblk *pblk)
     ppa.ppa = 0;
     len = geo->all_chunks * sizeof(*meta);
-    meta = kzalloc(len, GFP_KERNEL);
+    meta = vzalloc(len);
     if (!meta)
         return ERR_PTR(-ENOMEM);
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index e0db6de137d6..a99854439224 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -983,7 +983,7 @@ static int pblk_lines_init(struct pblk *pblk)
     pblk_set_provision(pblk, nr_free_chks);
-    kfree(chunk_meta);
+    vfree(chunk_meta);
     return 0;
 fail_free_lines:
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 2c96e7fcdcac..5bfa354c5dd5 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -579,7 +579,7 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
     struct nvm_geo *geo = &ndev->geo;
     struct nvme_ns *ns = ndev->q->queuedata;
     struct nvme_ctrl *ctrl = ns->ctrl;
-    struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta;
+    struct nvme_nvm_chk_meta *dev_meta, *dev_meta_off;
     struct ppa_addr ppa;
     size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
     size_t log_pos, offset, len;
@@ -591,6 +591,10 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
      */
     max_len = min_t(unsigned int, ctrl->max_hw_sectors << 9, 256 * 1024);
+    dev_meta = kmalloc(max_len, GFP_KERNEL);
+    if (!dev_meta)
+        return -ENOMEM;
+
     /* Normalize lba address space to obtain log offset */
     ppa.ppa = slba;
     ppa = dev_to_generic_addr(ndev, ppa);
@@ -604,6 +608,9 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
     while (left) {
         len = min_t(unsigned int, left, max_len);
+        memset(dev_meta, 0, max_len);
+        dev_meta_off = dev_meta;
+
         ret = nvme_get_log_ext(ctrl, ns, NVME_NVM_LOG_REPORT_CHUNK,
                 dev_meta, len, offset);
         if (ret) {
@@ -612,15 +619,15 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
         }
         for (i = 0; i < len; i += sizeof(struct nvme_nvm_chk_meta)) {
-            meta->state = dev_meta->state;
-            meta->type = dev_meta->type;
-            meta->wi = dev_meta->wi;
-            meta->slba = le64_to_cpu(dev_meta->slba);
-            meta->cnlb = le64_to_cpu(dev_meta->cnlb);
-            meta->wp = le64_to_cpu(dev_meta->wp);
+            meta->state = dev_meta_off->state;
+            meta->type = dev_meta_off->type;
+            meta->wi = dev_meta_off->wi;
+            meta->slba = le64_to_cpu(dev_meta_off->slba);
+            meta->cnlb = le64_to_cpu(dev_meta_off->cnlb);
+            meta->wp = le64_to_cpu(dev_meta_off->wp);
             meta++;
-            dev_meta++;
+            dev_meta_off++;
         }
         offset += len;
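For context, below is a minimal user-space sketch of the allocation split this
patch moves to: the caller's chunk table may be virtually contiguous (the
vzalloc above), while the transport path copies the report-chunk log through a
small, contiguous bounce buffer bounded by the transfer limit. All names and
the fetch_log() stub are invented for the illustration; they are not part of
the kernel API (the kernel uses nvme_get_log_ext() and per-field le64_to_cpu()
conversion instead of the memcpy shown here).

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct chk_meta {                       /* stand-in for struct nvm_chk_meta */
    uint64_t slba, cnlb, wp;
};

#define MAX_XFER (256 * 1024)           /* per-command transfer cap, as above */

/* Stub for the device access; always "returns" a zeroed log page. */
static int fetch_log(void *dst, size_t offset, size_t len)
{
    (void)offset;
    memset(dst, 0, len);
    return 0;
}

/* Copy the whole table in MAX_XFER-sized pieces through a bounce buffer. */
static int get_chunk_meta(struct chk_meta *table, size_t nchks)
{
    size_t left = nchks * sizeof(*table), offset = 0;
    void *bounce = malloc(MAX_XFER);    /* kmalloc'd bounce buffer in the kernel */

    if (!bounce)
        return -1;

    while (left) {
        size_t len = left < MAX_XFER ? left : MAX_XFER;

        if (fetch_log(bounce, offset, len)) {
            free(bounce);
            return -1;
        }
        /* The kernel converts each field's endianness here. */
        memcpy((char *)table + offset, bounce, len);

        offset += len;
        left -= len;
    }

    free(bounce);
    return 0;
}

int main(void)
{
    struct chk_meta *table = calloc(1000, sizeof(*table)); /* vzalloc'd in the kernel */

    if (!table)
        return 1;
    get_chunk_meta(table, 1000);
    free(table);
    return 0;
}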
From patchwork Wed Aug 29 10:09:29 2018
X-Patchwork-Id: 10579771
From: Javier González
To: mb@lightnvm.io
Cc: igor.j.konopko@intel.com, marcin.dziegielewski@intel.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Javier González
Subject: [PATCH 2/3] lightnvm: do not update csecs and sos on 1.2
Date: Wed, 29 Aug 2018 12:09:29 +0200
Message-Id: <1535537370-10729-3-git-send-email-javier@cnexlabs.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1535537370-10729-1-git-send-email-javier@cnexlabs.com>
References: <1535537370-10729-1-git-send-email-javier@cnexlabs.com>

In the OCSSD 2.0 spec, the sector and metadata sizes are reported through
the standard NVMe identify command, so the lightnvm subsystem needs to
update this information in the geometry structure at bootup. Since 1.2
devices report these values through the OCSSD geometry identify command
instead, skip the update in that case: it is unnecessary, and it can
corrupt the geometry if the device does not report the NVMe sizes
correctly (which the OCSSD 1.2 spec does not require).

Signed-off-by: Javier González
---
 drivers/nvme/host/lightnvm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 5bfa354c5dd5..33ed09f8410e 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -980,6 +980,9 @@ void nvme_nvm_update_nvm_info(struct nvme_ns *ns)
     struct nvm_dev *ndev = ns->ndev;
     struct nvm_geo *geo = &ndev->geo;
+    if (geo->version == NVM_OCSSD_SPEC_12)
+        return;
+
     geo->csecs = 1 << ns->lba_shift;
     geo->sos = ns->ms;
 }
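Schematically, the guarded update amounts to the small user-space model below.
The struct and enum names are simplified stand-ins for the kernel's nvm_geo
and nvme_ns fields, invented here purely for illustration.

enum { OCSSD_SPEC_12 = 1, OCSSD_SPEC_20 = 2 };      /* illustrative values */

struct geo  { int version; unsigned int csecs, sos; };
struct nsid { unsigned int lba_shift, ms; };

/* Mirror of the logic above: only 2.0 devices take the identify-derived sizes. */
void update_nvm_info(struct geo *geo, const struct nsid *ns)
{
    if (geo->version == OCSSD_SPEC_12)
        return;                         /* keep the geometry-reported csecs/sos */

    geo->csecs = 1u << ns->lba_shift;   /* sector size from NVMe identify */
    geo->sos = ns->ms;                  /* OOB (metadata) size from NVMe identify */
}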
From patchwork Wed Aug 29 10:09:30 2018
X-Patchwork-Id: 10579767
From: Javier González
To: mb@lightnvm.io
Cc: igor.j.konopko@intel.com, marcin.dziegielewski@intel.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Javier González
Subject: [PATCH 3/3] lightnvm: pblk: support variable OOB size
Date: Wed, 29 Aug 2018 12:09:30 +0200
Message-Id: <1535537370-10729-4-git-send-email-javier@cnexlabs.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1535537370-10729-1-git-send-email-javier@cnexlabs.com>
References: <1535537370-10729-1-git-send-email-javier@cnexlabs.com>

pblk uses 8 bytes of the per-sector out-of-band (OOB) area exposed by the
device to store the lba mapped to the given physical sector. This is used
for recovery purposes. Since first-generation OCSSD devices exposed 16
bytes, pblk used a hard-coded structure for this purpose.

This patch relaxes the 16-byte assumption and uses the metadata size
reported by the device to lay out the metadata for the vector commands.
This adds support for arbitrary metadata sizes, as long as they are at
least 8 bytes. Note that this patch does not address the case in which the
device does not expose an OOB area; pblk creation still fails in that
case.
Signed-off-by: Javier González
---
 drivers/lightnvm/pblk-core.c     | 56 ++++++++++++++++++++++++++++++----------
 drivers/lightnvm/pblk-init.c     | 14 ++++++++++
 drivers/lightnvm/pblk-map.c      | 19 +++++++++-----
 drivers/lightnvm/pblk-read.c     | 55 +++++++++++++++++++++++++--------------
 drivers/lightnvm/pblk-recovery.c | 34 +++++++++++++++++-------
 drivers/lightnvm/pblk.h          | 18 ++++++++++---
 6 files changed, 143 insertions(+), 53 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index a311cc29afd8..d52e0047ae9d 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -250,8 +250,20 @@ int pblk_setup_rqd(struct pblk *pblk, struct nvm_rq *rqd, gfp_t mem_flags,
     if (!is_vector)
         return 0;
-    rqd->ppa_list = rqd->meta_list + pblk_dma_meta_size;
-    rqd->dma_ppa_list = rqd->dma_meta_list + pblk_dma_meta_size;
+    if (pblk->dma_shared) {
+        rqd->ppa_list = rqd->meta_list + pblk->dma_meta_size;
+        rqd->dma_ppa_list = rqd->dma_meta_list + pblk->dma_meta_size;
+
+        return 0;
+    }
+
+    rqd->ppa_list = nvm_dev_dma_alloc(dev->parent, mem_flags,
+                      &rqd->dma_ppa_list);
+    if (!rqd->ppa_list) {
+        nvm_dev_dma_free(dev->parent, rqd->meta_list,
+                 rqd->dma_meta_list);
+        return -ENOMEM;
+    }
     return 0;
 }
@@ -262,7 +274,11 @@ void pblk_clear_rqd(struct pblk *pblk, struct nvm_rq *rqd)
     if (rqd->meta_list)
         nvm_dev_dma_free(dev->parent, rqd->meta_list,
-                rqd->dma_meta_list);
+                 rqd->dma_meta_list);
+
+    if (!pblk->dma_shared && rqd->ppa_list)
+        nvm_dev_dma_free(dev->parent, rqd->ppa_list,
+                 rqd->dma_ppa_list);
 }
 /* Caller must guarantee that the request is a valid type */
@@ -796,10 +812,12 @@ static int pblk_line_smeta_write(struct pblk *pblk, struct pblk_line *line,
     rqd.is_seq = 1;
     for (i = 0; i < lm->smeta_sec; i++, paddr++) {
-        struct pblk_sec_meta *meta_list = rqd.meta_list;
+        struct pblk_sec_meta *meta;
         rqd.ppa_list[i] = addr_to_gen_ppa(pblk, paddr, line->id);
-        meta_list[i].lba = lba_list[paddr] = addr_empty;
+
+        meta = sec_meta_index(pblk, rqd.meta_list, i);
+        meta->lba = lba_list[paddr] = addr_empty;
     }
     ret = pblk_submit_io_sync_sem(pblk, &rqd);
@@ -845,8 +863,17 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
     if (!meta_list)
         return -ENOMEM;
-    ppa_list = meta_list + pblk_dma_meta_size;
-    dma_ppa_list = dma_meta_list + pblk_dma_meta_size;
+    if (pblk->dma_shared) {
+        ppa_list = meta_list + pblk->dma_meta_size;
+        dma_ppa_list = dma_meta_list + pblk->dma_meta_size;
+    } else {
+        ppa_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL,
+                         &dma_ppa_list);
+        if (!ppa_list) {
+            ret = -ENOMEM;
+            goto free_meta_list;
+        }
+    }
 next_rq:
     memset(&rqd, 0, sizeof(struct nvm_rq));
@@ -858,7 +885,7 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
                     l_mg->emeta_alloc_type, GFP_KERNEL);
     if (IS_ERR(bio)) {
         ret = PTR_ERR(bio);
-        goto free_rqd_dma;
+        goto free_ppa_list;
     }
     bio->bi_iter.bi_sector = 0; /* internal bio */
@@ -884,7 +911,7 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
         if (pblk_boundary_paddr_checks(pblk, paddr)) {
             bio_put(bio);
             ret = -EINTR;
-            goto free_rqd_dma;
+            goto free_ppa_list;
         }
         ppa = addr_to_gen_ppa(pblk, paddr, line_id);
@@ -894,7 +921,7 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
         if (pblk_boundary_paddr_checks(pblk, paddr + min)) {
             bio_put(bio);
             ret = -EINTR;
-            goto free_rqd_dma;
+            goto free_ppa_list;
         }
         for (j = 0; j < min; j++, i++, paddr++)
@@ -905,7 +932,7 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
     if (ret) {
         pblk_err(pblk, "emeta I/O submission failed: %d\n",
                                 ret);
         bio_put(bio);
-        goto free_rqd_dma;
+        goto free_ppa_list;
     }
     atomic_dec(&pblk->inflight_io);
@@ -918,8 +945,11 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
     if (left_ppas)
         goto next_rq;
-free_rqd_dma:
-    nvm_dev_dma_free(dev->parent, rqd.meta_list, rqd.dma_meta_list);
+free_ppa_list:
+    if (!pblk->dma_shared)
+        nvm_dev_dma_free(dev->parent, ppa_list, dma_ppa_list);
+free_meta_list:
+    nvm_dev_dma_free(dev->parent, meta_list, dma_meta_list);
     return ret;
 }
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index a99854439224..57972156c318 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -354,6 +354,20 @@ static int pblk_core_init(struct pblk *pblk)
     struct nvm_geo *geo = &dev->geo;
     int ret, max_write_ppas;
+    if (sizeof(struct pblk_sec_meta) > geo->sos) {
+        pblk_err(pblk, "OOB area too small. Min %lu bytes (%d)\n",
+            (unsigned long)sizeof(struct pblk_sec_meta), geo->sos);
+        return -EINTR;
+    }
+
+    pblk->dma_ppa_size = (sizeof(u64) * NVM_MAX_VLBA);
+    pblk->dma_meta_size = geo->sos * NVM_MAX_VLBA;
+
+    if (pblk->dma_ppa_size + pblk->dma_meta_size > PAGE_SIZE)
+        pblk->dma_shared = false;
+    else
+        pblk->dma_shared = true;
+
     atomic64_set(&pblk->user_wa, 0);
     atomic64_set(&pblk->pad_wa, 0);
     atomic64_set(&pblk->gc_wa, 0);
diff --git a/drivers/lightnvm/pblk-map.c b/drivers/lightnvm/pblk-map.c
index dc0efb852475..55fca16d18e4 100644
--- a/drivers/lightnvm/pblk-map.c
+++ b/drivers/lightnvm/pblk-map.c
@@ -25,6 +25,7 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry,
                   unsigned int valid_secs)
 {
     struct pblk_line *line = pblk_line_get_data(pblk);
+    struct pblk_sec_meta *meta;
     struct pblk_emeta *emeta;
     struct pblk_w_ctx *w_ctx;
     __le64 *lba_list;
@@ -56,6 +57,8 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry,
         /* ppa to be sent to the device */
         ppa_list[i] = addr_to_gen_ppa(pblk, paddr, line->id);
+        meta = sec_meta_index(pblk, meta_list, i);
+
         /* Write context for target bio completion on write buffer. Note
          * that the write buffer is protected by the sync backpointer,
          * and a single writer thread have access to each specific entry
@@ -67,14 +70,14 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry,
             kref_get(&line->ref);
             w_ctx = pblk_rb_w_ctx(&pblk->rwb, sentry + i);
             w_ctx->ppa = ppa_list[i];
-            meta_list[i].lba = cpu_to_le64(w_ctx->lba);
+            meta->lba = cpu_to_le64(w_ctx->lba);
             lba_list[paddr] = cpu_to_le64(w_ctx->lba);
             if (lba_list[paddr] != addr_empty)
                 line->nr_valid_lbas++;
             else
                 atomic64_inc(&pblk->pad_wa);
         } else {
-            lba_list[paddr] = meta_list[i].lba = addr_empty;
+            lba_list[paddr] = meta->lba = addr_empty;
             __pblk_map_invalidate(pblk, line, paddr);
         }
     }
@@ -87,7 +90,7 @@ void pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry,
          unsigned long *lun_bitmap, unsigned int valid_secs,
          unsigned int off)
 {
-    struct pblk_sec_meta *meta_list = rqd->meta_list;
+    struct pblk_sec_meta *meta_list;
     struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd);
     unsigned int map_secs;
     int min = pblk->min_write_pgs;
@@ -95,8 +98,10 @@ void pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry,
     for (i = off; i < rqd->nr_ppas; i += min) {
         map_secs = (i + min > valid_secs) ? (valid_secs % min) : min;
+        meta_list = sec_meta_index(pblk, rqd->meta_list, i);
+
         if (pblk_map_page_data(pblk, sentry + i, &ppa_list[i],
-                    lun_bitmap, &meta_list[i], map_secs)) {
+                    lun_bitmap, meta_list, map_secs)) {
             bio_put(rqd->bio);
             pblk_free_rqd(pblk, rqd, PBLK_WRITE);
             pblk_pipeline_stop(pblk);
@@ -112,8 +117,8 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
     struct nvm_tgt_dev *dev = pblk->dev;
     struct nvm_geo *geo = &dev->geo;
     struct pblk_line_meta *lm = &pblk->lm;
-    struct pblk_sec_meta *meta_list = rqd->meta_list;
     struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd);
+    struct pblk_sec_meta *meta_list;
     struct pblk_line *e_line, *d_line;
     unsigned int map_secs;
     int min = pblk->min_write_pgs;
@@ -121,8 +126,10 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
     for (i = 0; i < rqd->nr_ppas; i += min) {
         map_secs = (i + min > valid_secs) ? (valid_secs % min) : min;
+        meta_list = sec_meta_index(pblk, rqd->meta_list, i);
+
         if (pblk_map_page_data(pblk, sentry + i, &ppa_list[i],
-                    lun_bitmap, &meta_list[i], map_secs)) {
+                    lun_bitmap, meta_list, map_secs)) {
             bio_put(rqd->bio);
             pblk_free_rqd(pblk, rqd, PBLK_WRITE);
             pblk_pipeline_stop(pblk);
diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
index 57d3155ef9a5..12b690e2abd9 100644
--- a/drivers/lightnvm/pblk-read.c
+++ b/drivers/lightnvm/pblk-read.c
@@ -42,7 +42,6 @@ static void pblk_read_ppalist_rq(struct pblk *pblk, struct nvm_rq *rqd,
                  struct bio *bio, sector_t blba,
                  unsigned long *read_bitmap)
 {
-    struct pblk_sec_meta *meta_list = rqd->meta_list;
     struct ppa_addr ppas[NVM_MAX_VLBA];
     int nr_secs = rqd->nr_ppas;
     bool advanced_bio = false;
@@ -51,13 +50,16 @@ static void pblk_read_ppalist_rq(struct pblk *pblk, struct nvm_rq *rqd,
     pblk_lookup_l2p_seq(pblk, ppas, blba, nr_secs);
     for (i = 0; i < nr_secs; i++) {
+        struct pblk_sec_meta *meta;
         struct ppa_addr p = ppas[i];
         sector_t lba = blba + i;
+        meta = sec_meta_index(pblk, rqd->meta_list, i);
 retry:
         if (pblk_ppa_empty(p)) {
             WARN_ON(test_and_set_bit(i, read_bitmap));
-            meta_list[i].lba = cpu_to_le64(ADDR_EMPTY);
+
+            meta->lba = cpu_to_le64(ADDR_EMPTY);
             if (unlikely(!advanced_bio)) {
                 bio_advance(bio, (i) * PBLK_EXPOSED_PAGE_SIZE);
@@ -77,7 +79,7 @@ static void pblk_read_ppalist_rq(struct pblk *pblk, struct nvm_rq *rqd,
             goto retry;
         }
         WARN_ON(test_and_set_bit(i, read_bitmap));
-        meta_list[i].lba = cpu_to_le64(lba);
+        meta->lba = cpu_to_le64(lba);
         advanced_bio = true;
 #ifdef CONFIG_NVM_PBLK_DEBUG
         atomic_long_inc(&pblk->cache_reads);
@@ -104,12 +106,15 @@ static void pblk_read_ppalist_rq(struct pblk *pblk, struct nvm_rq *rqd,
 static void pblk_read_check_seq(struct pblk *pblk, struct nvm_rq *rqd,
                 sector_t blba)
 {
-    struct pblk_sec_meta *meta_lba_list = rqd->meta_list;
     int nr_lbas = rqd->nr_ppas;
     int i;
     for (i = 0; i < nr_lbas; i++) {
-        u64 lba = le64_to_cpu(meta_lba_list[i].lba);
+        struct pblk_sec_meta *meta;
+        u64 lba;
+
+        meta = sec_meta_index(pblk, rqd->meta_list, i);
+        lba = le64_to_cpu(meta->lba);
         if (lba == ADDR_EMPTY)
             continue;
@@ -133,17 +138,18 @@ static void pblk_read_check_seq(struct pblk *pblk, struct nvm_rq *rqd,
 static void pblk_read_check_rand(struct pblk *pblk, struct nvm_rq *rqd,
                  u64 *lba_list, int nr_lbas)
 {
-    struct pblk_sec_meta *meta_lba_list = rqd->meta_list;
     int i, j;
     for (i = 0, j = 0; i < nr_lbas; i++) {
+        struct pblk_sec_meta *meta;
         u64 lba = lba_list[i];
         u64 meta_lba;
         if (lba == ADDR_EMPTY)
             continue;
-        meta_lba = le64_to_cpu(meta_lba_list[j].lba);
+        meta = sec_meta_index(pblk, rqd->meta_list, j);
+        meta_lba = le64_to_cpu(meta->lba);
         if (lba != meta_lba) {
 #ifdef CONFIG_NVM_PBLK_DEBUG
@@ -218,7 +224,7 @@ static void pblk_end_partial_read(struct nvm_rq *rqd)
     struct bio *new_bio = rqd->bio;
     struct bio *bio = pr_ctx->orig_bio;
     struct bio_vec src_bv, dst_bv;
-    struct pblk_sec_meta *meta_list = rqd->meta_list;
+    struct pblk_sec_meta *meta;
     int bio_init_idx = pr_ctx->bio_init_idx;
     unsigned long *read_bitmap = pr_ctx->bitmap;
     int nr_secs = pr_ctx->orig_nr_secs;
@@ -237,12 +243,13 @@ static void pblk_end_partial_read(struct nvm_rq *rqd)
     }
     /* Re-use allocated memory for intermediate lbas */
-    lba_list_mem = (((void *)rqd->ppa_list) + pblk_dma_ppa_size);
-    lba_list_media = (((void *)rqd->ppa_list) + 2 * pblk_dma_ppa_size);
+    lba_list_mem = (((void *)rqd->ppa_list) + pblk->dma_ppa_size);
+    lba_list_media = (((void *)rqd->ppa_list) + 2 * pblk->dma_ppa_size);
     for (i = 0; i < nr_secs; i++) {
-        lba_list_media[i] = meta_list[i].lba;
-        meta_list[i].lba = lba_list_mem[i];
+        meta = sec_meta_index(pblk, rqd->meta_list, i);
+        lba_list_media[i] = meta->lba;
+        meta->lba = lba_list_mem[i];
     }
     /* Fill the holes in the original bio */
@@ -254,7 +261,8 @@ static void pblk_end_partial_read(struct nvm_rq *rqd)
         line = pblk_ppa_to_line(pblk, rqd->ppa_list[i]);
         kref_put(&line->ref, pblk_line_put);
-        meta_list[hole].lba = lba_list_media[i];
+        meta = sec_meta_index(pblk, rqd->meta_list, hole);
+        meta->lba = lba_list_media[i];
         src_bv = new_bio->bi_io_vec[i++];
         dst_bv = bio->bi_io_vec[bio_init_idx + hole];
@@ -290,8 +298,8 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd,
                 unsigned long *read_bitmap,
                 int nr_holes)
 {
-    struct pblk_sec_meta *meta_list = rqd->meta_list;
     struct pblk_g_ctx *r_ctx = nvm_rq_to_pdu(rqd);
+    struct pblk_sec_meta *meta;
     struct pblk_pr_ctx *pr_ctx;
     struct bio *new_bio, *bio = r_ctx->private;
     __le64 *lba_list_mem;
@@ -299,7 +307,7 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd,
     int i;
     /* Re-use allocated memory for intermediate lbas */
-    lba_list_mem = (((void *)rqd->ppa_list) + pblk_dma_ppa_size);
+    lba_list_mem = (((void *)rqd->ppa_list) + pblk->dma_ppa_size);
     new_bio = bio_alloc(GFP_KERNEL, nr_holes);
@@ -315,8 +323,10 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd,
     if (!pr_ctx)
         goto fail_free_pages;
-    for (i = 0; i < nr_secs; i++)
-        lba_list_mem[i] = meta_list[i].lba;
+    for (i = 0; i < nr_secs; i++) {
+        meta = sec_meta_index(pblk, rqd->meta_list, i);
+        lba_list_mem[i] = meta->lba;
+    }
     new_bio->bi_iter.bi_sector = 0; /* internal bio */
     bio_set_op_attrs(new_bio, REQ_OP_READ, 0);
@@ -382,7 +392,7 @@ static int pblk_partial_read_bio(struct pblk *pblk, struct nvm_rq *rqd,
 static void pblk_read_rq(struct pblk *pblk, struct nvm_rq *rqd, struct bio *bio,
              sector_t lba, unsigned long *read_bitmap)
 {
-    struct pblk_sec_meta *meta_list = rqd->meta_list;
+    struct pblk_sec_meta *meta;
     struct ppa_addr ppa;
     pblk_lookup_l2p_seq(pblk, &ppa, lba, 1);
@@ -394,7 +404,10 @@ static void pblk_read_rq(struct pblk *pblk, struct nvm_rq *rqd, struct bio *bio,
 retry:
     if (pblk_ppa_empty(ppa)) {
         WARN_ON(test_and_set_bit(0, read_bitmap));
-        meta_list[0].lba = cpu_to_le64(ADDR_EMPTY);
+
+        meta = sec_meta_index(pblk, rqd->meta_list, 0);
+        meta->lba = cpu_to_le64(ADDR_EMPTY);
+
         return;
     }
@@ -408,7 +421,9 @@ static void pblk_read_rq(struct pblk *pblk, struct nvm_rq *rqd, struct bio *bio,
     }
     WARN_ON(test_and_set_bit(0, read_bitmap));
-    meta_list[0].lba = cpu_to_le64(lba);
+
+    meta = sec_meta_index(pblk, rqd->meta_list, 0);
+    meta->lba = cpu_to_le64(lba);
 #ifdef CONFIG_NVM_PBLK_DEBUG
     atomic_long_inc(&pblk->cache_reads);
diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
index 8114013c37b8..1ce92562603d 100644
--- a/drivers/lightnvm/pblk-recovery.c
+++ b/drivers/lightnvm/pblk-recovery.c
@@ -157,7 +157,7 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line,
 {
     struct nvm_tgt_dev *dev = pblk->dev;
     struct nvm_geo *geo = &dev->geo;
-    struct pblk_sec_meta *meta_list;
+    struct pblk_sec_meta *meta;
     struct pblk_pad_rq *pad_rq;
     struct nvm_rq *rqd;
     struct bio *bio;
@@ -218,8 +218,6 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line,
     rqd->end_io = pblk_end_io_recov;
     rqd->private = pad_rq;
-    meta_list = rqd->meta_list;
-
     for (i = 0; i < rqd->nr_ppas; ) {
         struct ppa_addr ppa;
         int pos;
@@ -241,8 +239,10 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line,
             dev_ppa = addr_to_gen_ppa(pblk, w_ptr, line->id);
             pblk_map_invalidate(pblk, dev_ppa);
-            lba_list[w_ptr] = meta_list[i].lba = addr_empty;
             rqd->ppa_list[i] = dev_ppa;
+
+            meta = sec_meta_index(pblk, rqd->meta_list, i);
+            lba_list[w_ptr] = meta->lba = addr_empty;
         }
     }
@@ -327,7 +327,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
     struct nvm_tgt_dev *dev = pblk->dev;
     struct nvm_geo *geo = &dev->geo;
     struct ppa_addr *ppa_list;
-    struct pblk_sec_meta *meta_list;
+    struct pblk_sec_meta *meta_list, *meta;
     struct nvm_rq *rqd;
     struct bio *bio;
     void *data;
@@ -425,7 +425,10 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
     }
     for (i = 0; i < rqd->nr_ppas; i++) {
-        u64 lba = le64_to_cpu(meta_list[i].lba);
+        u64 lba;
+
+        meta = sec_meta_index(pblk, meta_list, i);
+        lba = le64_to_cpu(meta->lba);
         lba_list[paddr++] = cpu_to_le64(lba);
@@ -464,13 +467,22 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line)
     if (!meta_list)
         return -ENOMEM;
-    ppa_list = (void *)(meta_list) + pblk_dma_meta_size;
-    dma_ppa_list = dma_meta_list + pblk_dma_meta_size;
+    if (pblk->dma_shared) {
+        ppa_list = (void *)(meta_list) + pblk->dma_meta_size;
+        dma_ppa_list = dma_meta_list + pblk->dma_meta_size;
+    } else {
+        ppa_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL,
+                         &dma_ppa_list);
+        if (!ppa_list) {
+            ret = -ENOMEM;
+            goto free_meta_list;
+        }
+    }
     data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL);
     if (!data) {
         ret = -ENOMEM;
-        goto free_meta_list;
+        goto free_ppa_list;
     }
     rqd = mempool_alloc(&pblk->r_rq_pool, GFP_KERNEL);
@@ -495,9 +507,11 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line)
 out:
     mempool_free(rqd, &pblk->r_rq_pool);
     kfree(data);
+free_ppa_list:
+    if (!pblk->dma_shared)
+        nvm_dev_dma_free(dev->parent, ppa_list, dma_ppa_list);
 free_meta_list:
     nvm_dev_dma_free(dev->parent, meta_list, dma_meta_list);
-
     return ret;
 }
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index 22cc9bfbbb10..4526fee206d9 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -86,7 +86,6 @@ enum {
 };
 struct pblk_sec_meta {
-    u64 reserved;
     __le64 lba;
 };
@@ -103,9 +102,6 @@ enum {
     PBLK_RL_LOW = 4
 };
-#define pblk_dma_meta_size (sizeof(struct pblk_sec_meta) * NVM_MAX_VLBA)
-#define pblk_dma_ppa_size (sizeof(u64) * NVM_MAX_VLBA)
-
 /* write buffer completion context */
 struct pblk_c_ctx {
     struct list_head list;      /* Head for out-of-order completion */
@@ -637,6 +633,10 @@ struct pblk {
     int sec_per_write;
+    int dma_meta_size;
+    int dma_ppa_size;
+    bool dma_shared;
+
     unsigned char instance_uuid[16];
     /* Persistent write amplification counters, 4kb sector I/Os */
@@ -985,6 +985,16 @@ static inline void *emeta_to_vsc(struct pblk *pblk, struct line_emeta *emeta)
     return (emeta_to_lbas(pblk, emeta) + pblk->lm.emeta_len[2]);
 }
+static inline struct pblk_sec_meta *sec_meta_index(struct pblk *pblk,
+                           struct pblk_sec_meta *meta,
+                           int index)
+{
+    struct nvm_tgt_dev *dev = pblk->dev;
+    struct nvm_geo *geo = &dev->geo;
+
+    return ((void *)meta + index * geo->sos);
+}
+
 static inline int pblk_line_vsc(struct pblk_line *line)
 {
     return le32_to_cpu(*line->vsc);
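To make the indexing scheme used throughout this patch concrete, here is a
small stand-alone C demonstration of stride-based access to per-sector OOB
metadata: entry i sits geo->sos bytes after entry i-1, so it is located by
byte stride rather than by array indexing on a fixed-size struct. sec_meta_at()
plays the role of sec_meta_index() above; all other names and values are
invented for the example.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct sec_meta {                       /* first 8 bytes of each OOB entry */
    uint64_t lba;
};

/* Return a pointer to the i-th per-sector metadata entry in a buffer whose
 * entries are 'sos' bytes apart (the device-reported OOB size). */
static struct sec_meta *sec_meta_at(void *meta_buf, unsigned int sos, int i)
{
    return (struct sec_meta *)((char *)meta_buf + (size_t)i * sos);
}

int main(void)
{
    unsigned int sos = 16;              /* e.g. a 16-byte OOB area per sector */
    int nsec = 4;
    void *buf = calloc(nsec, sos);

    if (!buf)
        return 1;

    for (int i = 0; i < nsec; i++)
        sec_meta_at(buf, sos, i)->lba = 1000 + i;

    for (int i = 0; i < nsec; i++)
        printf("sector %d -> lba %llu\n", i,
               (unsigned long long)sec_meta_at(buf, sos, i)->lba);

    free(buf);
    return 0;
}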