From patchwork Wed Oct 10 23:39:35 2018
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 10635447
Subject: [nvdimm PATCH 6/6] nvdimm: Use namespace index data to reduce number of label reads needed
From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: dan.j.williams@intel.com, linux-nvdimm@lists.01.org
Cc: alexander.h.duyck@linux.intel.com, zwisler@kernel.org
Date: Wed, 10 Oct 2018 16:39:35 -0700
Message-ID: <20181010233926.12228.65663.stgit@localhost.localdomain>
In-Reply-To: <20181010233428.12228.26106.stgit@localhost.localdomain>
References: <20181010233428.12228.26106.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty
List-Id: "Linux-nvdimm developer list."

This patch adds logic that uses the namespace index data to reduce the
number of reads needed to initialize a given namespace. The general idea
is that once we have read enough data to validate the namespace index,
we do so, and then fetch only those labels that are not listed as "free".

By doing this I am seeing a total time reduction from about 4-5 seconds
to 2-3 seconds for 24 NVDIMM modules, each with 128K of label config area.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/nvdimm/dimm.c  |  4 --
 drivers/nvdimm/label.c | 93 +++++++++++++++++++++++++++++++++++++++++++++---
 drivers/nvdimm/label.h |  3 --
 3 files changed, 88 insertions(+), 12 deletions(-)

diff --git a/drivers/nvdimm/dimm.c b/drivers/nvdimm/dimm.c
index 07bf96948553..9899c97138a3 100644
--- a/drivers/nvdimm/dimm.c
+++ b/drivers/nvdimm/dimm.c
@@ -84,10 +84,6 @@ static int nvdimm_probe(struct device *dev)
         dev_dbg(dev, "config data size: %d\n", ndd->nsarea.config_size);
 
         nvdimm_bus_lock(dev);
-        ndd->ns_current = nd_label_validate(ndd);
-        ndd->ns_next = nd_label_next_nsindex(ndd->ns_current);
-        nd_label_copy(ndd, to_next_namespace_index(ndd),
-                        to_current_namespace_index(ndd));
         if (ndd->ns_current >= 0) {
                 rc = nd_label_reserve_dpa(ndd);
                 if (rc == 0)
diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
index 563f24af01b5..7f03d117824f 100644
--- a/drivers/nvdimm/label.c
+++ b/drivers/nvdimm/label.c
@@ -235,7 +235,7 @@ static int __nd_label_validate(struct nvdimm_drvdata *ndd)
         return -1;
 }
 
-int nd_label_validate(struct nvdimm_drvdata *ndd)
+static int nd_label_validate(struct nvdimm_drvdata *ndd)
 {
         /*
          * In order to probe for and validate namespace index blocks we
@@ -258,8 +258,9 @@ int nd_label_validate(struct nvdimm_drvdata *ndd)
         return -1;
 }
 
-void nd_label_copy(struct nvdimm_drvdata *ndd, struct nd_namespace_index *dst,
-                struct nd_namespace_index *src)
+static void nd_label_copy(struct nvdimm_drvdata *ndd,
+                          struct nd_namespace_index *dst,
+                          struct nd_namespace_index *src)
 {
         /* just exit if either destination or source is NULL */
         if (!dst || !src)
@@ -419,7 +420,9 @@ int nd_label_reserve_dpa(struct nvdimm_drvdata *ndd)
 
 int nd_label_data_init(struct nvdimm_drvdata *ndd)
 {
-        size_t config_size, read_size;
+        size_t config_size, read_size, max_xfer, offset;
+        struct nd_namespace_index *nsindex;
+        unsigned int i;
         int rc = 0;
 
         if (ndd->data)
@@ -452,7 +455,87 @@ int nd_label_data_init(struct nvdimm_drvdata *ndd)
         if (!ndd->data)
                 return -ENOMEM;
 
-        return nvdimm_get_config_data(ndd, ndd->data, 0, config_size);
+        /*
+         * We want to guarantee as few reads as possible while conserving
+         * memory. To do that we figure out how much unused space will be left
+         * in the last read, divide that by the total number of reads it is
+         * going to take given our maximum transfer size, and then reduce our
+         * maximum transfer size based on that result.
+         */
+        max_xfer = min_t(size_t, ndd->nsarea.max_xfer, config_size);
+        if (read_size < max_xfer) {
+                /* trim waste */
+                max_xfer -= ((max_xfer - 1) - (config_size - 1) % max_xfer) /
+                            DIV_ROUND_UP(config_size, max_xfer);
+                /* make certain we read indexes in exactly 1 read */
+                if (max_xfer < read_size)
+                        max_xfer = read_size;
+        }
+
+        /* Make our initial read size a multiple of max_xfer size */
+        read_size = min(DIV_ROUND_UP(read_size, max_xfer) * max_xfer,
+                        config_size);
+
+        /* Read the index data */
+        rc = nvdimm_get_config_data(ndd, ndd->data, 0, read_size);
+        if (rc)
+                goto out_err;
+
+        /* Validate index data, if not valid assume all labels are invalid */
+        ndd->ns_current = nd_label_validate(ndd);
+        if (ndd->ns_current < 0)
+                return 0;
+
+        /* Record our index values */
+        ndd->ns_next = nd_label_next_nsindex(ndd->ns_current);
+
+        /* Copy "current" index on top of the "next" index */
+        nsindex = to_current_namespace_index(ndd);
+        nd_label_copy(ndd, to_next_namespace_index(ndd), nsindex);
+
+        /* Determine starting offset for label data */
+        offset = __le64_to_cpu(nsindex->labeloff);
+
+        /* Loop through the free list pulling in any active labels */
+        for (i = 0; i < nsindex->nslot; i++, offset += ndd->nslabel_size) {
+                size_t label_read_size;
+
+                /* zero out the unused labels */
+                if (test_bit_le(i, nsindex->free)) {
+                        memset(ndd->data + offset, 0, ndd->nslabel_size);
+                        continue;
+                }
+
+                /* if we already read past here then just continue */
+                if (offset + ndd->nslabel_size <= read_size)
+                        continue;
+
+                /* if we haven't read in a while reset our read_size offset */
+                if (read_size < offset)
+                        read_size = offset;
+
+                /* determine how much more will be read after this next call. */
+                label_read_size = offset + ndd->nslabel_size - read_size;
+                label_read_size = DIV_ROUND_UP(label_read_size, max_xfer) *
+                                  max_xfer;
+
+                /* truncate last read if needed */
+                if (read_size + label_read_size > config_size)
+                        label_read_size = config_size - read_size;
+
+                /* Read the label data */
+                rc = nvdimm_get_config_data(ndd, ndd->data + read_size,
+                                            read_size, label_read_size);
+                if (rc)
+                        goto out_err;
+
+                /* push read_size to next read offset */
+                read_size += label_read_size;
+        }
+
+        dev_dbg(ndd->dev, "len: %zu rc: %d\n", offset, rc);
+out_err:
+        return rc;
 }
 
 int nd_label_active_count(struct nvdimm_drvdata *ndd)
diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h
index 685afb3de0fe..e9a2ad3c2150 100644
--- a/drivers/nvdimm/label.h
+++ b/drivers/nvdimm/label.h
@@ -138,9 +138,6 @@ static inline int nd_label_next_nsindex(int index)
 }
 
 struct nvdimm_drvdata;
-int nd_label_validate(struct nvdimm_drvdata *ndd);
-void nd_label_copy(struct nvdimm_drvdata *ndd, struct nd_namespace_index *dst,
-                struct nd_namespace_index *src);
 int nd_label_data_init(struct nvdimm_drvdata *ndd);
 size_t sizeof_namespace_index(struct nvdimm_drvdata *ndd);
 int nd_label_active_count(struct nvdimm_drvdata *ndd);
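
A note for reviewers on the "trim waste" step above: the code below is a
minimal, standalone user-space sketch of that arithmetic, not part of the
patch. The 128K config area matches the figure in the commit message; the
48K max transfer size and the 512-byte index read size are assumed example
values chosen only to make the trimming visible.

/*
 * Standalone sketch (not part of the patch) of the "trim waste" arithmetic
 * in nd_label_data_init().  config_size comes from the commit message;
 * max_xfer and read_size are assumed example values.
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        size_t config_size = 128 * 1024; /* label config area (commit message) */
        size_t read_size = 512;          /* assumed size of the index-block read */
        size_t max_xfer = 48 * 1024;     /* assumed DIMM max transfer size */

        if (max_xfer > config_size)
                max_xfer = config_size;

        printf("before: %zu reads, last read uses %zu of %zu bytes\n",
               DIV_ROUND_UP(config_size, max_xfer),
               config_size - (DIV_ROUND_UP(config_size, max_xfer) - 1) * max_xfer,
               max_xfer);

        if (read_size < max_xfer) {
                /* spread the waste of the last read across all of the reads */
                max_xfer -= ((max_xfer - 1) - (config_size - 1) % max_xfer) /
                            DIV_ROUND_UP(config_size, max_xfer);
                /* make certain the index data still fits in a single read */
                if (max_xfer < read_size)
                        max_xfer = read_size;
        }

        printf("after:  %zu reads, last read uses %zu of %zu bytes\n",
               DIV_ROUND_UP(config_size, max_xfer),
               config_size - (DIV_ROUND_UP(config_size, max_xfer) - 1) * max_xfer,
               max_xfer);

        return 0;
}

With these example inputs the transfer size is trimmed from 49152 to 43691
bytes, so the config area is still covered in three reads but the last read
goes from roughly half empty to all but one byte full, which is the effect
the comment in the hunk above describes.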