From patchwork Wed Sep 4 05:08:18 2019
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11129333
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: dan.j.williams@intel.com
Subject: [PATCH v7 3/7] libnvdimm/pfn_dev: Add a build check to make sure we notice when struct page size change
Date: Wed, 4 Sep 2019 10:38:18 +0530
Message-Id: <20190904050822.23139-4-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190904050822.23139-1-aneesh.kumar@linux.ibm.com>
References: <20190904050822.23139-1-aneesh.kumar@linux.ibm.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>, linux-nvdimm@lists.01.org

Namespaces created in PFN_MODE_PMEM mode store struct page in the
reserve block area, so we need to account for the correct struct page
size when sizing that area. Instead of depending directly on
sizeof(struct page), which changes with different kernel config
options, use the maximum struct page size (64 bytes) when calculating
the reserve block area. This ensures a pmem device can be used across
kernels built with different configs. If this assumed maximum ever
changes, the reserve block allocation for newly created namespaces
will need to be updated accordingly.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 drivers/nvdimm/nd.h       |  4 ++++
 drivers/nvdimm/pfn_devs.c | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 1b9955651379..e89af4b2d8e9 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -375,6 +375,10 @@ unsigned int pmem_sector_size(struct nd_namespace_common *ndns);
 void nvdimm_badblocks_populate(struct nd_region *nd_region,
 		struct badblocks *bb, const struct resource *res);
 #if IS_ENABLED(CONFIG_ND_CLAIM)
+
+/* max struct page size independent of kernel config */
+#define MAX_STRUCT_PAGE_SIZE 64
+
 int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap);
 int devm_nsio_enable(struct device *dev, struct nd_namespace_io *nsio);
 void devm_nsio_disable(struct device *dev, struct nd_namespace_io *nsio);
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index 3e7b11cf1aae..cd120feb9213 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -701,8 +701,16 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn)
 		 * The altmap should be padded out to the block size used
 		 * when populating the vmemmap. This *should* be equal to
 		 * PMD_SIZE for most architectures.
+		 *
+		 * Also make sure size of struct page is less than 64. We
+		 * want to make sure we use large enough size here so that
+		 * we don't have a dynamic reserve space depending on
+		 * struct page size. But we also want to make sure we notice
+		 * when we end up adding new elements to struct page.
 		 */
-		offset = ALIGN(start + SZ_8K + 64 * npfns, align) - start;
+		BUILD_BUG_ON(sizeof(struct page) > MAX_STRUCT_PAGE_SIZE);
+		offset = ALIGN(start + SZ_8K + MAX_STRUCT_PAGE_SIZE * npfns, align)
+			- start;
 	} else if (nd_pfn->mode == PFN_MODE_RAM)
 		offset = ALIGN(start + SZ_8K, align) - start;
 	else
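
For reference, here is a minimal standalone userspace sketch (not part
of the patch) of the reserve-area calculation the hunk above changes.
It assumes a 16G namespace of 4K pages and a 2M (PMD_SIZE) alignment;
align_up() mirrors the kernel's ALIGN() macro, and all concrete values
are illustrative only:

#include <stdio.h>
#include <stdint.h>

#define SZ_8K			(8ULL * 1024)
#define SZ_2M			(2ULL * 1024 * 1024)	/* PMD_SIZE on x86-64 */
#define MAX_STRUCT_PAGE_SIZE	64			/* value from the patch */

/* Round x up to the next multiple of a (a power of two), like ALIGN(). */
static uint64_t align_up(uint64_t x, uint64_t a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	uint64_t start = 0;			/* namespace base, assumed 2M aligned */
	uint64_t npfns = 4ULL * 1024 * 1024;	/* 16G namespace => 4M 4K pages */
	uint64_t per_page;

	/*
	 * sizeof(struct page) is 56 or 64 bytes depending on kernel
	 * config; compare the reserve space each size would need.
	 */
	for (per_page = 56; per_page <= MAX_STRUCT_PAGE_SIZE; per_page += 8) {
		uint64_t offset = align_up(start + SZ_8K + per_page * npfns,
					   SZ_2M) - start;
		printf("per-page %llu bytes -> reserve %llu MiB\n",
		       (unsigned long long)per_page,
		       (unsigned long long)(offset >> 20));
	}
	return 0;
}

With these illustrative numbers a 56-byte struct page needs a ~226 MiB
reserve while a 64-byte one needs ~258 MiB, so a namespace sized from
sizeof(struct page) on one kernel could be too small on another. Always
reserving MAX_STRUCT_PAGE_SIZE bytes per pfn keeps the on-media layout
config independent, and the BUILD_BUG_ON makes a future growth of
struct page beyond 64 bytes a compile-time failure.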