From patchwork Fri Jun  5 21:19:06 2015
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 6557971
Subject: [PATCH v4 1/9] introduce __pfn_t for scatterlists and pmem
From: Dan Williams
To: linux-kernel@vger.kernel.org
Date: Fri, 05 Jun 2015 17:19:06 -0400
Message-ID: <20150605211906.20751.59875.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <20150605205052.20751.77149.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20150605205052.20751.77149.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-8-g92dd
Cc: axboe@kernel.dk, akpm@linux-foundation.org, arnd@arndb.de,
 linux-nvdimm@lists.01.org, benh@kernel.crashing.org, hpa@zytor.com,
 david@fromorbit.com, heiko.carstens@de.ibm.com, mingo@kernel.org,
 schwidefsky@de.ibm.com, paulus@samba.org, linux-arch@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, tj@kernel.org, torvalds@linux-foundation.org,
 hch@lst.de

Introduce a type that encapsulates a page-frame-number that can also be
used to encode other information.  This other information is the
traditional "page_link" encoding in a scatterlist, but can also denote
"device memory": a set of pfns that are not part of the kernel's linear
mapping, but are accessed via the same memory controller as RAM.

The motivation for this conversion is large-capacity persistent memory,
which by default does not enjoy 'struct page' coverage (entries in
memmap).

This type will be used to replace usage of 'struct page *' in cases
where only a pfn is required, i.e. scatterlists for drivers, the dma
mapping api, and later biovecs for the block layer.  The operations in
those i/o paths that formerly required a 'struct page *' are converted
to use __pfn_t aware equivalent helpers.  The goal is that existing use
cases of data structures referencing struct page are binary identical to
the __pfn_t converted type.
Only new use cases for pmem will require the new __pfn_t helper
routines, i.e. __pfn_t is a 'struct page' alias in the absence of pmem.

It turns out that while 'struct page' references are used broadly in the
kernel I/O stacks, the usage of 'struct page' based capabilities is very
shallow for block-i/o.  It is only used for populating bio_vecs and
scatterlists for the retrieval of dma addresses, and for temporary
kernel mappings (kmap).  Aside from kmap, these usages can be trivially
converted to operate on a pfn.

kmap_atomic() is more problematic as it uses mm infrastructure, via
struct page, to set up and track temporary kernel mappings.  It would be
unfortunate if the kmap infrastructure escaped its 32-bit/HIGHMEM bonds
and leaked into 64-bit code.  Thankfully, it seems all that is needed
here is to convert kmap_atomic() callers that want to opt in to
supporting persistent memory to use a new kmap_atomic_pfn_t(), which
re-uses the existing ioremap() mapping established by the driver for
persistent memory.

Note that, as far as conceptually understanding __pfn_t is concerned,
'persistent memory' is really any address range in host memory not
covered by memmap.  Contrast this with pure iomem, which is on an mmio
mapped bus like PCI and cannot be converted to a dma_addr_t by
"pfn << PAGE_SHIFT".

Cc: H. Peter Anvin
Cc: Jens Axboe
Cc: Tejun Heo
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Linus Torvalds
Signed-off-by: Dan Williams
---
 include/asm-generic/memory_model.h |    1 
 include/asm-generic/pfn.h          |   86 ++++++++++++++++++++++++++++++++++++
 include/linux/mm.h                 |    1 
 init/Kconfig                       |   13 +++++
 4 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 include/asm-generic/pfn.h

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index 14909b0b9cae..1b0ae21fd8ff 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -70,7 +70,6 @@
 #endif /* CONFIG_FLATMEM/DISCONTIGMEM/SPARSEMEM */
 
 #define page_to_pfn __page_to_pfn
-#define pfn_to_page __pfn_to_page
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/include/asm-generic/pfn.h b/include/asm-generic/pfn.h
new file mode 100644
index 000000000000..2f4ae40dc6a7
--- /dev/null
+++ b/include/asm-generic/pfn.h
@@ -0,0 +1,86 @@
+#ifndef __ASM_PFN_H
+#define __ASM_PFN_H
+
+/*
+ * Default pfn to physical address conversion, like most arch
+ * page_to_phys() implementations this resolves to a dma_addr_t as it
+ * should be the size needed for a device to reference this address.
+ */
+#ifndef __pfn_to_phys
+#define __pfn_to_phys(pfn) ((dma_addr_t)(pfn) << PAGE_SHIFT)
+#endif
+
+static inline struct page *pfn_to_page(unsigned long pfn)
+{
+	return __pfn_to_page(pfn);
+}
+
+/*
+ * __pfn_t: encapsulates a page-frame number that is optionally backed
+ * by memmap (struct page).  This type will be used in place of a
+ * 'struct page *' instance in contexts where unmapped memory (usually
+ * persistent memory) is being referenced (scatterlists for drivers,
+ * biovecs for the block layer, etc).  Whether a __pfn_t has a struct
+ * page backing is indicated by flags in the low bits of @data.
+ */
+typedef struct {
+	union {
+		unsigned long data;
+		struct page *page;
+	};
+} __pfn_t;
+
+enum {
+#if BITS_PER_LONG == 64
+	PFN_SHIFT = 3,
+	/* device-pfn not covered by memmap */
+	PFN_DEV = (1UL << 2),
+#else
+	PFN_SHIFT = 2,
+#endif
+	PFN_MASK = (1UL << PFN_SHIFT) - 1,
+	PFN_SG_CHAIN = (1UL << 0),
+	PFN_SG_LAST = (1UL << 1),
+};
+
+#ifdef CONFIG_DEV_PFN
+static inline bool __pfn_t_has_page(__pfn_t pfn)
+{
+	return (pfn.data & PFN_MASK) == 0;
+}
+
+#else
+static inline bool __pfn_t_has_page(__pfn_t pfn)
+{
+	return true;
+}
+#endif
+
+static inline struct page *__pfn_t_to_page(__pfn_t pfn)
+{
+	if (!__pfn_t_has_page(pfn))
+		return NULL;
+	return pfn.page;
+}
+
+static inline unsigned long __pfn_t_to_pfn(__pfn_t pfn)
+{
+	if (__pfn_t_has_page(pfn))
+		return page_to_pfn(pfn.page);
+	return pfn.data >> PFN_SHIFT;
+}
+
+static inline dma_addr_t __pfn_t_to_phys(__pfn_t pfn)
+{
+	if (!__pfn_t_has_page(pfn))
+		return __pfn_to_phys(__pfn_t_to_pfn(pfn));
+	return __pfn_to_phys(page_to_pfn(pfn.page));
+}
+
+static inline __pfn_t page_to_pfn_t(struct page *page)
+{
+	__pfn_t pfn = { .page = page };
+
+	return pfn;
+}
+#endif /* __ASM_PFN_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4024543b4203..ae6f9965f3dd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -53,6 +53,7 @@ extern int sysctl_legacy_va_layout;
 #include
 #include
 #include
+#include
 
 #ifndef __pa_symbol
 #define __pa_symbol(x)  __pa(RELOC_HIDE((unsigned long)(x), 0))
diff --git a/init/Kconfig b/init/Kconfig
index d4f763332f9f..907ab91a5557 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1769,6 +1769,19 @@ config PROFILING
 	  Say Y here to enable the extended profiling support mechanisms used
 	  by profilers such as OProfile.
 
+config DEV_PFN
+	depends on 64BIT
+	bool "Support for device provided (pmem, graphics, etc) memory" if EXPERT
+	help
+	  Say Y here to enable I/O to/from device provided memory,
+	  i.e. reference memory that is not mapped.  This is usually
+	  the case if you have large quantities of persistent memory
+	  relative to DRAM.  Enabling this option may increase the
+	  kernel size by a few kilobytes as it instructs the kernel
+	  that a __pfn_t may reference unmapped memory.  Disabling
+	  this option instructs the kernel that a __pfn_t always
+	  references mapped platform memory.
+
 #
 # Place an empty function call at each tracepoint site.  Can be
 # dynamically changed for a probe function.